Beatameister

From CCRMA Wiki
Revision as of 14:04, 14 November 2009 by Gankit (Talk | contribs) (Created page with '==Idea/Premise== Everyone in this world (or mostly everyone :P) has a sense of rhythm in the music they listen to. This is the rhythm they dance to; this is the rhythm they sing…')


Idea/Premise

Everyone in this world (or mostly everyone :P) has a sense of rhythm in the music they listen to. This is the rhythm they dance to; this is the rhythm they sing out while humming a song. What if users could vocally express a rhythm and have it converted into the real beats used in songs? This would let a novice with little musical knowledge record the beats they are humming. Moreover, it would let users add beats to their favorite songs, getting them closer to being a remix artist than ever before.

Motivation

People have made fun of us (on multiple occasions) while we were completely immersed in singing out loud to a favorite song that came on the radio. Channeling that frustration into a Music 256a project, we realized that what we cared about most (apart from getting our dignity back) was the actual sense of rhythm that vibed with us, which, given our poor knowledge of music (and our friends'!), could not really be translated into standard musical notes or beat patterns.

The Thing

The product is software that lets a user record a piece of "sung-out" beats, converts them into "real" beats, and adds them to a song of their choice.

Design

From the user's perspective, the interface will let them choose a song to play and record their vocal beats until they choose to stop. The recording is then converted into real beats and mixed into the song.

Testing

This product can be tested by picking a song with known beats and recording people replicating the same beat pattern in their own voice. We can then measure the similarity between the generated beat and the original beat.
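The similarity measurement above could be sketched as an onset-matching F-score. The representation (beats as lists of onset times in seconds) and the tolerance window are our assumptions for illustration, not part of the proposal:

```python
def beat_f_measure(reference, generated, tolerance=0.07):
    """Compare two onset-time lists with a +/- tolerance window.

    Returns an F-measure in [0, 1]: 1.0 means every generated onset
    matched a distinct reference onset and vice versa.
    """
    if not reference or not generated:
        return 0.0
    matched = 0
    used = set()  # reference onsets already claimed by a match
    for g in generated:
        for i, r in enumerate(reference):
            if i not in used and abs(g - r) <= tolerance:
                matched += 1
                used.add(i)
                break
    precision = matched / len(generated)
    recall = matched / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: the generated beats track the reference closely,
# except for one spurious extra hit at 1.75 s.
ref = [0.0, 0.5, 1.0, 1.5]
gen = [0.02, 0.51, 0.98, 1.49, 1.75]
print(round(beat_f_measure(ref, gen), 3))  # prints 0.889
```

A score near 1.0 would indicate the converted beats closely replicate the known pattern.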

Team

Team of Two

1. Rohan Jain (rohanj@stanford.edu)

2. Ankit Gupta (gankit@stanford.edu)

Milestones

Nov 9: Detect the general beat pattern and tempo from the user sample

Nov 16: Map different styles of notes in the user sample to different percussion instruments

Nov 21: Figure out how to choose a "start" and an "end" for a vocal beat.

Nov 30: Make UI/Visualization

Dec 4: Polish UI and general project.

Dec 7: * Automatically correct user generated beat pattern to remove minor defects (such as off-beat/noise).


* Ambitious milestone. Might be a stretch!
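The Dec 7 defect-removal milestone could start as simple grid quantization: once the tempo is known (the Nov 9 milestone), off-beat onsets snap to the nearest subdivision of the beat. The function name and parameters here are our own sketch, not the project's actual design:

```python
def quantize_onsets(onsets, bpm, subdivisions=2):
    """Snap each onset time (in seconds) to the nearest grid point.

    Grid spacing is one beat (60/bpm seconds) divided by
    `subdivisions`; e.g. an eighth-note grid when subdivisions=2
    at a quarter-note tempo.
    """
    step = 60.0 / bpm / subdivisions
    return [round(t / step) * step for t in onsets]

# Example at 120 BPM: the eighth-note grid spacing is 0.25 s, so
# slightly off-beat vocal onsets snap back onto the grid.
print(quantize_onsets([0.02, 0.27, 0.49, 0.76], bpm=120))
# prints [0.0, 0.25, 0.5, 0.75]
```

Noise rejection (the "off-beat/noise" part of the milestone) would need more than this, e.g. dropping onsets whose distance to the grid exceeds a threshold.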