Ravi

From CCRMA Wiki
 
Revision as of 23:54, 8 November 2009

== Brief ==

This is the wiki page for Ravi Parikh and Keegan Poppen's Music 256A final project, Fall 09-10.


== Introduction ==

We wish to extend assignment 2 in order to create MIDI-controlled vocoder/harmonizer/pitch-correction software. The user will be able to play MIDI notes and sing into a mic simultaneously, and the output will be audio that is either pitch-corrected or vocoded to the MIDI notes being played, depending on the mode. There will be a GUI to control parameters.

== Motivation ==

Neither of us is a very good singer, so in raw form our voices are one instrument we can't use in compositions. Software already exists that vocodes and auto-tunes voices, but we want a deeper understanding of how this software works at the lowest level. That way we'll have as much control as possible over how our voices are processed. Our goal is not to create an Antares clone; rather, we want to cultivate our own sound and use it in future musical creations.


== Software Architecture ==

Fundamentally, this begins as an extension of assignment 2, on top of which we will implement a vocoder track type and a harmonizer/pitch-correction track type. Both of these track types take two inputs simultaneously (MIDI and audio). We will remove the multi-track functionality, since this is an instrument and not a DAW, and then put a basic GUI on top for real-time parameter control.
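The two-simultaneous-inputs idea can be sketched as a single render call that receives both a block of mic samples and the MIDI notes currently held. This is only an illustrative shape, not the project's actual code; all names (`MidiNote`, `renderBlock`, `midiToHz`) are hypothetical, and the "processing" is a trivial stand-in for the real vocoding/pitch-correction DSP.

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: both inputs arrive together in one render call.
struct MidiNote {
    int pitch;      // MIDI note number, 0-127
    float velocity; // 0.0-1.0
};

// Convert a MIDI note number to frequency in Hz (A440 tuning).
float midiToHz(int note) {
    return 440.0f * std::pow(2.0f, (note - 69) / 12.0f);
}

// Placeholder processing: scale the mic input by the velocity of the
// first held note. A real track would vocode or pitch-correct the
// voice toward midiToHz(note.pitch) instead.
void renderBlock(const std::vector<float>& micIn,
                 const std::vector<MidiNote>& heldNotes,
                 std::vector<float>& out) {
    float gain = heldNotes.empty() ? 0.0f : heldNotes.front().velocity;
    out.resize(micIn.size());
    for (size_t i = 0; i < micIn.size(); ++i)
        out[i] = micIn[i] * gain;
}
```

The key design point is that the audio callback pulls from both input streams at once, so MIDI state and voice samples stay synchronized per block.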

We will then have an extensible architecture on top of which we can place any sort of voice/MIDI track. The ideal scenario is to have a generic "track" type and subclass it with specific track types such as "harmonizer" or "vocoder." Depending on which track type is selected, a different GUI should appear to let the user adjust that track's parameters. We will experiment with different pitch detection, pitch correction, and vocoding algorithms to allow for maximum flexibility and creative control over the audio output.
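The generic-track-plus-subclasses plan might look roughly like the sketch below. Class names, the `process` signature, and the parameter map the GUI reads are all assumptions for illustration; the actual DSP inside each subclass is omitted.

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch of the generic "track" type described above.
class Track {
public:
    virtual ~Track() = default;
    virtual std::string name() const = 0;
    // Both inputs arrive together: a block of mic samples (processed
    // in place) and the MIDI note numbers currently held.
    virtual void process(std::vector<float>& audio,
                        const std::vector<int>& midiNotes) = 0;
    // The GUI queries this map to build the right control panel
    // for whichever track type is selected.
    std::map<std::string, float> params;
};

class VocoderTrack : public Track {
public:
    VocoderTrack() { params = {{"bands", 16.0f}, {"dryWet", 1.0f}}; }
    std::string name() const override { return "vocoder"; }
    void process(std::vector<float>& audio,
                 const std::vector<int>& midiNotes) override {
        // Real version: filter-bank analysis of the voice, resynthesis
        // on oscillators at the held MIDI pitches. Omitted here; with
        // no carrier notes a vocoder produces silence.
        if (midiNotes.empty())
            for (float& s : audio) s = 0.0f;
    }
};

class HarmonizerTrack : public Track {
public:
    HarmonizerTrack() { params = {{"correctionSpeed", 0.5f}}; }
    std::string name() const override { return "harmonizer"; }
    void process(std::vector<float>&, const std::vector<int>&) override {
        // Real version: pitch detection, then pitch shift toward the
        // nearest held MIDI note. Omitted here.
    }
};
```

With this shape, the app holds a `std::unique_ptr<Track>` for the active track, swaps the concrete subclass when the user changes modes, and rebuilds the parameter GUI from `params`.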