Professor: Chris Chafe (cc@ccrma.stanford.edu)
TA: Sook Young Won (sywon@ccrma.stanford.edu)
Office hours by appointment
Class meetings: Tuesday / Thursday (not every week), 10:00-11:50am [Listening Room @ the Knoll]
Final Presentation: June 6th, 2006
This project will stage a networked performance (a piano duo of Prof. Elaine Chew and MA/MST student ChangHyun Kim, I hope) between USC's MuCoaCo lab and Stanford CCRMA, using a real-time audio and video streaming program.
I would like to develop a falsetto detection algorithm using Gaussian Mixture Models. This detection algorithm will later be used in intelligent audio effects for live vocal processing. These effects will process the input signal only when the vocalist is singing in falsetto. They will therefore rely on an accurate falsetto detection algorithm in order to activate the signal processing engine.
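As a sketch of the detection idea (not the project's implementation): train one Gaussian mixture on frames labeled modal and one on frames labeled falsetto, then classify each incoming frame by log-likelihood. The 2-D features, cluster positions, and scikit-learn usage below are illustrative assumptions.

```python
# Sketch of GMM-based falsetto detection; features and data are synthetic
# stand-ins, not real vocal measurements.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 2-D frame features (e.g. spectral centroid, harmonic ratio);
# a real system would extract these from windowed audio frames.
modal_feats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
falsetto_feats = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))

# One mixture model per vocal mode.
gmm_modal = GaussianMixture(n_components=2, random_state=0).fit(modal_feats)
gmm_falsetto = GaussianMixture(n_components=2, random_state=0).fit(falsetto_feats)

def is_falsetto(frame):
    """Label a frame by whichever model assigns higher log-likelihood."""
    f = np.atleast_2d(frame)
    return gmm_falsetto.score(f) > gmm_modal.score(f)
```

In a live effect, `is_falsetto` would gate the processing engine frame by frame; hysteresis or median smoothing over a few frames would avoid rapid toggling.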
This project applies spectral analysis to guitar chord recognition. Each chord played on the guitar is represented by a unique set of spectral components, so a spectral-analysis algorithm can be developed to identify which chord was played.
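One way such an algorithm could work, sketched under simplifying assumptions (pure-tone chords, hand-picked three-note templates; a real guitar signal has rich harmonics): locate the strongest spectral peaks and match them against per-chord frequency templates.

```python
# Sketch of chord identification by matching spectral peaks to templates.
# Templates, tuning, and the three-note assumption are all illustrative.
import numpy as np

SR = 8000

def synth_chord(freqs, dur=0.5):
    """Stand-in for a recorded guitar chord: a sum of pure tones."""
    t = np.arange(int(SR * dur)) / SR
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

def dominant_freqs(x, n=3):
    """Frequencies of the n strongest local maxima in the spectrum."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)
    return sorted(i * SR / len(x) for i in peaks[:n])

TEMPLATES = {
    "C major": [261.63, 329.63, 392.00],  # C4 E4 G4
    "A minor": [220.00, 261.63, 329.63],  # A3 C4 E4
}

def identify(x):
    """Pick the template whose note frequencies best match the peaks."""
    peaks = dominant_freqs(x)
    return min(TEMPLATES, key=lambda name: sum(
        abs(p - f) for p, f in zip(peaks, TEMPLATES[name])))
```

For real recordings the template match would be done on a chroma (pitch-class) profile rather than raw peak frequencies, so harmonics and octave errors do not dominate.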
I plan to complete my pipa synthesis model using the Karplus-Strong algorithm and physically derived excitation/body filters, then use this model as the instrument for my computer-generated compositions.
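For reference, the core Karplus-Strong loop can be sketched in a few lines; the pipa model would replace the plain noise burst and two-point average below with the physically derived excitation and body filters.

```python
# Bare-bones Karplus-Strong string: a noise burst circulating in a delay
# line whose length sets the pitch, lowpassed by a two-point average.
import numpy as np

def karplus_strong(freq, dur, sr=44100, seed=0):
    n = int(sr / freq)                       # delay-line length ~ one period
    buf = np.random.default_rng(seed).uniform(-1, 1, n)  # pluck = noise burst
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # averaging adjacent samples damps high frequencies on each trip
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = karplus_strong(440.0, 1.0)   # one second of a decaying 440 Hz pluck
```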
Jump is an interactive graphics generator and musical instrument. The user places different shapes in a 3-D box and chooses the sound each object represents. The user then shoots a light beam into the box and watches it bounce around, hitting the objects and triggering sounds. The sound produced depends on each shape's type and position in space. The program uses ray-tracing algorithms both for the graphics and for computing how the beam bounces geometrically. Jump is written in C++ with Microsoft Visual Studio.
Either develop a synthesis method based on various levels of integration of white noise (a little like brown noise, but with more varied tone qualities); do ChucK-inspired real-time composing with Snd; integrate Snd into Ardour; explore the possibilities of adding a garbage collector to Snd's real-time extension; build a robust and clear type system for that extension; make a real-time version of the UAE Amiga emulator; or make a GTK2 version of Mammut.
This project, proposed by Digidesign, investigates using adaptive filters to correctly predict samples in a stream that has gaps in it. To this end, synthesis-based approaches such as sinusoidal modeling and Linear Predictive Coding will be tried.
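As a minimal illustration of the prediction idea (a least-squares linear-prediction fit, not Digidesign's method): fit prediction coefficients on the samples before the gap, then run the predictor forward across it.

```python
# Sketch of gap concealment with linear prediction: fit coefficients on the
# samples before the gap, then extrapolate across the missing region.
import numpy as np

def lpc_fit(x, order):
    """Least-squares coefficients a with x[n] ~ sum_k a[k] * x[n-1-k]."""
    rows = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    return a

def predict_gap(history, order, gap_len):
    a = lpc_fit(history, order)
    buf = list(history)
    for _ in range(gap_len):
        buf.append(float(a @ np.array(buf[-order:][::-1])))
    return np.array(buf[len(history):])

# A pure sinusoid is exactly predictable with order 2, so the filled
# samples should match the "lost" ones almost perfectly.
sr = 1000
x = np.sin(2 * np.pi * 50 * np.arange(300) / sr)
filled = predict_gap(x[:200], order=2, gap_len=20)
err = float(np.max(np.abs(filled - x[200:220])))
```

Real audio is not this predictable; practical systems re-fit the model per gap, cross-fade at the gap edges, and often predict from both sides.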
My project is to capture an instrument's tilt as a solo player's gesture using several photosensors, rather than unstable accelerometers or tilt sensors, which depend on mechanical responses.
The idea behind this project is to create an interactive interface that can easily control audio and video functions. The initial concept uses real-time video tracking to pick up certain signals provided by an individual; these signals in turn trigger some function in the video or audio track being output. Once this is accomplished, the goal is to explore the project's additional control and function possibilities.
Dance/techno/classical/poetry album project in three formats: 8-channel surround, 5.1 surround, and stereo. The goal is mixes that translate well to all three formats, reaching the widest possible audience while still pushing the envelope in the genre.
The original Omnichord, made by the Suzuki Corporation, is an electronic toy version of the autoharp. It is played by choosing a chord type (major, minor, or diminished) and tonic on a matrix of buttons with the left hand and "plucking" the notes on a pad with the right hand. The instrument developed a cult following in recording studios and, with the use of typical guitar effects and studio trickery, found its way onto many a record. The plug-in will be developed in Max and playable using just a computer keyboard and trackpad. It will have a simple built-in sound modeled on the Omnichord's harp-like tone; in addition, it will output MIDI and OSC for use with other sound generators or samplers. As an extension, I hope to compose a piece with this instrument for ambisonic rendering.
During this project I will use an existing video-game engine (Quake III by id Software) as a platform for the development and presentation of an interactive musical composition. Using a library developed to stream information from the Quake III engine to PD, I will create an interactive compositional environment in which a user's motion through a virtual space creates a musical work.
This project/installation/composition was conceived for the Agora-Resonances festival at IRCAM. It uses ten hanging snare drums that are excited by a piezo disc and a modified speaker cone controlled by Max/MSP (surrounded by plants!). Issues of perception, approximation, interpretation, space and duration are explored.
For this project I will investigate filling in the gaps of an audio signal, focusing on implementations that work predominantly on non-periodic signals. Methods of investigation will include adaptive filtering, time stretching, and reverb.
Because beatboxing is a relatively new phenomenon, the art of recording and processing it has not been sufficiently explored. The goal of this project is to create a program that processes a beatboxing sample in real time, resulting in a full, crisp, and enjoyable sound. The plan is to create an automatic detection system that routes each type of percussive noise (kick, hi-hat, and snare) to a different channel, allowing for maximum control, including individual track EQs and reverbs.
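A minimal sketch of the routing idea, assuming a spectral-centroid classifier (the threshold values and synthetic test hits below are illustrative assumptions, not measurements from real beatbox recordings):

```python
# Sketch of routing hits by the spectral centroid of the power spectrum:
# kick = low centroid, snare = mid, hi-hat = high. Thresholds are guesses.
import numpy as np

SR = 16000

def centroid(x):
    p = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    return np.sum(freqs * p) / np.sum(p)

def route(hit):
    """Name the channel a detected percussive event should be sent to."""
    c = centroid(hit)
    if c < 400:
        return "kick"
    elif c < 2500:
        return "snare"
    return "hihat"

# Crude synthetic stand-ins for detected percussive events:
t = np.arange(1024) / SR
kick = np.sin(2 * np.pi * 80 * t) * np.exp(-t * 40)     # low thump
snare = np.sin(2 * np.pi * 1200 * t) * np.exp(-t * 40)  # mid-band crack
hihat = np.random.default_rng(0).normal(size=1024) * np.exp(-t * 40)  # noise burst
```

A real-time version would first detect onsets, then classify a short window after each onset and switch the routing before the EQ/reverb chains.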
My project will involve the development of an ambisonic "instrument" comprising a hardware interface and custom software. The hardware is a pair of Korg Kaoss Pads: touch-sensitive screens that output MIDI and track data in two dimensions, so two pads yield three spatial dimensions plus an extra axis to control some effect process. The software, which I will write primarily in CLM, should let me take an input sample and dynamically move it through 3-D space with fairly low latency.
This is an extension of my previous work, which used an adaptive system to train the first period (one period = 1/fundamental frequency) of a sound to reproduce that sound. In the previous work, the adaptive system updated its weights at every step (like a sliding FIR, or a two-point convolution), and the result was optimal with respect to the original sound in the least-squares sense. The new system will take the converged weights over each selected length (N multiple periods) and implement them as a series of FIR filters to generate the sound. For strings, hopefully, this will work.
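In the spirit of the description above (illustrative only, not the project's code), a two-weight LMS adaptive filter that updates its weights at every step until they converge to a target two-point convolution:

```python
# Illustrative two-weight LMS adaptive filter: the weights are updated at
# every step until the filter output matches a target two-point convolution.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.7, -0.3])        # unknown two-tap system to be matched
x = rng.normal(size=5000)             # input signal
d = np.convolve(x, true_w)[:len(x)]   # desired (target) output

w = np.zeros(2)                       # adaptive weights
mu = 0.05                             # step size
for n in range(1, len(x)):
    xn = np.array([x[n], x[n - 1]])   # two-point input window
    e = d[n] - w @ xn                 # error against the target
    w += mu * e * xn                  # LMS update: w converges toward true_w
```

Freezing `w` after convergence and applying it as a fixed FIR filter corresponds to the "converged weights implemented as a series of FIR filters" step described above.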
I am first planning to model the basic processes used by singers in their performances; once I have determined the basic parameters, I will construct an interface that allows the user to manipulate the parameters needed for proper singing.
This course is an opportunity for students who have completed Music 220a and Music 220b to pursue an independent research project in computer music. Students regularly present their research and project progress in a weekly seminar-style class meeting. In addition, projects in progress are documented on the web.