Please join us for a showcase of the work being done by CCRMA students.
11:00am - 12:30pm Sook Young Won Ph.D. Defense Presentation "Investigating the Spectral Characteristics of One's Own Hearing"
1:30pm - 2:30pm Tour of CCRMA and demos
2:30pm - 3:30pm Repeat tour of CCRMA and demos
4:00pm - 5:00pm Performances and Lectures in the CCRMA Stage
5:00pm Reception and music in the Knoll Backyard
CCRMA is located in the Knoll on Stanford's campus. The address is 660 Lomita Ct., Stanford, CA 94305. Parking is available in the Tresidder Lots at the bottom of the hill.
http://maps.google.com/maps?f=q&hl=en&geocode=&q=660+lomita+ct,+stanford,+ca&sll=37.0625,-95.677068&sspn=36.094886,59.326172&ie=UTF8&ll=37.421163,-122.173204&spn=0.008845,0.014484&z=16&iwloc=addr
Sook Young Won: Investigating the Spectral Characteristics of One's Own Hearing
Hearing a recording of one's own voice typically results in a sense of
surprise and disappointment. The perception is often described as 'thinner'
or 'drier' than expected. This disparity between the live and recorded
sound of one's own voice arises because the hearing system receives the
live voice over multiple paths combining bone and air conduction, whereas
the recording arrives over a single air-conducted path. In this study, I aim to
investigate the spectral characteristics of one's own hearing as compared to
an air-conducted recording. To accomplish this objective, I designed and
conducted a series of perceptual experiments with a variety of filtering
applications.
Craig Hanson and Mike Gao: The LUMI: Tools and Techniques for Electronic Music
The LUMI is a new hardware/software performance interface optimized for live and studio electronic music
production. It provides the performer with multi-dimensional control via a 10.4-inch touchscreen, 32 pressure-sensitive
buttons, an infrared sensor, 8 knobs, and a cross-fader. This discussion surveys the use of the LUMI
and its custom software alongside traditional digital audio workstations. The LUMI will also be demonstrated as a
control surface for Ableton Live in live electronic music production.
Kapil Krishnamurthy: Digital Model of the Dunlop 535Q Crybaby Wah-Wah Pedal
The Dunlop 535Q CryBaby is probably one of the most tweakable wah pedals on the market today. Compared to
the famous Dunlop GCB-95, it offers several additional controls that allow the sound to be shaped in different
ways. Its features include:
- Independent control of the wah range
- Variable Q to vary the tonal characteristics of the resonant filter; the 535Q offers a much larger range than
the traditional GCB-95.
- An additional volume boost and on-board compression for a more powerful waaahhhh.
This lecture on modeling the Dunlop 535Q pedal goes over the process of taking impulse response measurements of an analog pedal, fitting filter models to the measured responses, and finally taking the finished model from a prototyping tool like MATLAB to a real-time audio platform. In this case, the implementation was done in C++ and compiled as a VST plugin.
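The talk's actual fitting procedure isn't reproduced here, but as a rough sketch of the idea, the snippet below fits a second-order resonator to a measured impulse response by locating its spectral peak and -3 dB bandwidth (the "measurement" is synthesized so the example runs standalone):

```python
import numpy as np
from scipy.signal import lfilter

fs = 44100

# Stand-in "measurement": the impulse response of an unknown resonant
# system (in practice this would come from measuring the pedal itself).
f_true, r_true = 800.0, 0.995
a_true = [1.0, -2 * r_true * np.cos(2 * np.pi * f_true / fs), r_true ** 2]
h = lfilter([1 - r_true], a_true, np.r_[1.0, np.zeros(4095)])

# Fit: locate the resonant peak and its -3 dB bandwidth.
H = np.abs(np.fft.rfft(h, 8192))
f = np.fft.rfftfreq(8192, 1.0 / fs)
k = int(np.argmax(H))
half = H[k] / np.sqrt(2)
lo, hi = k, k
while lo > 0 and H[lo] > half:
    lo -= 1
while hi < len(H) - 1 and H[hi] > half:
    hi += 1
f0, bw = f[k], f[hi] - f[lo]

# Second-order resonator matched to the measured peak.
r = np.exp(-np.pi * bw / fs)                  # pole radius from bandwidth
a_fit = [1.0, -2 * r * np.cos(2 * np.pi * f0 / fs), r ** 2]
b_fit = [1.0 - r]                             # rough gain normalization
print(f"fitted f0 = {f0:.1f} Hz, bandwidth = {bw:.1f} Hz")
```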
Pedro Kroger: Rameau, a system for symbolic harmonic analysis
Rameau is a system for automatic harmonic analysis and computational musicology with algorithms for chord name
and roman numeral analysis. It also has commands to find consecutive octaves and fifths, list the chord types
used in a piece, find voice crossings, vocal ranges, and melodic leaps, analyze how chord sevenths are
resolved, count how many chord progressions are strong, weak, superstrong, or neutral, and analyze the final
cadences of compositions.
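Rameau's own implementation isn't shown here, but as a flavor of one such check, here is a minimal sketch (with hypothetical names) that flags consecutive perfect fifths between two voices given as MIDI pitch lists:

```python
# Toy sketch (not Rameau's code): flag consecutive perfect fifths
# between two voices sampled at successive beats as MIDI pitches.
def consecutive_fifths(upper, lower):
    """Return beat indices i where beats i and i+1 both form a
    perfect fifth (7 semitones, simple or compound) and the voices move."""
    hits = []
    for i in range(len(upper) - 1):
        now = (upper[i] - lower[i]) % 12
        nxt = (upper[i + 1] - lower[i + 1]) % 12
        moved = upper[i] != upper[i + 1] or lower[i] != lower[i + 1]
        if now == 7 and nxt == 7 and moved:
            hits.append(i)
    return hits

# Example: the outer voices move in parallel fifths from beat 0 to 1.
print(consecutive_fifths([72, 74, 76], [65, 67, 72]))  # -> [0]
```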
Fernando Lopez-Lezcano: CatMaster and Fractal Cat, a program and its piece
In this talk I will describe the genesis and evolution of a series of
improvisatory live pieces for keyboard controller and computer that
include sound generation and processing, event processing, and algorithmic
control of low- and high-level structures of the performance. The pieces
are based on live and sampled piano sounds, further processed with
granular and spectral techniques and merged with simple additive
synthesis. Spatialization is performed using Ambisonics encoding and
decoding.
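As background for the spatialization step, here is a minimal sketch of first-order (FuMa) B-format Ambisonics encoding of a mono source; the pieces themselves may well use higher orders or different conventions:

```python
import numpy as np

def encode_bformat(s, azimuth, elevation):
    """First-order (FuMa) B-format encode of a mono signal `s`
    at the given azimuth/elevation in radians."""
    w = s / np.sqrt(2.0)                         # omnidirectional
    x = s * np.cos(azimuth) * np.cos(elevation)  # front-back
    y = s * np.sin(azimuth) * np.cos(elevation)  # left-right
    z = s * np.sin(elevation)                    # up-down
    return np.stack([w, x, y, z])

# A source panned 90 degrees to the left, in the horizontal plane:
sig = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
b = encode_bformat(sig, np.pi / 2, 0.0)
```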
Gautham J. Mysore: Lead Instrument Extraction from Single Channel Recordings
An algorithm to extract a lead instrument from a single channel recording is
presented. The algorithm uses a linear algebraic methodology within a
probabilistic framework. A spectrogram factorization method using the EM algorithm will
first be discussed. This method will then be used to model both the lead
instrument and background music separately in a recording. With the use of the
separate models, the lead instrument and background music can be reconstructed
separately. Examples of the extraction of vocals and lead guitar will be
presented. Finally, post processing methods to improve the results will be
discussed.
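The abstract doesn't give the exact model; as an illustration of EM-style spectrogram factorization, the sketch below uses the multiplicative updates that minimize KL divergence, which are closely related to the EM updates for the PLCA latent-variable model:

```python
import numpy as np

def factorize(V, K, iters=200, eps=1e-9):
    """Factor a magnitude spectrogram V (freq x time) into K spectral
    basis vectors W and activations H via KL-divergence multiplicative
    updates (an EM-style procedure)."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, K)) + eps
    H = rng.random((K, T)) + eps
    for _ in range(iters):
        R = V / (W @ H + eps)                        # data / model ratio
        W *= (R @ H.T) / (H.sum(axis=1) + eps)       # update bases
        H *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps)  # update gains
    return W, H

# Toy usage: factor a random "spectrogram" into two components.
V = np.abs(np.random.default_rng(1).random((64, 100)))
W, H = factorize(V, K=2)
```

In a separation setting, some components would be assigned to the lead instrument and the rest to the background, and each part reconstructed from its own components.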
Juhan Nam: Efficient antialiasing oscillators using polynomials
One of the challenges in virtual analog synthesis is avoiding aliasing when generating
classic waveforms (sawtooth, square, and triangular) which have theoretically infinite bandwidth in
their ideal forms. This talk presents several efficient polynomial-based algorithms to reduce the aliasing:
the BLIT and BLEP methods using polynomial interpolators, and differentiated polynomial waveforms (DPW). They were
evaluated by comparing the oscillators' aliasing levels against the threshold of hearing and masking curves.
The results show that the suggested methods are perceptually free of aliasing over practically used
fundamental frequencies while remaining computationally efficient.
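As a flavor of the polynomial approach, here is a sketch of one common polynomial-BLEP variant (polyBLEP) applied to a sawtooth; this is not necessarily the talk's exact formulation:

```python
import numpy as np

def polyblep(t, dt):
    """Two-sample polynomial band-limited step residual.
    t: phase in [0, 1); dt: phase increment per sample (f0/fs)."""
    if t < dt:                    # just after the discontinuity
        x = t / dt
        return x + x - x * x - 1.0
    if t > 1.0 - dt:              # just before the discontinuity
        x = (t - 1.0) / dt
        return x * x + x + x + 1.0
    return 0.0

def saw(f0, fs, n):
    """Naive sawtooth with a polyBLEP correction at each phase wrap."""
    dt = f0 / fs
    phase, out = 0.0, np.empty(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - polyblep(phase, dt)
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

y = saw(440.0, 44100.0, 44100)    # one second of corrected sawtooth
```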
Steinunn Arnardottir: The Echoplex
The Echoplex is a tape delay unit featuring fixed playback and erase heads, a movable record head, and a tape loop moving at roughly 8 ips. The relatively slow tape speed
allows large frequency shifts, including "sonic booms" and shifting of the tape bias signal into the audio
band. Here, the tape delay is modeled with read, write, and erase pointers moving along a circular
buffer. The model separately generates the quasiperiodic capstan and pinch-wheel components and the drift of the
observed fluctuating time delay. This delay drives an interpolated write simulating the record head. To
prevent aliasing in the presence of a changing record head position, an anti-aliasing filter with a variable
cutoff frequency is used.
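A minimal sketch of the interpolated-write idea, assuming a fractional record-head position (linear interpolation here; the actual model also applies the variable-cutoff anti-aliasing filter described above):

```python
import numpy as np

N = 1 << 16                        # circular "tape" buffer
tape = np.zeros(N)

def write_interp(pos, x):
    """Deposit sample x at fractional buffer position pos by splitting
    it between the two neighboring slots (linear interpolation)."""
    i = int(np.floor(pos)) % N
    frac = pos - np.floor(pos)
    tape[i] += (1.0 - frac) * x
    tape[(i + 1) % N] += frac * x

# Record head drifting quasiperiodically ahead of a fixed playback head.
fs, wobble_hz, depth = 44100, 1.7, 3.0
delay = 8000                       # nominal record-to-playback spacing
x = np.random.default_rng(0).normal(size=fs)   # stand-in input signal
y = np.empty(fs)
for n in range(fs):
    drift = depth * np.sin(2 * np.pi * wobble_hz * n / fs)
    write_interp(n + delay + drift, x[n])      # moving record head
    y[n] = tape[n % N]                         # fixed playback head
    tape[n % N] = 0.0                          # erase head
```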
Ed Berdahl: Controlling Electromechanical Bowing Of Guitar Strings
An electric guitar string can be bowed electromechanically by sensing the string velocity,
distorting the velocity measurement, and feeding the distorted signal back into the string
as a force. We demonstrate a feedforward interface for controlling such a device in a performance
context. The instrument is an augmented Fender Stratocaster electric guitar.
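This is not the hardware's actual control law, but a toy numerical sketch of the loop described above: sense a velocity, distort it (hard clipping as a stand-in nonlinearity), and feed it back as a force into a single damped string mode:

```python
import numpy as np

def distorted_force(v, gain=50.0, limit=1.0):
    """Distort the sensed velocity into a bounded feedback force."""
    return float(np.clip(gain * v, -limit, limit))

# One damped resonator standing in for a string mode at 110 Hz.
fs, f0, r = 44100, 110.0, 0.999
a1, a2 = 2 * r * np.cos(2 * np.pi * f0 / fs), -r * r
y1, y2 = 1e-3, 0.0                 # small initial displacement
out = np.empty(fs)
for n in range(fs):
    v = (y1 - y2) * fs             # crude velocity estimate
    y0 = a1 * y1 + a2 * y2 + 1e-5 * distorted_force(v)
    out[n] = y0
    y2, y1 = y1, y0
```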
Ed Berdahl: Making music with the Falcon haptic device
We demonstrate how to synthesize sound using a 3DOF haptic device, which provides force
feedback. The force feedback is controlled directly in the Pure Data (Pd) graphical programming
environment using its standard GUI and message-processing objects. The system allows the musician
to use his or her sense of touch to explore and manipulate a virtual environment. Sound
synthesis is directly coupled to the way in which the musician manipulates the virtual environment.
Jonathan Berger, Yao Yang and Diana Siwiak: Catch Your Breath
Catch Your Breath is an interactive audiovisual bio-feedback system adapted from a project designed to reduce
respiratory irregularity in patients undergoing 4D CT scans for oncological diagnosis. The system is currently
implemented and assessed as a potential means to reduce motion-induced distortion in CT images.
The motion of the subject's breathing is tracked via webcam using fiducial markers and interpreted as a real-time variable tempo adjustment. During the training session, the user can practice interacting with the system and also set his/her average breathing rate. During the mastery session, the subject can then adjust his/her breathing to synchronize with a separate accompaniment line. When the breathing is regular and at the desired tempo, the audible result sounds synchronous and harmonious. The accompaniment's tempo then gradually decreases, which causes the breathing to synchronize and slow down, thus increasing relaxation.
Michael Berger: GRIP MAESTRO mach 2
The successor to the GRIP MAESTRO mach 1 (2008), the mach 2 improves upon the original design by addressing
several important concerns. Both instruments operate in essentially the same way: they are hand-exercisers
that have been outfitted with sensors in order to detect the performer's interactions with the device. But the
mach 2 requires less hand strength to be played, and utilizes a great deal more performative information than
its predecessor through the addition of an accelerometer to detect hand/arm motion. Additionally the mach 2 can
be played as a two-handed instrument (one device in each hand), allowing for larger and more complex
performative actions.
Michael Berger, Diana Siwiak, C. Keiko Funahashi: Vernal Canopy
An interactive art installation.
Visda Goudarzi: Gestonic
Gestonic is a video-based interface for the sonification of hand gestures for real-time timbre
control. The central role of hand gestures in social and musical interaction (such as conducting) was
the original motivation for this project. Gestonic is being used to make computer-based instruments
more interactive. It also allows musicians to create sonorous and visual compositions in real
time. Gestonic explores models for the sonification of musical expression. Rather than the direct
gesture-to-sound mapping common in acoustic instruments, it employs an indirect mapping strategy
that makes use of the color and timbre of the sound. The system consists of a laptop camera,
filtering of the camera input via the open-source software Processing, gesture recognition with a
neural network, the sending of OSC control messages to the audio-processing language ChucK, and
finally parameter mapping and sound synthesis in ChucK.
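As an illustration of the control link only (the address pattern and port below are hypothetical, and the actual system sends OSC from Processing rather than Python), a minimal sketch using the third-party python-osc package:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6449)    # ChucK's OSC listener

def send_gesture(class_id, confidence):
    """Forward one recognized gesture to the synthesis engine."""
    client.send_message("/gestonic/gesture",
                        [int(class_id), float(confidence)])

send_gesture(2, 0.93)
```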
Andrew Greenwood: Ron Jeremy's Home Made Acoustical Characterization System
Theodore Roosevelt was well served by his adage "speak softly
and carry a big stick." Today, this ideology has been revisited with the
advent of Ron Jeremy's Home Made Acoustical Characterization System
or "RJHMACS." Developed using Matlab's GUIDE, this simple interface was
designed to save acousticians time on acoustical profiling. Various forms of
Ron Jeremy sanctioned* acoustical characterization are available, including
swept sine tones, Golay codes, and calibrated allpass smears.
*test signals may not actually be sanctioned by Ron Jeremy
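Of those test signals, the swept sine is the easiest to sketch. Below is a minimal exponential-sweep generator and its inverse filter (Farina's method), assuming the recorded room response is deconvolved by plain convolution:

```python
import numpy as np

def ess(f1, f2, dur, fs):
    """Exponential (log) sine sweep and its inverse filter, for
    impulse-response measurement."""
    t = np.arange(int(dur * fs)) / fs
    R = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * dur / R * (np.exp(t * R / dur) - 1))
    # Inverse filter: time-reversed sweep with a +6 dB/oct envelope.
    inv = sweep[::-1] * np.exp(-t * R / dur)
    return sweep, inv

fs = 48000
sweep, inv = ess(20.0, 20000.0, 5.0, fs)
# Play `sweep` through the room, record the response `rec`, then:
# ir = np.convolve(rec, inv)   # IR, up to scaling and a sweep-length delay
```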
Jorge Herrera: MOLS: Multiperformer OnLine Synthesizer
The MOLS is a novel musical instrument that explores the use of a web browser as a tool for collaborative online
music performance. All the sounds are synthesized by an ActionScript application, and the instrument can be
controlled simultaneously in real time by two performers, each
using a separate browser. Music can therefore be created and performed remotely without installing
any application besides a web browser with a Flash plug-in (which is already installed on the vast majority
of personal computers).
Miriam Kolar: Chavín de Huántar Archaeological Acoustics Project
The Chavín de Huántar Archaeological Acoustics Project seeks to explore the acoustics and instruments
of Chavín de Huántar, a 3,000-year-old pre-Inca ritual center in the north-central sierra of Peru.
The site complex includes an extensive underground network of labyrinthine corridors, shafts, and drains
built of stone block, intact and primarily without post-period modification since the end of monumental
construction around 600 B.C. The project has several aims: to measure, analyze, archive, and model the
acoustics of Chavín, culminating in simulations for public interface and archaeological research tools.
For more information, visit: http://ccrma.stanford.edu/groups/chavin/.
Gautham J. Mysore: Relative Pitch Estimation of Multiple Instruments
An algorithm to concurrently estimate the pitch of multiple instruments in polyphonic music is presented.
The music is probabilistically modeled as a mixture of constant-Q transforms. A multilayered positive
deconvolution is performed on the mixture constant-Q transform to concurrently estimate the relative pitch
track and timbral signature of each instrument. Kalman-filter-type smoothing is used to enforce
temporal continuity of the pitch tracks.
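The abstract does not specify the smoother's exact form; as a minimal illustration of Kalman-type temporal smoothing, here is a scalar constant-position filter applied to a raw pitch track:

```python
import numpy as np

def smooth_track(obs, q=1e-4, r=1e-2):
    """Scalar constant-position Kalman filter over a raw pitch track
    (e.g., in fractional constant-Q bins); q and r trade smoothness
    against responsiveness."""
    x, p = obs[0], 1.0
    out = np.empty_like(obs)
    for i, z in enumerate(obs):
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new observation
        p *= 1 - k
        out[i] = x
    return out

rng = np.random.default_rng(0)
truth = 60 + np.cumsum(rng.normal(0, 0.05, 200))   # slowly moving pitch
raw = truth + rng.normal(0, 0.5, 200)              # noisy estimates
smoothed = smooth_track(raw)
```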
Jason Sadural: OpenMixer
"OpenMixer" is a collaborative mixer-less multi-channel audio system for playback, mixing,
and spatialization that exists in the CCRMA listening room. "GIA" (short for Global Internet
Audio) is an intuitive modular mixing environment for sharing sonic scenes, acoustic properties,
and social events across the global internet network in real-time. With GIA, a user will have audio
input/output capabilities with multiple environments and be able to "mix" audio in complex configurations.
GIA is a feature of OpenMixer.
Javier Sanchez: Visual and Music Interaction
Intermedia demos in which the performer can relate sound to spatial representations. A piano and a pen display are
used as MIDI inputs. Counterlines is a duet for Disklavier and Wacom Cintiq, in which the pianist generates graphic lines by
playing music and the graphic performer generates musical lines by drawing graphic ones. Both relate to each
other contrapuntally.
Dan Schlessinger: The Kalichord: An Electro-Acoustic Physically Inspired Folk Instrument
The Kalichord is a two-handed electro-acoustic instrument which acts as a controller for a physical string
model. The user plucks virtual strings in the form of small tines with one hand, while playing bass lines
on buttons with the other hand. The string and body of the virtual instrument are physically modeled while
the pluck signal is measured acoustically from the plucking of the tines and fed into the model as an excitation
signal, rendering a realistic and expressive plucking experience. Further expressive control is offered by
rotating the hands in relation to one another.
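This is not the Kalichord's actual string model, but a minimal sketch of the core idea: driving a plucked-string loop with an externally measured excitation (noise stands in here for the sensed tine signal):

```python
import numpy as np

def excited_string(excitation, f0, fs, dur, damping=0.996):
    """Karplus-Strong style string loop (delay plus averaging loss
    filter) driven by an arbitrary excitation signal instead of an
    internal noise burst."""
    N = int(round(fs / f0))        # loop length sets the pitch
    y = np.zeros(int(dur * fs))
    for n in range(len(y)):
        x = excitation[n] if n < len(excitation) else 0.0
        if n > N:
            fb = 0.5 * (y[n - N] + y[n - N - 1])   # loss filter
        elif n == N:
            fb = y[0]
        else:
            fb = 0.0
        y[n] = x + damping * fb
    return y

fs = 44100
pluck = np.random.default_rng(0).uniform(-1, 1, 400)  # stand-in tine signal
note = excited_string(pluck, 220.0, fs, 1.0)
```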
Hwan Shim: Stereo Music Source Separation for 3D Upmixing
A new 3D upmixing method based on stereo source separation is described. Our approach comprises primary-ambient
decomposition using principal component analysis (PCA), newly introduced source separation methods, and
reproduction that positions all of the separated components. Since the stereo music separation problem is only
loosely constrained when all sources are upmixed, fewer artifacts are more desirable than better separation.
We introduce an energy vector criterion for keeping the energy consistent between the original mix and the
upmix. Following this criterion, the proposed algorithms use two detected sources: a source obtained by
maximum likelihood estimation and a supporting source. The upmixed sound from these algorithms shows clear
improvement.
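As a sketch of the PCA-based primary-ambient decomposition step (the separation and energy-vector stages are not reproduced here):

```python
import numpy as np

def primary_ambient(left, right):
    """PCA-based primary-ambient decomposition of a stereo pair: the
    first principal component of the (L, R) samples gives the primary
    (correlated) part; the residual is taken as ambience."""
    X = np.stack([left, right])              # 2 x N
    C = X @ X.T / X.shape[1]                 # 2x2 covariance
    w, V = np.linalg.eigh(C)                 # eigenvalues ascending
    v = V[:, -1]                             # dominant direction
    primary = np.outer(v, v @ X)             # projection onto v
    ambient = X - primary
    return primary, ambient

# Toy stereo mix: one panned source plus uncorrelated noise.
rng = np.random.default_rng(0)
s = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
L = 0.9 * s + 0.1 * rng.normal(size=1000)
R = 0.5 * s + 0.1 * rng.normal(size=1000)
P, A = primary_ambient(L, R)
```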
Kyle Spratt, Jonathan Abel: All Natural Room Enhancement
A recording technique due to Walter Murch for extending the reverberation time of a room is analyzed,
and a realtime implementation is presented. The technique involves speeding up a prerecorded dry sound and
playing it into a room. The room response is recorded and subsequently slowed down such that the original signal
appears at its normal speed, while the reverberation of the room is 'stretched,' causing the room to sound
larger than it is. A signal analysis is presented showing that this process is equivalent to slowing down the
impulse response of the room. Measurements on a simple physical system confirm this effect, and show that the
process can be interpreted as either scaling the room dimensions, or slowing the sound speed. Finally, we
describe a block processing approach which implements this technique in real time with a fixed processing
latency.
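An offline sketch of the equivalence (not the realtime block-processing implementation), using plain resampling to stand in for tape speed changes:

```python
import numpy as np
from scipy.signal import resample

def murch_stretch(dry, room_ir, k):
    """Speed the dry signal up by factor k, 'play it into the room'
    (convolve with the room's impulse response), then slow the recording
    back down: the dry signal returns at its original speed while the
    reverberation is stretched by k."""
    fast = resample(dry, int(len(dry) / k))   # speed up (pitch rises)
    rec = np.convolve(fast, room_ir)          # room playback and pickup
    return resample(rec, int(len(rec) * k))   # slow down (pitch restored)

rng = np.random.default_rng(0)
dry = rng.normal(size=2000)
ir = np.exp(-np.arange(500) / 150.0) * rng.normal(size=500)
wet = murch_stretch(dry, ir, k=2.0)
```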
Michael Zeligs: Algorithms for Instant Creativity
A preview of the interactive sound art installation "The Self Educating Space," to be installed in the
Experimental Room at Synergy House, May 29th-June 4th. We examine the relationships among sound, live looping,
and interaction design.