
Controllers and Musical Instruments



Electromagnetically Prepared Piano

Steven Backer, Edgar Berdahl, Julius O. Smith III

The Electromagnetically Prepared Piano can be regarded as a middle ground between the traditional preparation of the acoustic piano and a fully electronic synthesizer. By positioning a rack of transducers above the piano's strings, but never in physical contact with them, we can set the piano's many strings into vibration electromagnetically. The transducers, each a combination of an electromagnet and a permanent magnet, connect to the soundcard output of a personal computer, so the audio signals that drive them can be specified through any software interface. The net effect, captivating as both a sound and an idea, is the ability to play the piano without felt hammers, plectra, fingers, or any other traditional means of physical excitation.
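One simple way to excite a given string is to synthesize a drive signal at or near one of its partial frequencies and route it to that string's transducer channel. The sketch below, which writes one second of a sine tone as raw 16-bit samples, illustrates the idea; the 440 Hz fundamental, sample rate, amplitude, and output file name are illustrative assumptions rather than details of the actual system.

    // Minimal sketch: compute one second of a sinusoidal drive signal for a
    // single electromagnet channel and save it as raw 16-bit PCM. Any audio
    // tool can then route the file to the soundcard output that feeds the
    // amplifier/electromagnet chain.
    #include <cmath>
    #include <cstdint>
    #include <fstream>
    #include <vector>

    int main() {
        const double kPi = 3.14159265358979323846;
        const double fs = 44100.0;    // soundcard sample rate (assumed)
        const double f0 = 440.0;      // target string fundamental (illustrative)
        const double amp = 0.5;       // leave headroom for the power amplifier
        const std::size_t numSamples = static_cast<std::size_t>(fs);  // one second

        std::vector<std::int16_t> buffer(numSamples);
        for (std::size_t n = 0; n < numSamples; ++n) {
            double s = amp * std::sin(2.0 * kPi * f0 * static_cast<double>(n) / fs);
            buffer[n] = static_cast<std::int16_t>(s * 32767.0);
        }

        std::ofstream out("drive_signal.raw", std::ios::binary);
        out.write(reinterpret_cast<const char*>(buffer.data()),
                  static_cast<std::streamsize>(buffer.size() * sizeof(std::int16_t)));
        return 0;
    }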

To aid us in the design process, and to help make the prepared piano's sound more accessible, we have also developed a physical model of the system using the Synthesis ToolKit in C++ (STK). It consists of a digital waveguide that simulates the vibration of the strings when excited by the transducers. Finally, to ensure that our model is accurate, we have made swept-sine measurements of the physical system and used them in the model calibration process.
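As a rough illustration of the modeling approach, the sketch below implements a bare-bones digital waveguide string in C++: a single delay line with a one-pole loss filter stands in for the string's traveling waves, and an external excitation signal (here, the electromagnet drive) is injected into the loop. This is a minimal sketch, not the STK model itself; the delay length, loss coefficient, and filter choice are illustrative assumptions.

    // Minimal digital waveguide string driven by an external excitation.
    #include <cstddef>
    #include <vector>

    class WaveguideString {
    public:
        WaveguideString(double fs, double f0, double loss = 0.996)
            : delay_(static_cast<std::size_t>(fs / f0), 0.0),
              loss_(loss), writeIndex_(0), lpState_(0.0) {}

        // Advance one sample: inject the excitation into the loop and
        // return the string output.
        double tick(double excitation) {
            double out = delay_[writeIndex_];  // oldest sample in the loop
            // One-pole low-pass models frequency-dependent termination losses.
            lpState_ = 0.5 * (out + lpState_);
            delay_[writeIndex_] = loss_ * lpState_ + excitation;
            writeIndex_ = (writeIndex_ + 1) % delay_.size();
            return out;
        }

    private:
        std::vector<double> delay_;   // loop delay sets the pitch
        double loss_;                 // per-pass loop attenuation
        std::size_t writeIndex_;
        double lpState_;
    };

    int main() {
        WaveguideString stringModel(44100.0, 440.0);
        // Drive with a short burst, then let the string ring freely.
        for (int n = 0; n < 44100; ++n) {
            double drive = (n < 200) ? 0.5 : 0.0;  // illustrative excitation
            double y = stringModel.tick(drive);
            (void)y;  // in a real program, write y to an output buffer
        }
        return 0;
    }

In this framing, calibration against the swept-sine measurements amounts to adjusting the loop delay and loss filter until the model's resonance frequencies and decay rates match those measured on the piano.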

For more information, please see http://ccrma.stanford.edu/~sbacker/empp/.

Guidophone: A Handheld Virtual Music Instrument Combining Vocal Tract Geometry and Hand Gestures

Rodrigo Segnini, Ryan Cassidy, and Yi-Wen Liu

Vocal sounds provide an intuitive and appealing basis for virtual instruments: most people can engage in a musical activity simply by imitating a sound they have heard. Physical models in speech synthesis attempt, among other objectives, to approximate the vocal tract geometry required to produce specific sounds. Cook developed one such model [Perry Cook, Identification of Control Parameters in an Articulatory Vocal Tract Model with Applications to the Synthesis of Singing, CCRMA Ph.D. dissertation, Electrical Engineering Dept., Stanford University, 1990], in which the vocal tract is divided into tube sections that govern the transmission and reflection of acoustic energy at the junctions between sections; the section radii are the model parameters. This work explores the feasibility of using hand gestures to control those parameters. To that end, we address the problem of mapping hand gestures to the parameters of Cook's vocal synthesis model, with attention paid to transitions between successive phonemes. The physical basis of the model is reviewed, and the relationship between the geometries implied by Cook's tract model and those determined experimentally is investigated. A method is presented for controlling the model through a number of mapped parameters smaller than its inherent degrees of freedom. Finally, we describe a virtual instrument controlled by a mechanical device developed for this work.
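To make the parameterization concrete, the sketch below shows how tube-section radii translate into junction reflection coefficients in a Kelly-Lochbaum style scattering model; the exact conventions in Cook's model may differ, and the radii used here are arbitrary.

    // Map tube-section radii to junction reflection coefficients.
    #include <cstdio>
    #include <vector>

    // Reflection coefficient at the junction between two cylindrical
    // sections: k = (A1 - A2) / (A1 + A2), with area A = pi * r^2.
    std::vector<double> reflectionCoefficients(const std::vector<double>& radii) {
        std::vector<double> k;
        for (std::size_t i = 0; i + 1 < radii.size(); ++i) {
            double a1 = radii[i] * radii[i];        // pi cancels in the ratio
            double a2 = radii[i + 1] * radii[i + 1];
            k.push_back((a1 - a2) / (a1 + a2));
        }
        return k;
    }

    int main() {
        // A hand-gesture controller would update these radii continuously;
        // the values below are an arbitrary static tract shape in cm.
        std::vector<double> radii = {1.0, 1.4, 1.1, 0.8, 1.2, 0.9};
        for (double ki : reflectionCoefficients(radii))
            std::printf("%+.4f\n", ki);
        return 0;
    }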

For a video demonstration, please see
http://ccrma.stanford.edu/~rsegnini/conferences/asa05/guidophone.avi.

(The authors thank Woon-Seung Yeo for his help in making the video.)

Designing Controllers: The Evolution of Our Computer-Human Interaction Technology Course

Bill Verplank

Over the last several years, with the support of the CS Department, we have developed a series of lectures, labs, and project assignments that introduce enough technology for students from a mix of disciplines to design and build innovative interface devices. We have come to focus less on theory and more on practical skills leading to a four-week project: designing and building a working controller.
