
Controllers for Computers and Musical Instruments (Past)




The Touchback Keyboard (February 1999)

Brent Gillespie

For well over a decade, Max Mathews, John Chowning, George Barth, and others at CCRMA have envisioned a synthesizer keyboard with programmable touch-response--a keyboard with the feel of a grand piano or, at the push of a button, a harpsichord, a piano-forte, or perhaps an altogether new keyboard instrument. Such a keyboard would mitigate the deficiencies in touch-response of present-day synthesizer keyboards. Further, a keyboard with programmable feel could be used to investigate the role of touch-response in the relationship between instrument and musician: in the experience of musicians, an instrument's touch-response has a great deal to do with its potential as a musically expressive device. The Touchback Keyboard project has therefore been launched at CCRMA, primarily to provide a means to explore the role of the feel of an instrument in musical expression.

A motorized keyboard of seven keys has been designed and built. It features a central bearing-mounted shaft with staggered capstans and off-the-shelf, low-inertia brushed DC motors. In its unpowered state, each key has the approximate inertia of a standard wooden key. When powered, each key may be made to take on the mechanical impedance characteristic of a key interacting with a virtual whippen, jack, hammer, and damper. The appropriate mechanical impedance is created through real-time simulation of a dynamical model of the piano action. Particularly salient to the feel at the key are changes in kinematic constraint, or changing contact conditions, among the wooden and felt parts of the piano action. A full software environment for real-time simulation of systems with changing kinematic constraints has been developed.
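
To make that concrete, here is a minimal sketch in C of one haptic servo cycle for a single key: sample the key position, step a virtual model forward, and command a torque to the key's motor. The sensor and motor calls are stubs and a plain spring-damper stands in for the full piano-action model; this illustrates the loop structure only and is not the Touchback Keyboard's actual code.

    /* Sketch (not the actual CCRMA code) of a haptic servo loop for one key:
       sample the key position, step a virtual model, command a torque.
       The I/O calls are stubs so the sketch runs stand-alone. */

    #include <stdio.h>

    #define DT 0.001                        /* assumed 1 kHz servo period (s) */

    static double key_pos = 0.0;            /* stub "hardware" state          */

    static double read_key_position(void) { return key_pos; }     /* stub sensor */
    static void   write_motor_torque(double t) { key_pos += t * DT * DT; } /* crude fake key response */

    int main(void)
    {
        double x_prev = read_key_position();
        for (int tick = 0; tick < 1000; tick++) {     /* one simulated second  */
            double x = read_key_position();
            double v = (x - x_prev) / DT;             /* key velocity estimate */
            x_prev = x;

            /* Virtual impedance felt at the key: a plain spring-damper here.
               The real action model switches terms as the virtual whippen,
               jack, hammer, and damper make and break contact. */
            double k = 200.0, b = 0.5;
            double torque = -k * x - b * v;

            write_motor_torque(torque);
        }
        printf("final key position: %f\n", key_pos);
        return 0;
    }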

The creation of virtual objects that may be touched and manipulated through a motorized device is the central activity of a brand-new field called Haptic Display. The word haptic refers to the tactile and kinesthetic senses, and display highlights the fact that these (essentially robotic) devices are computer interface devices just like a monitor or a loudspeaker. Haptic Display draws heavily on the fields of robotics, controls engineering, and psychophysics. It turns out that the handling of changing constraints in dynamical models in a sampled-data setting is of prime interest to the haptic display community.

The Touchback Keyboard now resides at Northwestern University, where it is being used to further investigate the role of touch-response in musical expression. The Touchback Keyboard was used as an open research case-problem in a freshman course in engineering design and communications at Northwestern. Cuong Pham, Philip Tang, and David Zaretsky conducted human subject experiments and contributed a report: "Feel the Music". One experiment involved the role of feel in memorizing instrument identities and another studied skill transfer among instruments with different touch-responses. Two constructs of control theory are being used to inspire and organize further experiments: controllability (as a measure of expressive potential) and observability (as a measure of ease of learning).

See http://www-personal.engin.umich.edu/~brentg for more information on current work with the Touchback Keyboard.

Real-time Controllers for Physical Models (February 1999)

Chris Chafe

The computational reductions brought about by new work in algorithms such as the waveguide filter formulations, together with host-based software synthesis, improvements in DSP chips, and other signal-processing hardware, have made real-time synthesis of music by physical modeling possible. Such instruments require new modes and levels of control. Work on increasing the bandwidth of synthesizer control by exploiting all available degrees of freedom has yielded a number of experimental hybrid controllers (Cook, Chafe). Controllers based on the paradigms of wind and stringed instruments have improved the control of models based on these families, and research is being conducted to create a more general controller that is not constrained to a particular family of instruments.

The mapping of physical gestures to a DSP synthesis controller is being studied experimentally. Early studies in simulation (Chafe, 1985) suggested that simple linear mappings are inadequate. The current development system allows trial-by-feel investigation of alternative scalings.
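
As an illustration of one such alternative scaling, the sketch below (in C) maps a raw 0-127 gesture value onto a synthesis parameter through a power-law curve whose exponent can be adjusted by feel. The parameter names and ranges are invented for the example and are not the mappings used in this work.

    /* Hedged sketch: mapping a 0-127 gesture value to a synthesis parameter
       through a power-law curve rather than a straight line.  All names and
       ranges are illustrative only. */

    #include <math.h>
    #include <stdio.h>

    static double map_gesture(int raw, double lo, double hi, double curve)
    {
        double u = raw / 127.0;            /* normalize the controller value  */
        double shaped = pow(u, curve);     /* curve > 1 compresses low values */
        return lo + shaped * (hi - lo);    /* rescale to the parameter range  */
    }

    int main(void)
    {
        /* e.g. a breath-pressure-like parameter between 0.0 and 1.0 */
        for (int raw = 0; raw <= 127; raw += 32)
            printf("raw %3d -> linear %.3f  curved %.3f\n", raw,
                   map_gesture(raw, 0.0, 1.0, 1.0),
                   map_gesture(raw, 0.0, 1.0, 2.5));
        return 0;
    }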

Tactile feedback (Chafe) is also being investigated, since it is an important channel of control for the traditional instrument player. Initial trials have begun using actuators that feed audio to the touch point, and players have generally preferred the technique in these trials. The next stage will be to quantify what enhancement, if any, results from feeling the instrument's vibrations. Such considerations as tactile frequency bandwidth and the vibrations characteristic of contact points will also be studied.

New pieces are being written using real-time controllers and the DSP-based physical models. El Zorro, Push Pull, and Whirlwind are recent compositions by Chris Chafe which employ a Lightning Controller (by Buchla and Associates). The soloist steers note-generation algorithms in terms of tempo, tessitura, and ``riff-type.'' Gesture and position are tracked with the Lightning's infrared controllers. Some direct control is exercised over DSP effects via MIDI. A composition project in the works uses the Celletto (an electronic cello) to interact with the DSP synthesis at the control level. The cellist will evoke synthesis related to natural cello behavior directly from the instrument. For example, bow speed might translate into breath-pressure control of a wind synthesis.

Ongoing Work in Brass Instrument Synthesizer Controllers (May 1996)

Perry Cook and Dexter Morrill

Brass instrument players have been at a disadvantage when using their instruments as computer music controllers, because they have been limited to commercial pitch extractors which do not measure and use the unique spectral and control features of the brass instrument family. In this project, brass instruments were fitted with several sensors and were used in conjunction with specially designed pitch detection algorithms.

Systems were constructed using various valved brass instruments. Pressure sensors located in the mouthpiece, on the bell, and in mutes are used for pitch detection and pickup of the direct horn sound. Switches and linear potentiometers were mounted near the valves for finger control, and traditional foot pedals are also available for control. Optical sensors were mounted on the valves, providing information about valve position.

The valve and acoustic information are combined to form pitch estimates that are both faster and more accurate than those yielded by commercial pitch extractors. The other switches and controls provide control over MIDI synthesizer parameters such as sustain, patch changes, and controller changes, as well as over signal-processor parameters.
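
One plausible way such a combination can work is sketched below in C, under stated assumptions rather than as the project's actual algorithm: the valve combination fixes the instrument's effective tube length and hence a harmonic series, so a rough acoustic estimate only has to be snapped to the nearest member of that series. The fundamental frequencies in the table are invented placeholders.

    /* Sketch under assumptions (not the actual system): the valve combination
       selects a harmonic series; the acoustic estimate then only has to pick
       the nearest member of that series. */

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical fundamentals (Hz) for the 8 valve combinations of a
       3-valve instrument; real values depend on the instrument. */
    static const double fundamental[8] = {
        116.5, 110.0, 103.8, 98.0, 92.5, 87.3, 82.4, 77.8
    };

    static double refine_pitch(int valves, double acoustic_estimate)
    {
        double f0 = fundamental[valves & 7];
        double best = f0;
        for (int partial = 1; partial <= 12; partial++) {
            double candidate = f0 * partial;
            if (fabs(candidate - acoustic_estimate) < fabs(best - acoustic_estimate))
                best = candidate;
        }
        return best;             /* snapped to the valve combination's series */
    }

    int main(void)
    {
        /* rough acoustic guess of 240 Hz with a hypothetical valve code 3 */
        printf("refined pitch: %.1f Hz\n", refine_pitch(3, 240.0));
        return 0;
    }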

The Computer-Extended Ensemble (May 1996)

David Jaffe

Until recently, there have been two basic models of how electronics interact with a performer in a performance situation. One model adds a tape of synthesized sound to an instrumental ensemble. We call this the ``tape music'' model. The other model, ``keyboard electronic music,'' consists of pianists performing on keyboard synthesizers. In the case of the tape music model, the performer is forced to slave to the electronics; with keyboard music, it is the electronics that slave to the performer. We are beginning to realize that these two models are actually end points of a continuum, with the region between them largely unexplored.

The central question for composers is not whether human behavior can be duplicated, but what new musical effect can be achieved with computer interaction that cannot be achieved by previously existing means. A likely place to begin exploring this question is an area of music in which interaction between performers is central: improvisation.

Introducing a computer as an extension of the improvising performer increases the scope of spontaneous musical decision-making that gives improvisational music its distinctive quality. A computer can magnify, transform, invert, contradict, elaborate, comment, imitate or distort the performer's gestures. It gives the performer added power to control sound at any scale, from the finest level of audio detail to the largest level of formal organization.

But the full power of the computer in an improvisational context does not show itself until we add a second performer to the ensemble. Now each performer can affect the playing of the other. One performer can act as a conductor while the other acts as soloist. Both performers can be performing the same electronic instrument voice at the same time. And these roles can switch at a note-by-note rate. Thus, the walls that normally separate performers in a conventional instrumental ensemble become, instead, permeable membranes. Figuratively speaking, the clarinetist can finger the violin and blow the clarinet while the violinist bows the violin and fingers the clarinet. We have coined the term ``computer-extended ensemble'' for this situation.

The challenge becomes finding roles for the performers that allow them just the right kind of control. They need to feel that they are affecting the music in a significant and clear fashion. Otherwise, they will feel superfluous and irrelevant, as if the music has gotten out of control. The computer program may be simple or complex, as long as it fires the imagination of the performers.

We have been experimenting in this realm with percussionist/composer Andrew Schloss in a duo called Wildlife. The duo features Schloss and the author performing on two modern instruments, the Mathews/Boie Radio Drum and the Zeta electronic/MIDI violin, with this ensemble extended by two computers, a NeXT and a Macintosh running the NeXT Music Kit and Max. The music is a structured improvisation in which all material is generated in response to the performers' actions; there are no pre-recorded sequences or tapes.

Current work includes The Seven Wonders of the World, which, unlike Wildlife, casts the computer and Radio Drum in the context of a conventional ensemble (or, at least, a conventionally notated ensemble). This piece is scored for Radio Drum-controlled Disklavier, harpsichord, harp, two percussionists, mandolin, guitar, harmonium and bass. It was composed at the Banff Centre for the Arts, where I was a Visiting Artist in 1992-93. The Radio Drum part was worked out in collaboration with Andrew Schloss, supported by a Collaborative Composer Fellowship from the N.E.A.

Other projects include the following:

  1. Terra Non Firma, a work for conducted electronic orchestra and four cellos, using the Mathews Conductor program. This work was commissioned by the University of Victoria in honor of Max Mathews.

  2. American Miniatures, a recently completed work for tape alone, uses Common Music, the Music Kit and the phase vocoder to process recorded sounds of strings, voices and drums.

A New Structure for the Radio-Baton Program (January 1998)

Max V. Mathews

A new structure has been developed for the Radio-Baton program. The new structure unifies the Conductor Program mode of operation and the Improv mode (formerly called the Jaffe-Schloss mode). With the new structure, the Radio-Baton acts as a pure controller, sending triggers and baton position information to whatever computer is speaking to it. The Conductor Program or any Improv program resides entirely in the computer, so neither the Radio-Baton nor the MIDI connections need be changed when going from one program to another. Also, any computer that speaks MIDI--PCs, Macs, or Unix platforms--can be used. The new structure has the following advantages:

  1. All communication in both directions between the baton and the computer is via standard MIDI commands (control changes or any three-byte MIDI commands can be used). No system-exclusive messages are needed.

  2. Each MIDI command is logically complete in itself. Two or more successive commands never need to be combined to complete a message.

  3. The baton acts only as a simple controller. All programming for either the Improv mode or the Conductor Program mode is done in the computer, so programs can be easily revised without burning new EPROMs.

  4. The baton program has only one "state". No command from the computer can be misinterpreted by the baton because it is in the wrong state.

  5. MIDI data going between the computer and the baton travels over completely separate cables from the information going to the synthesizer, so there is no possibility of the synthesizer misinterpreting commands meant for the baton or vice versa. Also, the synthesizer can use all MIDI channels; no channels need be reserved for the baton.

Position and Trigger Information Sent from Baton to Computer

The baton sends trigger and position information to the computer encoded as key-pressure MIDI commands, as follows:

Information                               MIDI Command (3 bytes)
trigger from stick 1 and whack strength   A0  1  WHACK
trigger from stick 2 and whack strength   A0  2  WHACK
trigger from B14+ button                  A0  3  0
trigger from B15+ button                  A0  3  1
down trigger from B14- foot switch        A0  3  2
up trigger from B14- foot switch          A0  3  3
down trigger from B15- foot switch        A0  3  4
up trigger from B15- foot switch          A0  3  5
pot 1 current value                       A0  4  POT1
pot 2 current value                       A0  5  POT2
pot 3 current value                       A0  6  POT3
pot 4 current value                       A0  7  POT4
stick 1 x current position                A0  8  X
stick 1 y current position                A0  9  Y
stick 1 z current position                A0  10 Z
stick 2 x current position                A0  11 X
stick 2 y current position                A0  12 Y
stick 2 z current position                A0  13 Z

In the default setting, pot position information is sent 10 times per second and stick position information 50 times per second, so the total data rate for position information is 1020 bytes per second (six stick messages of 3 bytes each at 50 per second, plus four pot messages of 3 bytes each at 10 per second), or about one third of the MIDI channel capacity. This rate can be reduced if desired.

The data rate for trigger bytes will be much smaller than the data rate for position bytes. However, timing is more important for triggers, so the trigger information will be given priority by the baton.

General Structure of Computer Program

The general structure of the computer program is simple: (1) use the position information to keep a set of memory locations updated with the current stick and pot positions, and (2) execute appropriate functions when triggers are received. The program must also have a good clock (a millisecond clock) so it can measure the times at which triggers occur and schedule events to happen in the future.
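
A minimal sketch of that structure in C, using the key-pressure encoding from the table above: incoming bytes are read, position reports update a small state array, and trigger reports are dispatched to a handler with a timestamp. The MIDI input here is a canned demo byte stream and the clock is a stub; neither is the actual Radio-Baton host code.

    /* Sketch of the recommended program structure: position reports keep a
       state array current, triggers are dispatched to a handler, and a
       millisecond clock timestamps them.  The input is a canned demo stream. */

    #include <stdio.h>

    static int position[14];                      /* indexed by data byte 1 */

    static const int demo[] = { 0xA0, 1, 90,      /* stick 1 whack, strength 90 */
                                0xA0, 8, 64,      /* stick 1 x position         */
                                0xA0, 10, 15 };   /* stick 1 z position         */
    static int demo_i = 0;

    static int read_midi_byte(void)               /* stand-in for real MIDI input */
    {
        return demo_i < (int)(sizeof demo / sizeof demo[0]) ? demo[demo_i++] : -1;
    }

    static long now_ms(void) { return 0; }        /* stand-in millisecond clock */

    static void handle_trigger(int id, int value, long t_ms)
    {
        /* ids 1,2 are stick whacks (value = strength); id 3 covers the
           buttons and foot switches listed in the table above */
        printf("trigger %d, value %d, at %ld ms\n", id, value, t_ms);
    }

    int main(void)
    {
        int status;
        while ((status = read_midi_byte()) >= 0) {
            if (status != 0xA0) continue;         /* baton reports use key pressure */
            int d1 = read_midi_byte();            /* which report (see table)       */
            int d2 = read_midi_byte();            /* value                           */
            if (d1 >= 1 && d1 <= 3)
                handle_trigger(d1, d2, now_ms()); /* triggers: act immediately       */
            else if (d1 >= 4 && d1 <= 13)
                position[d1] = d2;                /* pots (4-7), sticks (8-13)       */
        }
        printf("stick 1 x = %d, z = %d\n", position[8], position[10]);
        return 0;
    }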

Commands from Computer-to-Baton

The computer will need to send the following commands to the baton:

Function                          Command from         Response from
                                  Computer-to-Baton    Baton-to-Computer
test baton and MIDI ok            A0 14 0              A0 14 0
turn on position reporting        A0 14 1              1020 bytes/sec data
turn off position reporting       A0 14 2              none
set stick levels                  A0 14 3              none
set center stick 1                A0 14 4              none
set center stick 2                A0 14 5              none
increase z sensitivity            A0 14 6              none
decrease z sensitivity            A0 14 7              none
increase x-y sensitivity          A0 14 8              none
decrease x-y sensitivity          A0 14 9              none
increase position interval 5 ms   A0 14 10             none
report value in buf[j]            A0 15 j              12 bytes encoding value in buf[j]

Originally, I put the Conductor Program in the Radio-Baton because I doubted that the computers of that era would be fast enough to play a complex score in real-time. Such doubts are no longer appropriate. Tests with the new Baton showed that even a 486 processor running at 30MHz could handle the most complex score in the present Conductor Program repertoire - Beethoven's 5th symphony.

The Radio Baton Progress Report (May 1996)

Max V. Mathews

MIDI Hardware

During the last year, a new design has been completed which involves a flexible antenna and a separate electronics box with an LCD display.

Previous hardware had the receiving antennas located under the upper surface of a large "pizza" box which also contained the electronics. Ergonomic considerations required that this box be about 2 feet square--a size large enough to be inconvenient to pack and transport and expensive to manufacture. In the new hardware the electronics are in a much smaller box, about 10 inches square, and the antenna is connected to the electronics via an unpluggable cable. In addition, the antenna is flexible and can be rolled up for storage and shipping.

The breakthrough which made a flexible antenna possible was the use of aluminized mylar for the electrodes. This material is inexpensive, available, and tough. The electrodes are fabricated from a single sheet which comes completely coated with aluminum. Insulating channels are cut through the aluminum with a hand grinder. As we have already said, baton tracking is done in a very simple and robust way. A transmitter antenna electrode is located in the end of each baton. The strengths of the signals received at the five electrodes are compared.
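
As a rough illustration of the idea only (not the Radio-Baton's actual geometry or calibration), the sketch below in C estimates a position from four corner-electrode signal strengths by comparing left against right and front against back, with the total strength serving as a height cue; the fifth electrode of the real instrument is ignored here.

    /* Sketch only: estimating baton position by comparing received signal
       strengths.  Four corner signals are assumed (front-left, front-right,
       back-left, back-right); the real antenna uses five electrodes and
       careful calibration. */

    #include <stdio.h>

    struct pos { double x, y, z; };

    static struct pos estimate_position(double fl, double fr, double bl, double br)
    {
        double total = fl + fr + bl + br;
        struct pos p;
        p.x = ((fr + br) - (fl + bl)) / total;  /* right minus left, normalized */
        p.y = ((bl + br) - (fl + fr)) / total;  /* back minus front, normalized */
        p.z = total;                            /* total strength grows as the
                                                   baton nears the surface      */
        return p;
    }

    int main(void)
    {
        /* made-up strengths with the baton near the front-right corner */
        struct pos p = estimate_position(0.20, 0.45, 0.10, 0.25);
        printf("x = %.2f  y = %.2f  z = %.2f\n", p.x, p.y, p.z);
        return 0;
    }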

Wires are connected to the electrodes by first attaching small pieces of conductive-adhesive-backed copper tape, manufactured by the 3M company, to the aluminum layer; wires can then be soldered to the copper. This conductive-adhesive-backed copper tape has proven to be a very useful material for many purposes.

In the finished receiver, two sheets of mylar--one for the electrodes, the other for a ground-plane shield--are placed between two sheets of vinyl-covered fabric, and the layers are sewn together around the outside edges with a sewing machine. This fabrication is easy and can be done by any clothing manufacturer or sailmaker.

The electronics box contains the 80C186 embedded processor, which is the main computer in the baton. The knobs and push buttons that were part of the original baton design are also on the electronics box. An LCD display has been added to provide feedback to the performer for various purposes.

Type 1 MIDI Files

Communication with the Radio-Baton is done entirely with MIDI characters. In the conductor program mode, the score of the piece to be played can be loaded into the Radio-Baton from a control computer, encoded as a sequence of MIDI characters.

Last year I described a version of the program that could play MIDI files. This ability has become very important. MIDI files can be prepared by many commercial sequencer programs, and they appear likely to become the "lingua franca" of music representation. Both type 0 and type 1 MIDI files exist. Last year's conductor program could play only type 0. In a type 0 file, all the events in all the channels are sorted into proper time sequence before being recorded in the file. Thus a type 0 file is simple to play with a device like the Radio-Baton. Unfortunately, not many sequencer programs will write type 0 files, and those that do appear to have serious bugs.

Consequently, I revised the conductor program so it can play type 1 files. The revision is non-trivial. Type 1 MIDI files contain multiple tracks. Within each track, events are sorted into their proper time order, but to play the complete piece all tracks must be merged into one single sequence with every event in proper time order. A sorting program to do this task was written; tried on a number of commercial MIDI files, the resulting program has worked without problems.
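
The merge itself amounts to repeatedly taking whichever track's next event has the smallest absolute time. The sketch below shows that idea in C on toy event lists; it is not the conductor program's code, and it ignores delta-time decoding, tempo maps, and running status.

    /* Sketch of merging type 1 MIDI tracks: each track is already sorted by
       time, so repeatedly take the pending event with the smallest absolute
       time.  Toy data structures only. */

    #include <stdio.h>

    struct event { long time; const char *what; };

    #define NTRACKS 2

    static struct event track0[] = { {0, "note on C4"},   {480, "note off C4"}, {0, NULL} };
    static struct event track1[] = { {240, "note on E4"}, {720, "note off E4"}, {0, NULL} };

    int main(void)
    {
        struct event *tracks[NTRACKS] = { track0, track1 };
        int idx[NTRACKS] = { 0, 0 };

        for (;;) {
            int best = -1;
            for (int t = 0; t < NTRACKS; t++) {       /* find earliest pending event */
                if (tracks[t][idx[t]].what == NULL) continue;
                if (best < 0 || tracks[t][idx[t]].time < tracks[best][idx[best]].time)
                    best = t;
            }
            if (best < 0) break;                      /* all tracks exhausted */

            struct event *e = &tracks[best][idx[best]++];
            printf("%6ld  track %d  %s\n", e->time, best, e->what);
        }
        return 0;
    }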

Controller Mode

The Jaffe-Schloss program allows the Radio-Baton to be used as a simple controller. In this mode, the baton sends triggers and information about the motions of the batons to a control computer. The musical interpretation of the baton signals is done in the control computer which then sends MIDI commands to a synthesizer to play the music.

The trigger and position information must be encoded as standard MIDI commands because all communication with the baton is via MIDI. The original Jaffe-Schloss protocols were somewhat prodigal in their use of MIDI commands and channels, so they have been revised: all communication is now done with aftertouch and control-change commands sent on MIDI channel 16, a choice least likely to preempt normal MIDI uses of these commands.

Environment for Compositional Algorithms in the C Language

The Jaffe-Schloss mode was originally made to interface with the MAX language running on a Macintosh computer. During the winter quarter, Chris Chafe and I gave a course focused on writing compositional algorithms in the C language. To simplify the students' task, we provided a pattern program in which the students could simply add code to a set of null functions to create their own algorithms.

We also provided utility functions that they could call, and a clock register that is automatically incremented to give time in milliseconds.
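
The skeleton below suggests, in C, what such a pattern program might look like; the callback names, the clock variable, and the utility function are illustrative stand-ins rather than the actual pattern.c interface.

    /* Illustrative skeleton in the spirit of the course's pattern program:
       the student fills in initially empty callbacks, calls provided
       utilities, and reads a millisecond clock.  All names are stand-ins. */

    #include <stdio.h>

    static long clock_ms;                      /* incremented automatically in the
                                                  real environment; faked below    */

    static void midi_note(int pitch, int vel)  /* stand-in utility function */
    {
        printf("%6ld ms: note %d vel %d\n", clock_ms, pitch, vel);
    }

    /* --- functions the student fills in ---------------------------------- */

    static void on_stick1_trigger(int whack)   /* called when stick 1 is struck */
    {
        midi_note(60 + whack % 12, whack);     /* e.g. map whack strength to pitch */
    }

    static void on_stick2_trigger(int whack)   /* left as a null function */
    {
        (void)whack;
    }

    /* --- driver simulating two baton hits --------------------------------- */

    int main(void)
    {
        clock_ms = 100;  on_stick1_trigger(90);
        clock_ms = 600;  on_stick2_trigger(45);
        return 0;
    }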

Sea Songs

Dexter Morrill wrote a set of three songs which were performed by Maureen Chowning, who sang and played the Radio-Baton in a new way. The baton was used not as a synthesizer controller but rather to control an effects processor which processed Maureen's voice. Typical effects were reverberation, pitch shifting, and harmonization. Program changes to select the type of effect and control changes to vary a parameter within an effect (for example, reverberation time) were mapped onto movements of the batons.

Conclusions

The pattern.c environment seems congenial for writing compositional algorithms in which the music is controlled by a combination of computer programs, random numbers generated in the computer, and motions of the batons. The C language is a very general way to write such programs. The students were able to master the parts of C that they needed much more quickly than we had expected.

The compositional algorithm environment contrasts strongly with the Conductor Program environment. Each environment has advantages and limitations. I hope the next step will be a system in which Conductor Program scores can be easily combined with compositional algorithms.

