Interactive Electronics in Computer Music
and Chris Chafe's "Push-Pull" composition


by John A. Maurer IV
June 4, 1999

Introduction

"...the recent search for real-time, interactive systems and 'intelligent' software is an effort to humanize technology, to obscure the boundary between humans and machines" (Schwartz, 1993).

As a Music, Science, and Technology (MST) graduate of Stanford University, I have been greatly influenced and inspired by the computer art music and music technology taught and developed at Stanford's Center for Computer Research in Music and Acoustics (CCRMA). CCRMA is one of the pioneering institutions in the field of computer music composition; its roots reach back to the mid-1960s and the work of John Chowning, whose success later inspired the Paris-based Institut de Recherche et Coordination Acoustique/Musique (IRCAM), the world-renowned computer music center established by Pierre Boulez in the 1970s. The relatively recent Computer Music @ CCRMA CDs (1997), Volumes 1 and 2, give a good idea of the kinds of music, sounds, and ideas that I am familiar with and motivated by.

As a recording-studio assistant at CCRMA during my senior year, I received an assignment from my advisor and CCRMA's director, Chris Chafe, to act as sound technician for a performance of his interactive computer music piece, Push-Pull (Chafe, 2001), by San Francisco-based cellist Robert Sayre. Working on this piece was my first introduction to interactive electronics, also known as live electronics. My interests in free improvisation (see essay) and chance music, coupled with my study of computer music, draw me toward combining these two worlds: using the computer and electronic devices as real-time tools that can themselves act in interestingly improvisatory and spontaneous ways.

The term "real-time" means that the computer and electronic devices "collect data, compute with it, and use the results to control a process as it happens" (Webster's dictionary; italics added), rather than carrying out these steps (data collection, computation, and control) at separate times before a performance. The computer can therefore actually sense and react to the music that the human performer is playing (or to other inputs), and its output gains a flexibility that music pre-compiled onto tape for playback during a performance cannot have.
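To make this collect-compute-control cycle concrete, here is a minimal sketch in Python (an illustration of my own; the read_pitch function and synth object are hypothetical stand-ins for whatever sensing and sound-producing hardware a given system provides):

    def performance_loop(read_pitch, synth):
        while True:
            pitch = read_pitch()    # 1. collect data from the performer
            response = pitch + 7    # 2. compute with it (here, answer a fifth above)
            synth.play(response)    # 3. use the result to control a process as it happens

In a tape piece, by contrast, all three steps are finished before the concert begins, and the loudspeakers simply replay the stored result.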

Before delving into the topic of interactive electronics, however, I will first back up a little and provide a brief history of electronic and computer music in general so as to introduce the terminology and background that are relevant to the field.


History of Electronic/Computer Music

The earliest form of electronic music is musique concrète, which originates from a series of brief études by Pierre Schaeffer (b. 1910), broadcast as a "Concert of Noises" over French radio in Paris in 1948 (Schwartz, 1993). In musique concrète (literally, "concrete music"), a term coined by Schaeffer himself, "the raw materials consist of recorded musical tones or other natural sounds that are transformed in various ways by mechanical and electronic means and then assembled on tape to be played back" (Grout, 1996). Its techniques are still common in electronic music today, though they are now performed on the computer rather than with the large and tedious playback devices of the earlier era.
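Two of the most characteristic of these tape transformations, reversal and speed change, are easy to sketch digitally. The following Python fragment (my own illustration, using a synthesized tone as a stand-in for a recorded sound) shows both:

    import numpy as np

    sample_rate = 44100
    t = np.arange(sample_rate) / sample_rate
    recording = np.sin(2 * np.pi * 330.0 * t)   # stand-in for a recorded sound

    reversed_sound = recording[::-1]            # tape reversal: play the sound backwards

    speed = 2.0                                 # double speed, which also raises pitch an octave
    indices = (np.arange(int(len(recording) / speed)) * speed).astype(int)
    sped_up = recording[indices]                # crude resampling, like running the tape faster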

The other style/technique of electronic music, which developed shortly after musique concrète, uses various electronic and mechanical means to build sounds from scratch, as opposed to manipulating pre-recorded ones, and thus came to be known as elektronische Musik: "The primary source for these sounds were sine wave generators, or oscillators, and there was much exploration of a process known as additive synthesis, whereby sine waves of different frequencies are combined to generate a pitch with particular overtones, and thus a particular timbre. (A pure sine wave itself, or rather the tone it produces through a loudspeaker, has no overtones.)" (Schwartz, 1993). While musique concrète had its founding in Paris, elektronische Musik was founded in Cologne, Germany, in 1951 under the development of composer Herbert Eimert (1897-1972). Karlheinz Stockhausen joined the Cologne studio in 1953, however, and he is usually regarded as the first major pioneer of purely electronic music; his first significant work in the medium, Studie II (1954), is also the first electronic composition ever to be formally notated (Schwartz, 1993). Many new techniques of sound synthesis have developed over the years, and the field remains an area of active and interesting research.
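Additive synthesis itself can be sketched in a few lines of Python (my own illustration; the fundamental frequency and the decaying partial amplitudes are arbitrary values chosen to produce a plausible timbre):

    import numpy as np

    sample_rate = 44100
    duration = 2.0
    t = np.arange(int(sample_rate * duration)) / sample_rate

    fundamental = 220.0                    # A3
    amplitudes = [1.0, 0.5, 0.25, 0.125]   # one amplitude per partial (overtone)

    # sum sine waves at integer multiples of the fundamental
    tone = sum(a * np.sin(2 * np.pi * fundamental * (k + 1) * t)
               for k, a in enumerate(amplitudes))
    tone /= np.max(np.abs(tone))           # normalize to the range [-1, 1]

Changing the list of amplitudes changes the overtone balance, and with it the timbre of the resulting pitch.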

Many composers, of course, soon began to combine the techniques of musique concrète with those of elektronische Musik; a pioneering example is Edgard Varèse's (1883-1965) Poème électronique (1958), a collection of assorted mechanical, vocal, and percussive samples ("samples" meaning pre-recorded sounds) and synthetic bleeps and swooshes. (This piece was performed at the Brussels World Fair, projected by 425 loudspeakers arranged in the interior space of architect Le Corbusier's pavilion and accompanied by moving colored lights and projected images.) It is interesting and revealing, in light of this eclectic mixture, that Varèse referred to his own work as "organized sound" rather than music; he once stated, "Don't call me a composer. Call me an engineer of rhythms, resonances, and timbres" (Schwartz, 1993). Composers have continued to combine the worlds of "concrete" and synthetic sounds in various fashions to the present day.

In sum, the new medium of electronic music "encouraged listeners to accept sounds not produced by voices or musical instruments" and also "freed composers from all dependence on performers and empowered them to exercise complete, unmediated control over the sound of their compositions" and to explore ideas impossible to perform (Grout, 1996).

However, tape music ("tape music" being the term for electronic music stored in final form on recording tape for playback) lacks the vitality of a human performance: it is exactly the same every time it is heard, and for this reason it has difficulty justifying its place on concert programs unless it requires some elaborate set-up that cannot be reproduced in the comfort of one's home (Grout, 1996). Thus, for both aesthetic and practical reasons, the combination of electronic music with live human performance developed strongly within the genre. A pioneering example of this combination is Milton Babbitt's (b. 1916) Philomel (1964) for soprano, recorded soprano, and synthesized sound (Palisca, 1996). Another important pioneer of the live-plus-tape situation is Stockhausen, whose Kontakte (1960), for example, combines piano, percussion, and tape music (Schwartz, 1993). The relationship between live performer(s) and electronic sound varies immensely in the field. For example, the electronic part can be concrète, synthetic, or both; if concrète, the raw material may be (a.) various recorded samples of the performing instrument(s), (b.) a sample of another musical piece, or (c.) non-instrumental sources; and the relationship between the electronic and human parts may be well-integrated and even imitative, or the two may stand in some form of contrast, even in competition with each other.


Interactive Electronics

A more recent development in the combination of human performance with electronic music, which brings us back to the immediate topic of this paper (i.e. interactive electronics), is that the electronic part may itself be "live" rather than pre-recorded tape music. This may be as simple as manipulating some effect that is added to the sound of the instrument(s): as in, for example, Stockhausen's Mantra (1970) for two pianos and electronic ring modulators that transform the timbre of the pianos in notated and controllable ways during the performance. Or the "live" electronic part may be as complicated as automatically generating musical materials based on some real-time analysis, usually of what the human performer is playing, as discussed earlier. Often, interactive electronics combine both scenarios: effects added to the sounds of instruments during a performance, and music self-generated by a computer in reaction to the performer (or to other things: e.g. Charles Dodge's Earth's Magnetic Field (1970) bases pitch selection on fluctuations of magnetism in the Earth's outer atmosphere caused by solar winds), output either directly to loudspeakers or via computer-controlled instruments (such as the Yamaha Disklavier MIDI-controlled piano or other such physical instruments).
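Ring modulation, the effect used in Mantra, is simple enough to sketch directly: the instrument's signal is multiplied, sample by sample, by a sine-wave "carrier," producing sum and difference frequencies that transform the original timbre. In this Python illustration of my own, a 440 Hz sine stands in for the live piano signal, and the 170 Hz carrier frequency is an arbitrary choice:

    import numpy as np

    sample_rate = 44100
    t = np.arange(sample_rate) / sample_rate     # one second of audio

    instrument = np.sin(2 * np.pi * 440.0 * t)   # stand-in for a live piano signal
    carrier = np.sin(2 * np.pi * 170.0 * t)      # the modulating sine wave

    ring_modulated = instrument * carrier        # output contains 440 + 170 and 440 - 170 Hz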

Delving briefly into the history of interactive electronic music, one early example is the digital system known as Daisy, developed in the 1970s by Joel Chadabe to control analog sound processors and the spatial location of sounds (i.e. their placement in the sound-field generated by loudspeakers). Chadabe's interest in live performance applications eventually led to his formation of the Intelligent Music Company in the mid-1980s (Schwartz, 1993). In more recent years, the Musical Instrument Digital Interface (MIDI) has become the dominant force in interactive systems, providing a standard for digital, computerized communication with musical instruments. Examples include Dexter Morrill's use of a MIDI trumpet in his Sketches for Invisible Man (1989) to trigger spontaneous electronic events and Gary Nelson's creation of a MIDI horn for Warps of Time (1987) to likewise trigger spontaneous effects and algorithmically generated music output to various synthesizers (Schwartz, 1993). The San Francisco Tape Music Center was also an important pioneering place for interactive electronic music: established in the early 1960s by Morton Subotnick and Ramon Sender as a decidedly "nonacademic undertaking" (it later moved to Mills College in 1966), the center soon became a symbol of "West Coast unbridled experimentalism, dedicated to improvisation, mixed media, and live or 'real-time' use of electronics" and included such composers as Pauline Oliveros and Terry Riley (Schwartz, 1993).

Many of the aforementioned approaches to interactive electronics involve some degree of improvisation (if not total improvisation) on the part of the human performer, and some of the electronic systems created and used by these composers likewise produce drastically varied (read "improvisatory") output from one performance to the next. However, there are also many instances of more structured and controlled approaches to the genre, and even pieces that involve high degrees of improvisation generally have a clear identity, "defined not only by the soloist's predetermined range of activities but also by the composer's preprogrammed [range of responses] to them" (Schwartz, 1993).


Chris Chafe's Push-Pull

Chris Chafe's Push-Pull (on Chafe's Arco Logic CD, Centaur Records, 2001) for celletto and digital electronics is a good example of interactive electronic composition. It was composed in 1995 with the support of a National Endowment for the Arts (NEA) Fellowship. The "celletto," Chafe's own handmade invention, is essentially the cello's version of a keyboard synthesizer: rather than producing sound acoustically, it must be output electrically to loudspeakers through a patch cord. It can also feed its sound through a computer or other digital devices for real-time processing and sound synthesis via MIDI. The piece also involves a piece of equipment known as the "Lightning" remote sensor: a small baton that translates motion within the three-dimensional optic field of a remote-sensor box into MIDI data representing its ongoing position in that field. The Lightning rod is attached either directly to the cello bow or to the performer's bowing wrist and is used to relay the bow's position in the optic field to the computer throughout the performance of the piece.
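The kind of mapping such a sensor performs can be sketched as follows (a Python illustration of my own; the field dimensions and the controller numbers 16 and 17 are hypothetical choices, not the Lightning's actual specification): a position within the sensing field is scaled to the 7-bit values (0-127) that MIDI control-change messages carry.

    def position_to_midi_cc(x, y, field_width=1.0, field_height=1.0):
        cc_x = min(127, int(x / field_width * 128))    # scale position to 0..127
        cc_y = min(127, int(y / field_height * 128))
        # each message: (status byte 0xB0 = control change, controller number, value)
        return [(0xB0, 16, cc_x), (0xB0, 17, cc_y)]

    print(position_to_midi_cc(0.5, 0.25))   # e.g. [(176, 16, 64), (176, 17, 32)]

Streams of such messages, sent continuously, let the computer track the bow's movement in real time.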

Chafe programmed his own software application for the piece on NeXT hardware. It interprets input from both the Zeta cello (i.e. the celletto) and the Lightning, using these inputs to formulate control signals sent to a Korg "Wavestation" sound processor and sample processing unit that (a.) affects the cello's timbral output in various ways, (b.) triggers synthesized sounds from the Wavestation, and (c.) triggers live samples of the soloist's cello playing as well as other samples. The timbral effects employed in dynamic ways throughout the piece are amplitude modulation, panning of the various sounds among left, right, and center locations in the stereo field, and reverberation. Sine waves are output from the Wavestation in various controlled ways, providing a purely synthetic component to the piece's output at some points (i.e. elektronische Musik), while samples of flute sounds are similarly controlled through the Wavestation at other points (i.e. musique concrète). The Wavestation also stores ongoing tables of recorded samples of what the performer is playing on the cello and uses these concrète elements as well at various points in the piece.
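Two of these effects, amplitude modulation and stereo panning, can be sketched briefly in Python (my own illustration, applied to a synthesized tone standing in for the celletto signal; the modulation rate and pan position are arbitrary values):

    import numpy as np

    sample_rate = 44100
    t = np.arange(sample_rate) / sample_rate
    cello = np.sin(2 * np.pi * 220.0 * t)           # stand-in for the celletto signal

    # amplitude modulation: a slow sine wave varies the loudness at 6 Hz
    am = (1.0 + 0.5 * np.sin(2 * np.pi * 6.0 * t)) * cello

    # equal-power panning: 0.0 = hard left, 0.5 = center, 1.0 = hard right
    pan = 0.25
    left = np.cos(pan * np.pi / 2) * am
    right = np.sin(pan * np.pi / 2) * am
    stereo = np.stack([left, right], axis=1)        # two-channel output

Sweeping the pan value over time would move the sound through the stereo field, as happens dynamically in the piece.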

The rhythms, durations, loudness levels, and pitches/transpositions at which the samples and sine tones are output (i.e. the musical score by which these different sounds are employed by the computer) are algorithmically self-generated by the program in real time (see essay on algorithmic composition). Specifically, the program uses an ongoing non-linear dynamic system, expressed as an iterated map equation, whose output is then used to control and determine the various musical parameters throughout the piece (Chafe, 1999). What is most important to note about such a system, one that employs "chaos theory," is that its output is always slightly different and constantly evolving (i.e. "dynamic") in noticeably patterned but unpredictably varying (i.e. "non-linear") ways (see essay on chaos theory and computer music). These dynamically evolving patterns and motifs are used to control the electronic elements of the piece. They are not output, however, unless triggered by the cellist, and the motifs themselves depend on the position of the cellist's bow (as relayed to the computer via the Lightning rod).
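Chafe's particular equation is not given here, but the flavor of such an iterated-map score generator can be conveyed with the well-known logistic map, x' = r·x·(1 - x), standing in for his system (a Python illustration of my own; the mappings from the map's state to pitch and duration are likewise invented for the example):

    def logistic_notes(x=0.4, r=3.9, n=16):
        """Generate (MIDI pitch, duration) pairs from successive map states."""
        notes = []
        for _ in range(n):
            x = r * x * (1.0 - x)          # iterate the map; r = 3.9 gives chaotic behavior
            pitch = 48 + int(x * 24)       # map the state onto MIDI pitches C3..C5
            duration = 0.125 + x * 0.5     # and onto durations in seconds
            notes.append((pitch, duration))
        return notes

    print(logistic_notes())

Each run from the same starting value produces the same sequence, but a tiny change in the initial state (here, where the performer's trigger lands) yields a noticeably different yet similarly patterned stream of notes, which is exactly the quality described above.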

These, then, are the ingredients and techniques used in Push-Pull to enact an ongoing interactive tug-of-war (i.e. "pushing" and "pulling") between the human cellist and the various "live," real-time, computer-generated electronic effects and algorithmically derived musical materials.

The human performer is provided with a score, but improvisation is encouraged and certain inventions are required: the magnitude of these inventions can be large, as in free improvisation, or modest, as in a chamber music part, depending on the performer's preference (Chafe, 1999). From one performance of Push-Pull to the next, furthermore, the output of the electronic parts is clearly similar in character, though never exactly the same. What occurs electronically follows a fixed sequence of processes, giving the piece a definite and recognizable form even though the product of each of these separate processes differs somewhat from one performance to the next.

The tight interaction between cello and live electronics in Push-Pull can be compared, I think, to Pierre Boulez's Répons, a landmark composition in the field of interactive electronics realized at IRCAM in the 1980s. Electronic transformations of the soloists' sounds include time- and pitch-shifting, modulation (of each other), and rapid spatial movement, similar to the effects used in Push-Pull. The idea of "response" conveyed by the title is central to the piece and similar, too, to the meaning implied by "push-pull": the 24-member ensemble and the various soloists respond to each other, and the soloists and the computer system do likewise (Schwartz, 1993).


Conclusion

In conclusion, the field of interactive electronics is still somewhat in its infancy, and much research is being pursued into the creation of more "intelligent" and increasingly "human-like" responses from computers and electronic devices. Chafe's use of non-linear dynamic equations (i.e. chaos equations derived over the last forty years from science's observations of non-linearities in the natural world) to produce musical patterns with unpredictable yet natural-sounding variation is one approach currently pursued by many computer music composers to achieve such life-like responses (see essay on chaos theory and computer music). Another trend to watch for in the future is "a growing reliance on artificial intelligence, which will become increasingly important in the real-time interface between artist and machine, as composers look for more sophisticated, 'intelligent' responses from computers. A current leader in this trend is composer/performer Tod Machover" (Schwartz, 1993).

~end~


References




©1999, john a. maurer iv