Since the late 1960s, much of the compositional work at CCRMA has involved a software environment that evolved from the Music V program, originally developed at Bell Labs by Max Mathews and his research group. The hardware and software have changed and improved greatly over the decades. Ported to a PDP10, Music V became the Mus10 music compiler system and played scores composed in Leland Smith's SCORE language. The compiler was replaced in 1977 with dedicated synthesis hardware in the form of the Systems Concepts Digital Synthesizer (built by Peter Samson and known as the ``Samson Box''). The Samson Box supported many synthesis techniques, including additive synthesis, frequency modulation, digital filtering, and some analysis-based synthesis methods. The PLA language, written by Bill Schottstaedt, allowed composers to specify parametric data for the Samson Box as well as for other sound processing procedures on the PDP10 mainframe (and on its eventual replacement, a Foonly F4). On April 3, 1992, the Foonly and Samson Box were officially retired. CCRMA has since transitioned to a network of workstations (Intel-based PCs, SGIs, and NeXTs) running the Linux, Irix, and NEXTSTEP operating systems. The functionality of PLA now exists in the form of Common Music (CM), written in Common Lisp by Rick Taube, a software package that can write scores by listing parameters and their values, or by creating algorithms which then automatically determine any number of the parameters' values. CM can write scores in several different syntaxes (currently CLM, CMN, Music Kit, MIDI, Csound, and Paul Lansky's real-time mixing program, RT). The scores can then be rendered on workstations using any of the target synthesis programs. For example, CLM (Common Lisp Music, written by Bill Schottstaedt) is a widely used and fast software synthesis and signal processing package that can run in real time on fast workstations.
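The idea of a score as a list of parameters that an algorithm can fill in is sketched below. This is a toy illustration in Python, not actual Common Music syntax; the event fields and the particular random choices are invented for the example:

```python
import random

def make_score(n_notes, seed=0):
    """Generate a list of (start, duration, pitch, amplitude) events.

    Mimics the Common Music idea of letting an algorithm determine
    parameter values; the event fields here are invented for the example.
    """
    rng = random.Random(seed)
    score, start = [], 0.0
    for _ in range(n_notes):
        dur = rng.choice([0.25, 0.5, 1.0])      # rhythm drawn from a fixed set
        pitch = 60 + rng.randrange(12)          # MIDI pitch within one octave
        amp = round(rng.uniform(0.2, 0.9), 2)   # amplitude chosen at random
        score.append((start, dur, pitch, amp))
        start += dur                            # events placed end to end
    return score

events = make_score(8)
```

A real CM score would then be written out in one of the target syntaxes (CLM, MIDI, Csound, etc.) for rendering.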
Continuity has been maintained over the entire era. For example, scores created on the PDP10 or Samson Box have been recomputed in the Linux and NEXTSTEP computing environments, taking advantage of their increased audio precision. To summarize all these names for CCRMA's composing environment: the synthesis instrument languages have been, in chronological order, Mus10, SAMBOX, and CLM/MusicKit, and the composing language succession has been SCORE, PLA, and Common Music. Other computers and software are also used for composition. Several composers have realized pieces which make extensive use of MIDI equipment. Readily available commercial software for manipulating digital audio has brought renewed interest in real-time control and computer-based ``musique concrète.'' The programming environments being used for composition and developmental research include MAX, Patchwork, Smalltalk, Common Lisp, STK, C/C++, and Pd.
Since CCRMA's beginning, works composed there have been highlighted at music festivals, concerts, and competitions around the world. Compositions realized at CCRMA have been performed at nearly every International Computer Music Conference; at the Society for Electro-Acoustic Music in the United States (SEAMUS); at the Bourges Festival of Electroacoustic Music in France; at ISCM World Music Days; at the Warsaw Autumn Festival in Poland; at the Computers and Music Conference in Mexico City; at the Primera Muestra Internacional de Musica Electroacustica in Puerto Rico; and in concerts throughout the world. Compositions from CCRMA have also won major electroacoustic music prizes over the years, including the NEWCOMP contest in Massachusetts, the Irino Prize for Chamber Music in Japan, the Ars Electronica in Austria, and the Noroit Prize in France. Works composed at CCRMA have been recorded on compact discs by Mobile Fidelity, Wergo, Harmonia Mundi, Centaur, Allegro, and others. With Wergo/Schott, CCRMA publishes Computer Music Currents, a series of 14 CDs containing computer music by international composers. Computer Music @ CCRMA, volumes one and two, represent music production by twelve composers working at CCRMA during the period 1992 to 1996.
Recent compositional works realized at CCRMA include the following:
Commissioned by the Hudson Valley Chamber Music Circle.
Commissioned by Stanford University's Bio-X program.
Performed at Stanford, New London and New York.
The Language of Pilots explores a variety of different textures available from a minimal percussion setup (snare drum and hi-hat). One such texture is defined by rapidly switching the snares on and off. Another is signalled by glissandi, produced by pressing and dragging a stick against the drumhead. A third emphasizes continual timbral variations with the hi-hat. Each texture (there are ten in all) was composed independently of the others - derived from its own set of rhythmic principles, and evolving according to its own strategy.
In the finished work, these different textures are overlaid, and often appear simultaneously. In some instances the various rhythms and timbres interlock into a polyphony; in more dense situations, the different layers interfere with and deform one another. The music oscillates between simplicity and complexity as layers switch in and out, moving back and forth between a single texture and a multiplicity of different types. Structural markers help to orient the listener to the passage of time - a very slow polyrhythm is played on the bells of the two hi-hat cymbals, and a fixed sequence of timbres (with frequent rhythmic unisons between cymbals and drum) is repeated and expanded over the course of the work.
The Language of Pilots is dedicated to Chris Froh.
The Labyrinth software project reflects my growing interest in structuring and designing improvisations. In my first such work, Maxwell's Demon, a conductor establishes the durations within which the members of a quintet improvise freely. In Labyrinth, the design is social, rather than temporal. The operator of the Labyrinth software structures - and sometimes short-circuits - the communications among an ensemble of improvisers, guiding the performance of the group without controlling it.
Labyrinth was developed as an application for Chris Chafe's systems for the near-real-time transmission of digital audio over high-speed networks. The performers are arrayed in a variety of different physical locations (from isolated rooms in a single building, to different continents), and listen to one another as they play into the network via microphones. The Labyrinth operator is at the conceptual center of the network, processing and distorting the various performances, and controlling whose performances are heard at each location. This hub-and-spokes arrangement makes unusual ensemble relationships possible. For instance, the operator might establish a game of "telephone," where the first performer is listening to a second, the second hears only a third, and the third musician is responding to the first, with each of the performers defining "listening" and "responding" in their own way. Because each location has its own unique mix, there is no single "global performance," but rather a variety of related musical events.
The Labyrinth software is designed in response to a provocation from David Tudor, who wrote: "I want to find ways of discovering something you don't know at the time that you improvise.... [One way] is to play an instrument over which you have no control, or less control than usual." Just as the musicians relinquish some control over ensemble relationships, the operator steers the Labyrinth software without commanding it; a variety of random and algorithmic processes make moment-to-moment decisions about the processing and mixing. The system and its operator become responsible and equal partners in the improvisation.
Cy Twombly's Hero and Leander, a triptych of abstract paintings, encapsulates an experience of time in a fixed work. The surging gestures of the first panel suggest the drowning of the eponymous couple, while the increasingly muted palette and textural effects of the second and third panels evoke the calming waters of the Hellespont as the tragedy concludes.
Twombly's work expresses a dynamic event through a static medium. This piece responds by inverting the painter's logic, seeking to create an experience of immobilized time in the dynamic medium of music. A small set of materials, including howling feedback and noisy, purposefully degraded recordings, are sculpted by complex dynamic changes, then cut into fragments and separated by neutralizing silences. Different patterns of sounding and silence are imposed on each type of gesture, resulting in both densely overlapping and extremely spare textures. These procedures create tensions between stasis and motion, and encourage the listener to hear the different materials from multiple perspectives, as though looking at a painting from several angles.
Hero and Leander is dedicated to Charles Boone and Josefa Vaughan.
Shuttle, for viola solo, is the last work in a sequence of three solo pieces; each of these pieces serves not only as a solo, but also as one part of a string trio titled Weave. In the trio, the viola serves as a mediator between the fiercely independent violin and cello parts, drawing them together over the course of the work.
Played alone, the viola solo still conjures and refers to the larger trio. As the work opens the viola oscillates between the polar attractions of the shifting, angular lines which refer to the violin's music, and the rotating, repetitive elements which characterize the cello. Gradually, the viola braids these disparate features into a unified whole, discovering and constructing its own broadly flowing music as a result of the encounter with these contrasting foils.
The ensemble is involved in a tightly conducted quintet whose fifth member is a through-composed computer part. All micro- and macro-time constructs follow a plan of simple contrasts based on sensitivity to initial conditions, worked out over the 12-minute duration. The 2000 beats in the piece trace extremes of tempo, with rapid accel/decel shifts, and are held together in performance by an animated conductor on the DVD (created by Greg Niemeyer). The computer sounds are derived from Organum textures, re-purposed in this work in a beat-pattern/scratching method. Players can rehearse their parts in isolation with copies of the DVD.
Organum is a computer graphics animated film which establishes a symbiotic relationship between its synthetic images and its soundtrack. Composer Chris Chafe uses data from the digital images to generate the sounds made by the characters' voices and by their movements through the world. Likewise, the animators rely on sound data to generate movement and to add nuance and expression to it. In this way, the film avoids the traditional cinematic privileging of image over sound, and insists instead on their mutual effect and interpenetration. Organum explores what it means to be alive by introducing us to a world inhabited by flying organic and mechanical lungs that cannot see, but use sound to communicate and navigate. Although these creatures may seem visually alien to us, they remind us that knowledge is not reducible to visual or quantitative systems of knowing, but must be understood as a fully embodied world-sense. Organum bends the horizontal relationship between viewer and screen, offering instead a new axis of vision that swings and spins like a gyroscope, so that suddenly we find ourselves face to face with our own visceral bodies, which encounter the world through a constant exchange of air and breath and waves of sound. In addition to producing the film for a conventional cinema screen, Greg Niemeyer, Chafe, Christine Liu, and Lorenzo Wang will premiere a version of the film in the University of New Mexico's 180-degree dome theater. Furthermore, in conjunction with the film projects, the team has been working to develop the universe and characters of Organum into a computer game in which players interact and progress by learning how to use their characters' voices.
Paititi is a multidisciplinary project encompassing historical research, software development, and artistic creation. The main objective of this project is the elaboration of a work involving literary, visual, and sonic elements, inspired by and documented in the historical records of the legend of El Dorado.
The temporal frame for this piece is the Spanish-Inca war (1532-1572). The geographical region includes the paths taken by the first Spanish explorers and some of the important Inca cities, such as Cuzco, Quito, and Cajamarca. The visual material includes reproductions of historical documents from the Archivo General de Indias (AGI) in Seville and original footage taken at the sites of the explorations. The sonic material features recordings made on site and oral reports by people from the region. The literary material encompasses documents written by the conquistadores, oral reports by aboriginals, and original prose inspired by historical facts.
The composition will be created by means of digital processing and synthesis of environmental sounds modelled after the collected material. The format of the piece will be an installation space combining video, still digital images, and multichannel sound. Two types of software will be developed for this work: sound synthesis and multichannel spatialization, and interactive controllers for triggering images and sounds in the installation space.
Fernando Lopez Lezcano
iICEsCcRrEeAaMm is a beta, er.. I mean alpha version of a new multichannel tape piece I'm still working on. As in the software world, Marketing informs me that in future versions bugs will be squashed and new features will be added for the benefit of all listeners. iscream refers to the origin of most of the concrete sound materials used in the piece. Screams and various other utterances from all of Chris Chafe's kids were digitally recorded in all their chilling and quite upsetting beauty. They were later digitally fed into the "grani" sample grinder, a granular synthesis instrument developed by the composer. ICECREAM refers to the reward the kids (and myself) got after the screaming studio session. The piece was composed in the digital domain using Bill Schottstaedt's Common Lisp Music. Many software instruments and quite a few other samples of real-world sounds made their way into the bitstream.
Feather Rollerball is a live piece for piano, Radio Baton, and Scanned Synthesis. It was composed using CM and Max Mathews' Scanned Synthesis program in the Linux environment.
This is a multichannel piece using expression modeling with a physical model of the bowed string, rendered out of real time. The advantage of rendered sound is that it yields musical gestures which are unique yet retain the timbral characteristics of bowed string instruments. The piece was composed using CLM in the Linux environment.
Pipe Dream is the second in a set of compositions exploring subtle wind instrument overblowing effects. In this work, all sounds are generated using real-time computer-based saxophone-like physical modeling algorithms implemented with the Synthesis ToolKit in C++. The algorithms are performed with a new MIDI wind controller called The Pipe. The controller makes use of a variety of sensors, including buttons, potentiometers, and accelerometers, which together respond to breath pressure, finger pressure, and tilt. Spatialization effects in a four-channel sound environment are created through various panning strategies.
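One common panning strategy for a four-channel layout is equal-power panning between adjacent speakers. The sketch below illustrates the general technique in Python; the speaker angles and channel ordering are assumptions for the example, not a description of the algorithms actually used in Pipe Dream:

```python
import math

def quad_pan(azimuth_deg):
    """Equal-power pan across four speakers assumed at 45, 135, 225, 315 degrees.

    Returns four gains (channel ordering here is an assumption). At any
    azimuth only the two adjacent speakers are active, with cosine/sine
    gains that keep the total power constant.
    """
    speakers = [45.0, 135.0, 225.0, 315.0]  # assumed square layout
    az = azimuth_deg % 360.0
    gains = [0.0, 0.0, 0.0, 0.0]
    for i in range(4):
        a, b = speakers[i], speakers[(i + 1) % 4]
        span = (b - a) % 360.0            # angular width of this speaker pair
        offset = (az - a) % 360.0         # position of the source within the pair
        if offset <= span:
            frac = offset / span          # 0 at speaker a, 1 at speaker b
            gains[i] = math.cos(frac * math.pi / 2)
            gains[(i + 1) % 4] = math.sin(frac * math.pi / 2)
            break
    return gains
```

Because the two active gains are cos and sin of the same angle, the sum of squared gains is always 1, so a source keeps constant perceived loudness as it moves around the listener.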
10five1 was created during the composition of my larger work, portfoliosis. I took all the works in my portfolio to date (encompassing a seven year time span) and spliced them end-to-end in chronological order. The resulting file was then reduced into smaller 'shortenings' by a granulation routine. In 10five1, these shortenings of my portfolio are presented in ten seconds, five seconds, and one second. In creating 10five1, I was curious to see if macro-trends were evident in my combined work over time, and if these trends would become more noticeable when reduced to a few seconds in length. Shortenings of various lengths, including excerpts of a one minute shortening, are also included in portfoliosis.
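The reduction described above can be sketched as simple granulation-based time compression. This is a toy model; the grain sizes, windowing, and overlap of the actual routine are not specified in the text:

```python
def shorten(samples, target_len, grain=100):
    """Time-compress a sample list to roughly target_len samples by keeping
    evenly spaced grains drawn from across the whole file, in order.

    A toy version of granulation-based 'shortening'; a real granulator
    would window and overlap the grains to avoid clicks.
    """
    n_grains = max(1, target_len // grain)
    span = max(0, len(samples) - grain)   # range of possible grain onsets
    out = []
    for i in range(n_grains):
        # grain onsets spread evenly over the source, preserving chronology
        start = span * i // (n_grains - 1) if n_grains > 1 else 0
        out.extend(samples[start:start + grain])
    return out
```

Applied to a seven-year splice of pieces, a routine like this preserves chronological order while collapsing the material to a few seconds, which is what lets macro-trends surface in the shortenings.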
This work is based on an interview conducted with my wife. In it, she comments on the various works in my portfolio, as well as her feelings toward the sound and culture of electro-acoustic music. All sounds and textures in the work are either derived from samples of the interview or samples of works discussed in the interview. Included in the piece are 'shortenings' of my entire portfolio (see 10five1 notes above for an explanation of the shortening process). portfoliosis was composed using SuperCollider and mixed into eight channels with Csound.
© Copyright 2005 CCRMA, Stanford University. All rights reserved.