The following composition entries were part of past versions of the CCRMA Overview and are being maintained here for historical purposes. This list is not comprehensive.
Both are tape pieces using spectral modeling (SMS), sampling and granular synthesis in CLM and CM's Lisp environment. Piece of Mind was awarded "Premio Sao Paulo '95", Brazil; recorded on a CD released by the II SBCM and CRCA (UCSD) in 1995.
All blue ... was composed in 1996 for four-channel tape. The title was drawn from the writings of Walter Smetak (composer, instrument-builder, cellist and writer), to whose memory the piece is dedicated. The piece is about sound transformation as a metaphor for the transformation of consciousness. Metallic percussion sounds are ever-present, while original cello sounds are broken into their rawest components. The basic cuisine for the piece was set up from these spices, and the dish is to be served hot. The cello has its identity transformed: its defining harmonic series is turned inharmonic, sounding closer to the metallic percussion. The pitches from this now bent, inharmonic series are used as the framework for a melodic-timbral game (the ``blue pencil on a blue sky'') played by cello and percussion. The cello transformations were obtained with SMSplus, a CLM system built on top of Xavier Serra's Spectral Modeling Synthesis and developed by the composer. A procedure for modeling the physical properties of a room via feedback delay networks was employed (``Ball within a Box,'' developed by Italian researcher Davide Rocchesso at CCRMA, with additional enhancements by the composer). All blue... won the 1997 ``Premio Sao Paulo'' at the 2nd International Electroacoustic Music Competition of Sao Paulo, Brazil, and is available on volume one of the Computer Music @ CCRMA CD.
Monologue for Two for flute and clarinet (1993, revised and scored in 1997) is an investigation into unusual 'everyday life' facts. According to it, there is one day when you can't recognize an intimate friend, or you may suddenly realize you've become intimate with a most hostile enemy. In the same way, a dialogue can turn into a monologue while still involving two players. The basic pitch materials in Monologue for Two were generated by computer programs in Daniel Oppenheim's Dmix environment for composition. The piece is dedicated to the memory of composer Ernst Widmer, who was quite aware of those everyday life 'compositional' facts. The piece is part of a cycle which also includes Dialogue in One, for piano. Monologue for Two received its first performance on Nov. 13th at the CCRMA 1997 Fall Concert in Campbell Recital Hall, Stanford University. The piece was performed by Karen Bergquist (flute) and Larry London (clarinet).
Girltalk is about my infatuation with children; what they think, and how they perceive our modern world. Children provide a fascinatingly uninhibited view, quite outside my adult reference, so I am forced to see things in a new light. This is the first of a series of related compositions using material from the world of children. This music was created algorithmically, using CM and CLM in a Linux environment (though the sound-sculpting was done on a Macintosh) during the summer of 1997.
Live interactive improvisatory piece by Nick Porcaro and David Rhoades, where the performers improvise in a harmolodic manner over several phrases of jazz-based material. Running on several NeXT computers, the SynthBuilder application is used as a real-time effects processor and Physical Modeling (PM) synthesizer. Both the performers and a sound engineer have control over the effects. alt.music.out are: Emily Bezar - Processed vocals; Roberto DeHaven - Processed drums, saxophone; Scott Levine - Effects processing; Nick Porcaro - Processed grand piano and PM piano; David Rhoades - Processed saxophone; Pat Scandalis - PM electric guitar, electric guitar; Tim Stilson - Effects mixing. Premiered at the CCRMA Annual Summer concert, July 1995.
Commissioned by the Calliope duo, tangents originates from my research in sonification. Methods similar to the ones we used to sonify stock market data were used to construct the tape part. The title refers to the nature of the interaction between the three elements of the piece.
This single-movement piece for 15 musicians is related to some electronic studies/sketches I did in 1999. The piece narrates a process of clarification in both the frequency and time domains.
Des Silences, Des Nuits is an ongoing project for baritone Nicholas Isherwood and 4-channel or stereo tape. Based on selections from the writings of Rimbaud, the first of these songs was premiered at CCRMA in May of 2002.
The three movements of this piece were conceived as windows into a larger, single movement quartet taken, as it were, from the beginning, the middle and towards the end of that imaginary quartet. In the first movement each player presents a different musical idea with little or no regard to the others. In the second and third movements these materials begin to interact and intermingle. Thus the tension between the player and the group, which I believe is inherent to the string quartet, provides the conceptual foundation for this piece.
This is a revised and expanded version of a piece I started writing in 1996. The first version was performed in Jerusalem and the current version is scheduled for concert at Stanford, Feb. 15 1999.
Performed at Stanford, New London, and New York.
Composed for the Daniel Pearl memorial music day. Performed by the St. Lawrence String Quartet. Adopted by the Pilobolus Dance Company.
Commissioned by Livia Sohn. Performances include Atlanta, Washington DC, Mexico.
Performed in New York, Paris, Tel Aviv, Stanford.
Commissioned by the St. Lawrence String Quartet. Over 60 performances throughout the world.
Premiered at the United Nations General Assembly, January 24, 2001.
A sound installation and collaboration with Dale Chihuly. Commissioned by the City of Jerusalem and Chihuly Studio for Chihuly in the Light of Jerusalem, 2000. Echoes of Light and Time was heard by over two million visitors and received international attention. The CD adaptation of the work was released by Vitosha/Sony in January 2001.
Commissioned by the Rockefeller Foundation and the Instituto Nacional de Bellas Artes, Mexico. Performances: Centro Nacional de Bellas Artes, Mexico City - April 4, 1998; Callejon del Ruiz Festival, Guanajuato, Mexico - April 7, 1998; Mind Machines and Electronic Culture: The Seventh Biennial Symposium on Arts and Technology, Connecticut College, March 3; Stanford University, May 24, 1999.
Weave is a string trio composed of three independent solos: Weft for violin, Shuttle for viola, and Warp for cello. The violin and cello pursue radically divergent paths, the violin disruptive and mercurial, the cello obstinate and repetitive. The viola, the only part aware of the history and future of the trio's music, moves back and forth between violin and cello materials, gradually braiding the ensemble into a unified whole. Guided by the mediating presence of the viola, the profusion of solos proves to be a conversation between three voices.
The Location of Six Geometric Figures takes its title from a work by Sol LeWitt, in which areas are compulsively measured and subdivided in order to ``locate'' simple geometric forms like squares and triangles. The innocuous result is the product of a labyrinth of obsessively documented calculation; the trappings of order cannot disguise the ultimately irrational nature of the project.
I was inspired by the way LeWitt works out simple schemes to an exhaustive end, and then lets the systems interfere with one another to produce complex results. My work combines several such mutually interfering grids. The durations and subdivisions of the six large sections of the piece, accelerated dramatically, were used to generate the rhythmic details of the work. An independently derived series of bar-lengths, with its own set of repetitions and variations interrupts the flow of time. Instrumental combinations (a solo playing against two independent duos, a single trio) were permuted exhaustively, then re-ordered from ``playing independently'' to ``playing together.'' Behind these overdetermined systems, the abyss is waiting.
Consider the oxymoron of composing for improvisers: I wanted to create a piece as open as possible, so that a talented group of improvisers would be given the opportunity to exercise their skills. At the same time, I wanted to compose: to write a piece with a fixed identity, something recognizable from performance to performance. I hit upon the idea of composing only the durations of the piece. Entrances, exits, and the lengths of phrases are all specified, so precisely that a conductor is necessary. But that's all the score contains - the rest is up to the improvisers, who are more than equal partners in making the music. Maxwell's Demon is dedicated to Matt Ingalls.
Xerox Book, for piano and percussion, is a pendant to my sextet The Location of Six Geometric Figures. In several movements of the duo, extracts from the larger work are molded and twisted through a variety of idiosyncratic transcription techniques. In other movements of Xerox Book, newly composed materials were subjected to similar processes of compression and distortion. In most of the larger movements, there were several generations of transformation before the music reached its final state - just as a sequence of photocopying will gradually distort an image into something new and unrecognizable.
In Bali, a ``gineman'' is the introductory section of a longer work for gamelan gong kebyar, characterized by fast, angular phrases. While Gineman is certainly not a piece of Balinese music, it borrows the stop-and-start rhythms and unexpected outbursts of its namesake. My perceptions of Balinese modality and rhythmic cycle are also important to the piece, although in a more abstract fashion.
The harpsichord, closely associated with the European Baroque, may seem an unusual vehicle for music inspired by Bali. However, the two manuals and flexible tuning of the instrument enabled me to approximate the paired tunings that characterize the shimmering soundworld of the gamelan. Gineman is dedicated to Mary Francis.
Fabrication begins with a series of fragments: isolated elements of trumpet technique, like breathing and tonguing, are presented divorced from ordinary playing. The acoustic study of the trumpet continues with other splinters of material. Natural harmonics are used to produce distortions of pitch and timbre, and the performer creates further acoustic disruptions with mutes, and by singing into the instrument while playing. Eventually a more normal trumpet technique emerges from the shards, and a kind of calm is achieved. If the piece begins by metaphorically constructing the trumpet from the components of technique, it ends with a more literal disassembly.
While Fabrication is obsessed with trumpet acoustics, it is entirely dependent upon electronics. Many of the sounds used in the piece are too quiet to be heard in performance. And so the microphone serves as a microscope, revealing otherwise inaudible sounds. The electronics gradually take on an active role as well - transforming and extending the sound of the trumpet beyond its acoustic limits.
In the 1920s and 30s, New Orleans jazz traveled the world. One of the places where it touched down was Batavia, a region on the outskirts of Jakarta, the capital of Dutch colonial Indonesia. Local jazz bands performed across the region, while 78 records like the Louis Armstrong "Hot Five" and "Hot Seven" sides were broadcast on the radio. The 78 made global musical transmission possible to an extraordinary extent.
Today, the tanjidor and gambang kromong musics of Batavia present a striking fusion: New Orleans jazz played on traditional Indonesian and Chinese instruments. Or is it jazz musicians trying to reproduce the sounds of Javanese and Sundanese gamelans? It's difficult to say.
78 continues this cycle of hybridization, bringing elements from tanjidor into my own musical language in a piece which would fit on the two sides of a 78 record. The tightly woven counterpoint, multiple melodic idioms, and structural cycles I've borrowed from tanjidor are recreated here in very different form. But think of the ensemble as a jazz clarinet, a Chinese fiddle, and a set of tuned percussion, and you'll begin to get the idea.
Questions and Fissures explores the fusion of divergent musical elements. The two loudspeakers present independent voices, kept separate throughout the piece, while the saxophone provides another layer, distinct from the electronic world. Each element pursues its own path of development, corresponding with the others only at the broadest levels of form. In spite of all the ways in which these materials attempt to achieve independence, we hear one piece, and not three - each layer informs and enriches our hearing of the others.
This piece is the second in a series of works which use speech sounds as both timbral material and organizing forces. The electronic component is composed entirely of heavily processed recordings of my speaking voice. While the speech never quite becomes intelligible, it animates the sound and provides rhythmic structure. In turn, the saxophone part is derived from a rhythmic transcription of the spoken text. Like the speech sounds themselves, the transcribed rhythms never appear intact. Instead, I use them as the basis for a series of variations and distortions.
Questions and Fissures is dedicated to Matthew Burtner.
Many of the sounds in Strain are barely audible, the details just beyond reach. Others are noisy, marginal, the kinds of things composers usually work to exclude from their pieces. Perhaps here they find their place.
Strain is based almost entirely upon recorded speech. I chose to camouflage and obscure this material for a number of reasons - not least because I wasn't willing to listen to recordings of my own voice over and over again while working on the piece. If the texts leave only indirect traces of their presence, they animate the music nevertheless, creating the rhythmic structures and sonorities of the composition.
Strain uses its four loudspeakers as a quartet of voices, producing a coherent sense of ensemble. An artificial space is not a goal of the piece, and there is no panning or reverberation of any kind. The loudspeakers are in no way "humanized" through this procedure, but I feel that their material presence becomes an explicit feature of the piece.
Escuela is the second in a series of piano pieces which somehow refer to places where I've lived - in this case, my first home in California, on Escuela Avenue. The piece is (almost inevitably) bound up in my early experiences as a graduate student, thereby enriching the double meaning of the title.
In Escuela, a computer is employed to modify the sound of the piano in real time. The performer controls the software from the piano keyboard, applying ring modulations which precisely reflect the pitch structure of the original piano music. The result is a kind of mirroring - at a microscopic level, the electronics describe the piano's music in the way that they alter its sound.
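The mirroring described above can be sketched with a toy ring modulator (a hypothetical illustration, not the piece's actual software): multiplying the piano signal by a sinusoid produces sidebands at the sum and difference of the two frequencies, so a modulator tuned to the piano's own pitch material stamps that pitch structure onto the transformed sound.

```python
import math

SR = 44100  # sample rate in Hz

def ring_modulate(signal, freq, sr=SR):
    """Multiply a signal by a sinusoid: the classic ring modulator."""
    return [s * math.sin(2 * math.pi * freq * n / sr)
            for n, s in enumerate(signal)]

# A 440 Hz "piano" tone ring-modulated at 261.63 Hz (middle C) yields
# sidebands at the sum and difference frequencies: 701.63 and 178.37 Hz.
tone = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]
modulated = ring_modulate(tone, 261.63)
```

Choosing the modulator frequency from the notated pitches, rather than arbitrarily, is what makes the electronics "describe the piano's music in the way that they alter its sound."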
Calyx is the most recent in a series of pieces exploring the possibilities of a set of filters connected in an infinite feedback loop. No initial stimulus to the filters is necessary - the internal noise of the system, amplified endlessly via the loop, is sufficient to produce a rich palette of sounds.
The sonic results of this process are entirely context-dependent; the musical possibilities at each moment are limited by the state of the loop at the previous moment. As the filter settings are changed in real time, the sounds produced by the loop can be shaped into musical forms. Calyx is a recording of such a real-time "performance," controlled by computer and refined over a period of months.
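The self-oscillation principle can be sketched in a few lines, under assumptions of my own (a single two-pole resonator standing in for the filter set, and a tanh limiter standing in for whatever keeps the real loop bounded):

```python
import math, random

def self_oscillating_filter(freq, sr=44100, r=1.1, n_samples=2205):
    """A two-pole resonator pushed just past stability (pole radius r > 1)
    inside a feedback loop.  No input signal is needed: a tiny 'internal
    noise' term is amplified endlessly by the loop, and the tanh limiter
    shapes the result into a sustained oscillation near `freq`."""
    w = 2 * math.pi * freq / sr
    a1, a2 = 2 * r * math.cos(w), -r * r
    y1 = y2 = 0.0
    out = []
    for _ in range(n_samples):
        noise = random.uniform(-1e-6, 1e-6)        # internal noise only
        y = math.tanh(a1 * y1 + a2 * y2 + noise)   # resonate, then limit
        y1, y2 = y, y1
        out.append(y)
    return out

random.seed(1)
tone = self_oscillating_filter(440.0)
```

The output starts at near-silence and grows into a stable oscillation, which matches the context-dependence described above: what the loop can do at any moment is determined by its own prior state.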
C. Matthew Burtner
Ukiuq Tulugaq, a multimedia electroacoustic work for instrumental ensemble, surround-sound electronics, dance and movement art, video projection and theatrical staging, was performed on March 28, 2003 at the University of Virginia in Charlottesville. The piece is based on ecological and anthropological studies of the Arctic. In Winter Raven, natural forces such as sun, wind and ice take on dramatic musical personas in a dream-like sequence of staged movements. The composition metaphorically connects an Inuit creation story, in which the World is created by Raven (Tulugaq) from snow, with the seasonal approach of winter.
S-Morphe-S explores the coupling of a disembodied soprano saxophone with the virtual body of a singing bowl. The saxophone signal is used as an impulse to the physically modeled bowl. The result is a hybrid instrument with the articulatory characteristics of a soprano saxophone but the body of a singing bowl. The saxophone uses varied articulations such as key clicks, breath, trills and sustained tones. The shape and material properties of the bowl are varied in real time, creating a continuously metamorphosing body. In Greek, morphe means ``form,'' and in Greek mythology Morpheus was the god of sleep, of disembodied forms. The English word commonly used for a transformation between two objects is morph, a shortening of metamorphosis, derived from the Greek. The title of this piece is meant to evoke all of these meanings - dreamed images, transformative bodies, and disembodied forms. S-Morphe-S was first performed in Pitea, Sweden in 2002.
(dis)Appearances, a trio for amplified acoustic violin, electric violin, and computer violin/multicontroller, explores the nature of disembodiment and physical acoustic reality through the use of computer controllers and physical modeling synthesis. The piece is scored for a string trio in which the ensemble is not defined by register (as with a traditional string trio) but by states of embodiment/disembodiment. (dis)Appearances was first performed in Venice, Italy in 2003.
The form of Polyrhythmicana is generated by macro-level geometric rhythmic relationships arising from the interplay between the individual instrumental lines. In order for the performers to follow the constantly changing tempi (which are both independent of and closely related to one another), a computer program was created that generates independent multichannel click tracks under one global clock. The piece is in five movements, each with a different rhythmic organization: I: Metal YX; II: Split/Joined Diamonds (in Wood); III: C Acceleration Phase; IV: Slow 2:3 (in Noise); V: Melody Triangles. Polyrhythmicana was first performed in San Diego by Ensemble Noise, who commissioned the work.
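Such a generator can be sketched as follows (a hypothetical reconstruction, not the composer's actual program): each track lays out its own tempo against a shared clock measured in seconds, so the independent tempi remain coordinated at the macro level.

```python
def multitrack_clicks(tempi_bpm, duration):
    """Generate one click track per tempo, all measured against the same
    global clock in seconds.  Returns one list of onset times per player."""
    tracks = []
    for bpm in tempi_bpm:
        beat = 60.0 / bpm                  # seconds per beat at this tempo
        n = int(duration / beat) + 1       # clicks that fit in `duration`
        tracks.append([i * beat for i in range(n)])
    return tracks

# Three players at 60, 90, and 120 BPM over four seconds of global time:
tracks = multitrack_clicks([60, 90, 120], 4.0)
```

Because every onset is an absolute time on the one global clock, the tracks can be rendered to separate output channels and will stay aligned however the tempi diverge.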
Snowprints (2002) for flute, cello, piano and electronics explores snow both metaphorically and sonically. Snow relates to bodies through the analogy of ``impressions'' or ``prints''. In snow, prints of bodies are captured and transformed by wind and changing temperature. The wind leaves impressions in the form of drifts; changing light creates shadow prints on its surface; and animals seeking shelter also leave fading tracks.
In Alaska, photographs, video and recordings of snow were gathered. Many different movements were recorded in varying snow conditions. The sounds were then mixed into the electronic part, combined with three digital prints of the acoustic trio. The digital prints were created from a Scanned Synthesis string (by Max Mathews), a physically modeled flute (controlled by a Theremin in Miller Puckette's Pd), and a granular synthesis piano. Macintosh and Linux computers were used to create the piece. The orchestration of the composition is thus an acoustic trio of flute, cello and piano, and a digital trio of flute, cello and piano. The expressive noisy sounds of the snow bind the sonic world, creating a background environment for the instrumental/digital prints. The video uses images of the snow prints and the city lights of Anchorage.
Snowprints was commissioned by Trio Ascolto with support from the German Ministerium of Culture, Heidelberg. It was first performed in Munich, Germany in 2003.
Somata/Asomata (2002) for electric string quartet and computer-generated sound deals with notions of embodiment and disembodiment, questioning the musical perception of physical and non-physical reality. This is accomplished in the musical context through the use of electric and computer-generated instruments.
Somata/Asomata extends my work in compositions such as Animus/Anima for voice and electronics, Snowprints for flute, cello, piano and electronics, and S-Trance-S for computer metasaxophone. In these pieces, the performed instruments are treated as ``real'' or embodied states and the electronics are used to extend the notion of corporeality by transforming performative qualities of the instruments through sound synthesis. In Somata/Asomata, the acoustic instruments are not really acoustic but are actually electric instruments, played exactly as their acoustic counterparts by the performers. The sound is electroacoustic, and originates from speakers rather than from the bodies of the instruments. The computer-generated sound also originates from the speakers and this creates a dialectic between instrument as body and instrument as sound synthesis.
Somata/Asomata was commissioned by Musik i Nordland for the MiN Quartet. It was first performed at the ILIOS Festival in Norway in 2003.
Commissioned by Haleh Abghari. First performed at Stanford.
For Stefania Serafin, this piece explores new expressive possibilities arising from instrument controller substitution (Burtner, Serafin 2001). First performed at Mills.
The electric saxophone is an ongoing part of the metasaxophone project involving microphones embedded inside the saxophone, distortion, and chaotic feedback systems. First performed at CCRMA and CNMAT with Earsay.
A new CD, ``Portals of Distortion: Music for Saxophones, Computers and Stones'' (Innova 526), was released in February 1999. The Wire calls it ``some of the most eerily effective electroacoustic music I've heard;'' 20th Century Music says ``There is a horror and beauty in this music that is most impressive;'' The Computer Music Journal writes ``Burtner's command of extended sounds of the saxophone is virtuosic ... His sax playing blew me away;'' and the Outsight Review says ``Chilling music created by an alchemy of modern technology and primitive musical sources such as stones ... Inspired by the fearsome Alaskan wilderness, Burtner's creations are another example of inventive American composition.''
Joanne D. Carey
Adventures on a Theme for flute and radio-baton is a flute concerto whose synthesized orchestra includes singing voices, marimba, percussion and guitar as well as strings, woodwinds and brass, often in unusual combinations. Although there is no story-line associated with it, the music seems to tell a story in its winding and wayward lyricism. The middle movement, ``Topsy Turvy and Haywire'', which is entirely improvised, presents another view of the protagonist, the theme. From a compositional standpoint, techniques of variation are explored in each movement, all of which are based on the same theme. The improvised middle movement is based on original programs by the composer which explore ways of varying a pre-composed melody in real time with the batons. Ideally, the radio-baton and flutist would improvise together. This was realized in a recent performance at the Palo Alto Cultural Center in December 1998, as part of a NACUSA concert. The piece was premiered in San Diego on October 10, 1997 at the Fourth Annual International New Music Festival, sponsored by the University of San Diego. At its premiere, the radio-batonist performed a solo improvisation.
The last interactive piece of a trilogy for soprano and radio-baton. Gracias, as well as its companions La Soledad (1992) and Aqui (1993), was inspired and influenced by Spanish Flamenco and indigenous South American music, and the later poetry of Chilean poet Pablo Neruda. The spirituality and humanity of this great poet continues to impress the composer deeply. In the process of blending Neruda's poetry with the rhythms, flourishes and instrumental sound of these Spanish and South American musical traditions, Joanne Carey drew mainly from strains of solitary meditation and deep sorrow buoyed by irrepressible exuberance and hope. The scores of the electronic accompaniments were created on a Macintosh IIfx using the DMIX composition program developed by Daniel Oppenheim. The sound material for these songs was generated on a Yamaha SY77. Most of the voices are presets, with the exception of the bell sounds and a couple of hybrid sounds that were constructed by the composer, and a ``sliding sigh'' sound developed by Dr. Oppenheim.
The composition has been widely performed: San Jose Chamber Music Society, 1995; SEAMUS conference in Ithaca, New York, 1995; IBM Research Center, Yorktown, New York, 1995; International New Music Festival, San Diego, 1995; University of Maryland, Demo concert with Max Mathews, 1995; Radford, Virginia, Demo concert with Max Mathews, 1995; Peabody Conservatory, Baltimore, Maryland, Demo concert with Max Mathews, 1995; National Association of Teachers of Singing, Winter Vocal Symposium, 1996.
Created by composer and researcher Chris Chafe and digital artist Greg Niemeyer, Ping is a site-specific sound installation that is an outgrowth of audio networking research at Stanford University's Center for Computer Research in Music and Acoustics and interactive and graphic design experiments originating from the Stanford University Digital Art Center. Ping is a sonic adaptation of a network tool commonly used for timing data transmission over the Internet. As installed in the outdoor atrium of SFMOMA, Ping functions as a sonar-like detector whose echoes sound out the paths traversed by data flowing on the Internet. At any given moment, several sites are concurrently active, and the tones that are heard in Ping make audible the time lag that occurs while moving information from one site to another between networked computers.
Within the Ping environment, one can navigate through the network soundscape while overlooking San Francisco, a cityscape itself linked by the same networks that constitute the medium. Visitors to the installation can expand or change the list of available sites as well as influence the types of sound produced, choosing different projections of the instruments, musical scales, and speaker configurations in the surround-sound environment.
Current explorations pertaining to sound synthesis and Internet engineering are the foundation of the Ping installation. The research that led to this installation is, however, just one part of a larger effort to investigate the usefulness of audio for internetworking and, reciprocally, ways in which the Internet can abet audio.
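One way to illustrate the sonification at the heart of Ping (a hypothetical sketch with parameters of my own choosing, not the installation's actual software) is a function that quantizes a measured round-trip time onto a musical scale, so that the lag between two networked hosts becomes a pitch:

```python
def latency_to_pitch(rtt_ms, scale=(0, 2, 4, 7, 9), base_midi=48, span_ms=500.0):
    """Map a network round-trip time onto a pentatonic scale: fast pings
    sound high, slow pings low, making the transmission lag itself audible."""
    # Clamp to [0, span_ms] and invert: 0 ms -> top of range, span_ms -> bottom.
    frac = 1.0 - min(max(rtt_ms, 0.0), span_ms) / span_ms
    steps = int(frac * 14)                      # quantize onto 15 scale steps
    octave, degree = divmod(steps, len(scale))
    midi = base_midi + 12 * octave + scale[degree]
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)  # MIDI note number -> Hz
```

In the real installation the timing data comes from live probes of remote hosts; here the measurement is simply taken as an input, and the scale, range, and register are illustrative choices.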
Transect, 99% pure synthesis, is another study in creating ``chamber music'' using current technology. Ingredients of lines, articulations and phrasing were created by playing the synthesizer with a synthetic player whose bow arm loosely mimics the physics of the real thing. A bowed string and a throat were combined for the instrument. A year in the mountains of Alberta and California, and the mid-life interests of the composer, figure into the story line, which, like the title, is a section traversed.
Violist Ben Simon wondered what it would feel like to be wired into the same computer rig that I developed for my celletto piece, Push Pull. He is the first violist to be so inclined, and I took that as his consent to be subjected to further devious experimental situations, from which the first version took shape. The result is an antiphonal setting, in which his two violas (one of them electronic) are paired with musical settings from the electronics. The overall setup is a sort of solo version of Ornette Coleman's Prime Time, a duo of quartets in which an acoustic group trades off with an electronic one.
The positions of the two violas are tracked by infrared to give the soloist control over events generated on-the-fly by the computer. In creating these materials, I wanted to establish a feeling of vorticity and the approach to vorticity. Hence the name, which incidentally refers to the first real-time digital computer (from the 1940s).
A second version for saxophone solo has been played by Maria Luzardo (Arg.) and Katrina Suwalewski (Den.). Its setup and form are closely related to the earlier version.
With Fred Malouf for electric guitar, tenor saxophone, celletto and computers. Premiered ICMC Thessaloniki, Greece, 27 Sep 1997. Also performed in Germany (1998) and U.S. (1999).
With Scott Walton for celletto, feedback guitar physical model, disklavier and computers. Premiered U.C. San Diego, 3 April 1997.
For celletto and live electronics. Performed in France, Germany, Argentina, China, U.S. The celletto is the cellist's answer to all the fun keyboard players have been having lately with live computer synthesis. Push Pull is a setting for an "augmented player" where the computer almost becomes a part of the performer. Instrumental gestures are amplified musically and launch off into a life of their own. The soloist sows some of the seeds of what happens and can enter into dialogue with the musical textures that evolve. The work is a study for a new piece for singer and computer sponsored by a grant from the National Endowment for the Arts.
The software system for Push Pull has been applied in two small ensemble settings, both partly improvisational.
I. Elegy in Flight - for solo violin
II. Moksa - for 12 vocalists and 4-channel tape
III. Spiritus Intus Alit - for solo bass and live electronics
Requiem Moksa is dedicated to the victims of the 9/21 earthquake in Taiwan. It comprises three movements, each with its own distinct instrumentation. There is no pause between the movements. The use of three distinct languages in the second movement is intended not only to present intricate timbral combinations and various sound images, but also to convey a global compassion for the subject matter.
Elegy in Flight starts with a statement of a 59-note set, which is derived from a 59-syllable Buddhist mantra used in recitation for the dead. The set subsequently expands itself through the multiplication of its own intervals and is then compressed in register. This expansion/compression process is stated six times over the course of the piece with variations of speed and emphasis. The six journeys through this material denote the Buddhist ``wheel of life,'' or the six realms of existence chosen by the dead in their next incarnation (based upon their karmic activity). The other material in the piece is a quasi-chant melody that acts as an insertion, deliberately distancing itself from the turning of the wheel and presenting an alternative to this process: the ceasing of time and the presentation of an entirely different space.
Spiritus Intus Alit, meaning ``the spirit nourishes within,'' serves as the postlude of this requiem. It speaks to the depth of the spiritual and philosophical struggle between faith in the afterlife and the finality of death. The two textual fragments are drawn from Virgil's Aeneid, depicting a dialogue between Aeneas and his deceased father. One underscores the faith in rebirth through the process of metempsychosis, while the other reinforces the finality of death. This also results in two sets of musical materials which alternate throughout the piece. A profoundly beautiful portion of Virgil's text - translated as ``With full hands, give me lilies. Let me scatter these purple flowers; with these gifts, at least, be generous to my descendant's spirit, and complete this service, although it be useless'' - deeply expresses the sorrowful loss felt by the living and the unalterable finality of death. Toward the end, this element leads to the parting of the two worlds, sung with a distinctive diphonic technique.
The 4-channel tape was realized in the CLM (Common Lisp Music) environment using ATS (Analysis Transformation Synthesis), SMS (Spectral Modeling Synthesis), dlocsig (multi-channel spatialization), and granular synthesis.
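The granular synthesis mentioned above was done with CLM's Lisp instruments; purely as an illustration of the technique (not the composer's actual code), a minimal Python sketch might look like this, with all parameter values invented:

```python
import math

def grain(duration, freq, sample_rate=44100):
    """Synthesize one sinusoidal grain shaped by a Hann envelope."""
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate)
            * 0.5 * (1 - math.cos(2 * math.pi * i / n))
            for i in range(n)]

def granulate(grains, onsets, total, sample_rate=44100):
    """Mix grains into an output buffer at the given onset times (seconds)."""
    out = [0.0] * int(total * sample_rate)
    for g, onset in zip(grains, onsets):
        start = int(onset * sample_rate)
        for i, s in enumerate(g):
            if start + i < len(out):
                out[start + i] += s
    return out

# A tiny "cloud" of three overlapping grains at different pitches and times.
cloud = granulate([grain(0.05, 440), grain(0.05, 660), grain(0.05, 880)],
                  [0.0, 0.02, 0.04], total=0.1)
```

In practice grains are drawn from recorded source material rather than synthesized sinusoids, and thousands of them are scattered stochastically to form textures.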
Departure Tracings is the second in a series of works dedicated to the memory of my father. Each work in the series utilizes the pitches C and G# as points of departure and/or arrival (these two pitches come from my father's initials). Departure Tracings was premiered by EARPLAY on May 1, 2000. The Cal Ear Unit Ensemble gave it a wonderful presentation in October 2000 and February 2001.
The Captured Shadow pursues a theatrical aspect of live electronic music. Inspired by the fiction of Fitzgerald, the piece experiments with the representation of literal meanings in music, such as "betrayal" and "emptiness." The work utilizes speech-like materials and the pitch flexibility of the soprano trombone to present a vague story-telling voice. This narrator, though often obscure, creates a context for the musical representation of literary ideas. I am indebted to Chris Burns for his help in every aspect of this work.
Elegy is the third in a series of works dedicated to the memory of my father. Each work in the series utilizes the pitches C and G-sharp as points of departure and/or arrival (these two pitches come from my father's initials).
This piece for Chris Chafe's special instrument, the celletto, is concerned with the purification of tone. C and G-sharp are highlighted, but they are treated as anchors in a larger pitch world that expands around them. Elegy could be viewed as a complement to my third study for two pianos, which is another work in this series.
Elegy was premiered by Chris Chafe on his celletto at the CCRMA-CNMAT exchange concert in April 2000, and was presented at the Seoul Electronic Music Festival in Korea in November 2000.
Composing a series of studies for two pianos has been in my compositional plans for some time. The idea is to employ serial manipulation of pitch, rhythm, dynamics, and timbre, together with new piano techniques, to achieve less predictable results.
Study I explores the idea of two contrasting entities: long and loud notes (foreground) against short and soft ones (background). Midway through the piece, the two roles seem to exchange. (A 54-note series governs the piece's pitch material, and a sequence of odd numbers, 1, 3, 5, 7, 9, 11 and 13, determines the number of rapid notes in each successive phrase.)
Study II presents accented notes in extremely fast ascending scales between the 2 pianos and a slow descent.
Study III, while the third in this series, also belongs to a series of pieces dedicated to the memory of my father. As in all these dedicatory compositions, the pitches G-sharp and C (his initials) are highlighted.
Delay lines, acting as a ``counterattack,'' begin by echoing only the strong notes played by the clarinet (detected with an amplitude follower) but gradually take over the performance from the clarinet over the course of five stages. The delay lines employ varying delay times, feedback amounts, detection thresholds, and pitch shifts. The clarinet sound is processed in real time in the Max/MSP environment.
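The process described here was built in Max/MSP; as a loose illustration only (not the composer's patch), an amplitude-follower-gated feedback delay can be sketched in Python. The delay length, feedback amount, follower coefficient, and threshold below are all invented:

```python
def envelope_follow(signal, coeff=0.99):
    """One-pole amplitude follower: track the rectified signal level."""
    env, out = 0.0, []
    for s in signal:
        env = coeff * env + (1 - coeff) * abs(s)
        out.append(env)
    return out

def gated_delay(signal, delay, feedback, threshold):
    """Echo only samples whose followed amplitude exceeds the threshold,
    feeding a fraction of the delay line's output back into itself."""
    env = envelope_follow(signal)
    line = [0.0] * delay           # circular buffer of `delay` samples
    out = []
    for i, s in enumerate(signal):
        delayed = line[i % delay]              # sample written `delay` ago
        inject = s if env[i] > threshold else 0.0
        line[i % delay] = inject + feedback * delayed
        out.append(s + delayed)
    return out
```

Only the strong notes feed the delay line, so soft playing passes through dry while loud attacks leave recirculating echoes; raising the feedback over successive stages lets the echoes ``take over'' from the live player.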
String Quartet No. 2, inspired by the relationship between soloists and accompaniment in Chinese Opera, explores the idea of two contrasting gestures: a long-sustained note against short, "shattered" figures. The long note is held almost throughout the piece while these shattering sounds try to break up the texture. Additionally, a great deal of "sul ponticello" and harmonics are employed to simulate the high-frequency, nasal singing of the soloists.
The pitch A provides a focal point for the piece: it appears in long, sustained gestures and also underpins the harmonic workings of the piece.
In 1999, String Quartet No. 2 won the first prize of the Young Composer Competition in the annual ACL (Asian Composer League) conference, and the first prize in Music Taipei 1999, the most prestigious composition competition in Taiwan.
Soundstates presents and explores the 3 states of matter (gas, liquid and solid) and their transformations into one another. This flowing from one sound state to another forms the basis of the structure of the piece, reflecting a similar process in the spontaneous changes of nature. The piece begins with solid, block-like sounds which gradually disintegrate; it ends with a succession of rising, more atmospheric sounds, with a return to elements of the original material. The coda carries residual traces of preceding elements. The source sounds were mostly drawn from the marimba, played by Randal Leistikow, and were digitally processed in the CLM (Common Lisp Music) environment. Many thanks to Juan Pampin, who helped me in using the CLM instruments, and to Randal for his performance.
Soundstates was premiered at CCRMA in Fall 1998 and was recently performed at the International Computer Music Conference, October 1999.
Youlan, for tape (two or more channels) and synchronized slides (by visual artist Ruth Ecland), was realized at CCRMA between March 1-22, 1997. Youlan, a winding journey of exploration, is a term derived from classic Chinese poetry and music. The word connotes the excitement of discovery, the lure of the unknown, and the elevation of the ordinary to a place of peak experience. The music is the map through this world, providing both context and direction. Samplings of ancient Chinese instruments have been transformed through digital processing and manipulation to create new sound structures that are evocative of their origins. The dynamic range of this piece is widely distributed: beginning with a highly tense drama, the piece slowly quiets down to a spiritually tranquil end after a series of developments of the sound material. The samples of Chinese steel-plate instruments were processed and mixed using Spectral Modeling Synthesis (Xavier Serra and Raman Loureiro), Common Lisp Music (Bill Schottstaedt), and the Real Time Mixer (Paul Lansky and K. Dickey) on a NeXT workstation at CCRMA, Stanford.
A chamber opera in three acts for eight voices, ten instruments, and tape, with a duration of approximately sixty minutes. To be premiered at the Other Minds Festival in November 1996.
The story is based on the famous play but goes beyond the original libretto by delving specifically into the themes of identity and desire. The Ice Princess, who is also the central figure in Puccini's opera, is named Cess and is an idolized underground nightclub performer. Part of her on and off stage attempt to thwart admirers is to offer the challenge of cracking the enigma of who she really is, with the risk that the wrong answer will bring death. A gangster, new to the area, takes up the challenge and through a series of dream-like realizations discovers that Cess is a Hermaphrodite. When he reveals her identity the crowd grows enraged. The mystique of their idol has been disclosed and they retaliate by savagely murdering the gangster.
The piece concerns an unfinished childhood dream in which, with the unlimited imagination of a child, a walk is taken through a colorful and unspoiled world. The piece was composed using DMIX, newly developed software for the Macintosh. It was programmed with deeply nested patterns, a simple idea growing into a complex pattern. While composing, Kui Dong does not think excessively about tools and techniques; instead she listens for what best fits her overall concept for a piece of music, looking for the right color and shape for each sound. Purity and sincerity are the truths that guide her. In Flying Apples she attempted to catch the transparent, brilliant stars falling from infinity.
Performed at Visual Symbols, San Jose; Stanford University; ICMC 1995, Banff, Canada; and LIPM, Buenos Aires, Argentina.
A DMA final project in honor of the International Year of the Ocean (1998) and Yemaya, Mother of the Sea.
Songs of the Sea is a cycle in ten sections with a mutable form, reflecting the flexibility of the water element, and a variable length depending on which of the ten sections are performed. It is the last in a series of aural environments for a poet/photographer's mixed media installation. For this particular project, there are five sections of poetic text which are dramatically read over collages of sampled environmental sounds, algorithmically-generated sections seeded with Indian musical motives and coded with Common Music/Stella, and signal-processed electric guitar improvisation. Interspersed between these sections of performance poetry with recorded backgrounds, each portion of text is also set for soprano over written compositional material based on jazz chord progressions, for synthesizer or piano, electric or acoustic guitar, and celletto or cello.
An interactive piece for Classical Guitar, NeXT (Physically Modeled SynthBuilder Flute), Macintosh (sequencer). Performed at Stanford University, February 4, 1996.
Composed for stereo tape using samples processed in CLM, and note lists generated by Common Music, mixed using Paul Lansky's RT Mixer app. Performed at Stanford University; Thessaloniki, Greece; and Buenos Aires, Argentina, 1995; and Belfast, Northern Ireland, 1996.
Michael Edwards and Marco Trevisani
segmentation fault beta 1.0 is a composition for prepared and digitally processed piano, and computer mixed sound files. It uses software (artimix) written by Michael Edwards to trigger and mix sound files stored on hard disk. With this software, sounds are mapped to the keys of the computer keyboard and triggered at will during the performance. Each sound can also be mapped to a specific MIDI channel so that individual gain control can be applied to each sound in the mix through the use of a MIDI fader box. The computer part therefore consists of triggering prepared sounds and controlling their relative amplitudes. This piece is a collaboration between the two performers (Marco Trevisani, prepared piano, Michael Edwards, computer), both of whom are composers. The sounds used were created by the composers using Common Lisp Music, written by Bill Schottstaedt at Stanford University. They were realised with sample processing and manipulation of sounds from various sources, including piano, prepared piano and cello, as well as through direct synthesis using Frequency Modulation techniques. The piece was ``upgraded'' at the end of 1996 to segmentation fault beta 1.1 and was performed at the Opus 415, No. 2 music festival in San Francisco. A multi-track studio recording was made in the summer of 1997.
Part of an interactive performance environment employing movement, sound and sculptural forms, performed at Stanford's Memorial Auditorium and created at CCRMA; the result of a grant received from a consortium of Stanford arts faculty. Featuring a reading by vocalist and CCRMA-associate Emily Bezar, the sound design focused on the creation of soundscapes through the computer processing of previous readings of a composed text, and the real-time processing of both readings of the same text in performance, and other aspects of the performance environment. An early version of SynthBuilder was an essential element of the final performance configuration.
Cosmos is a composition for electronic synthesizers, radio baton, and computer. Designed for live, solo performance, the radio baton and computer keyboard are used as controllers in conjunction with custom-built software running on an Apple Power Macintosh computer. Using analog, frequency modulation, and sample playback synthesizers, the composition achieves gradual yet dramatic transformations of timbre and intensity which are activated by movements with the radio baton. Cosmos received its premiere at the CCRMA Summer concert in July 1997.
The title refers to a character (or hexagram) from the I Ching (the Chinese Book of Changes), whose meaning concerns the way subtle forces, over a prolonged period, can often have a powerful and penetrating effect. The results, although ``less noticeable than those won by surprise attack, are more enduring and more complete''. Composed in 1991 (but extensively revised in 1995), The Gentle is the first of what became a series of three pieces for 3 female voices, all of which were written for the group Scottish Voices, directed by Graham Hair. Musically, the `subtle forces' at work are the pure sound of female voices and vibraphone, and the repeated phrases (one for each of the hexagram's 6 lines) which gradually transform, one into the next. There are certainly no `surprise attacks', and the music is meant to conjure up a mood appropriate to its title.
The installation piece received its premiere at an outdoor location - the 18th-century formal gardens of Greenbank House in Glasgow, Scotland, April 1995. The 4 soundpieces (with a combined duration of over 60 mins.) were composed during a 3-month period of intensive work from January to April 1995.
Awakening invites the listener on a journey through sound. From a state of initial dormancy to the realisation of some ultimate goal, the four component soundpieces represent stages along the way. Non-realtime additive and FM synthesis, and effects processing of sampled sound (using the software package CLM (Common Lisp Music)), created much of the material for Slumber. Algorithmic methods (using the software package Common Music), in which the parametric values of events (e.g. pitch and amplitude, as well as timbral details) were determined according to their position in a metrical hierarchy, generated the patterns used in First Steps, which triggered sounds on a Yamaha TG-77 synthesizer. Quest makes use of more-or-less untreated but unusual sampled sound sources (e.g. recordings made inside a large empty drinking-water container produced the low percussive sounds, and the rustling of a large piece of hardboard produced the percussive sound employed as a cross-rhythm) and was assembled entirely using a MIDI sequencer and sampler. For Confluence, granulated, time-stretched, and time-compressed water samples were layered to form a slowly evolving sound texture.
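The metrical-hierarchy idea behind First Steps was realized in Common Music, a Lisp system; the following Python sketch only illustrates the principle of deriving event parameters from metrical position. The weights and the pitch/amplitude mappings are invented for the example:

```python
def metrical_weight(position, beats_per_bar=4, subdivisions=4):
    """Weight of an event by its position in a two-level metrical
    hierarchy: bar downbeats strongest, beats next, subdivisions weakest."""
    beat, sub = divmod(position, subdivisions)
    if beat % beats_per_bar == 0 and sub == 0:
        return 3   # bar downbeat
    if sub == 0:
        return 2   # beat
    return 1       # subdivision

def generate(num_events):
    """Map metrical weight to pitch and amplitude (illustrative mapping)."""
    events = []
    for pos in range(num_events):
        w = metrical_weight(pos)
        events.append({"pos": pos,
                       "pitch": 60 - 12 * (w - 1),   # stronger -> lower
                       "amp": 0.3 * w})              # stronger -> louder
    return events

pattern = generate(16)   # one 4/4 bar of sixteenth-note events
```

Each generated event could then be sent as a MIDI note to an external synthesizer, as was done with the TG-77.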
The composition takes its title from a character (or hexagram) in the I Ching (the Chinese Book of Changes). The Well is concerned with the timelessness of existence, and is said to contain its entire history within itself. The Well is the ``unchanging within change''; the constant around which all else is in a state of flux. The piece was commissioned by and composed at the University of Glasgow in 1992, and is the second in a series of three pieces for 3 female voices, all of which were written for the group Scottish Voices, directed by Graham Hair. The tape part consists of the layering of several strands of looped patterns, which eventually form a dense and `watery' texture. Weaving around each other, the voices rise in pitch and intensity, emphatically expressing a single idea, the meaning of which can be felt but not conveyed in words (hence the absence of text). For the tape part, material was generated using algorithmic composition techniques, and the specific loops were arrived at after a careful process of selection and editing. The sounds were largely created on a basic FM synthesizer, the Yamaha TX-81Z.
The piece completes a series of three pieces for 3 female voices, all of which were written for the group Scottish Voices (directed by Graham Hair). Like its companion pieces, The Joyous is based on its eponymous character (hexagram) from the I Ching (the Chinese Book of Changes), whose qualities in this case can be described as inner strength and firmness within, combined with acquiescence and softness without. The piece begins softly with a (hocket-like) pattern spread across the 3 voices, accompanied by (scale-like) figures of changing phrase lengths on the harp. The material is subjected to a variety of transformations, involving pitch, meter, rhythm, and mode, but proves ultimately to be unbreakable as its identity remains intact throughout. This musical journey (process) hopefully depicts in sound some of the attributes embodied by The Joyous.
David A. Jaffe
Carl Sagan challenged and inspired a generation to consider a universe not made for us, to look beyond our planet, and at the same time to recognize its fragility and preciousness. He played a leading role in space exploration, planetary science, the study of the origins of life and the hunt for radio signals from extra-terrestrial civilizations. I attended a series of lectures by Sagan at Cornell University in the early 70s and have been a fan ever since. In Other Worlds, I have tried to paint in sound a vista such as might be seen by the shores of the nitrogen lakes of Triton, freshly covered with methane snow and irradiated into the material of life.
Other Worlds was commissioned by the 1998 International Computer Music Conference and the University of Michigan, and premiered at the conference. Andrew Jennings was the violin soloist; H. Robert Reynolds conducted the University of Michigan symphonic band. The piece was also presented at the 1999 SEAMUS conference in San Jose.
A seventy-minute concerto in seven ten-minute movements for Boie-Mathews Radio Drum-controlled Disklavier.
Instrumentation: Grand piano and ensemble of plucked string and percussion instruments: mandolin, guitar, harp, harpsichord, bass, 2 percussionists, harmonium.
The piece received its world premiere by the San Francisco Contemporary Music Players in February, 1998 at the Yerba Buena Theatre in San Francisco. The San Francisco Chronicle described it as ``a splendidly kaleidoscopic series of sketches, by turns exuberant, contemplative and austere.'' Work on the piece was supported in part by a Collaborative Composer Fellowship from the National Endowment for the Arts and the Banff Centre for the Arts, Canada. The Seven Wonders of the Ancient World was released on CD in 1996 on the Well-Tempered Productions label and was given an A+ rating by Audio magazine.
Christopher Wendell Jones
The title of this piece reflects its structural nature. Matragn is an anagram of the word ``Tangram,'' a Chinese puzzle. The puzzle is solved by arranging seven simple shapes in a multitude of configurations to create new, more complex forms. Like the puzzle, Matragn consists of simple elements which are perpetually reordered and reinvented.
Matragn was written for clarinetist/composer/improviser Matt Ingalls, whose improvisations provided the sonic core for the electronic part. Special thanks also to Chris Burns and Juan Pampin for their technical advice and support.
Instábilis represents an extreme in the spectrum of compositional techniques based on the ecological perspective. The piece proposes a finite sonic space delimited by the resonant characteristics of a single body: a metallic sculpture. As an analogy to the visual objects exhibited in the installation, the sculpture is explored from different angles, creating the timbre space of the piece.
The temporal structure - comparable to a kaleidoscope - is random, modular, and constantly varying. Two stereo sources with similar sonic material reflect the symmetry of the space. Nevertheless, as is the case with environmental sound, the sonic events - with a total duration of twenty minutes - contain no literal repetitions. Given this aleatoric structure, each listener experiences a unique version of the piece.
During 2001, Instábilis has been presented at Espacio Cultural Citibank, Asunción, Paraguay, and Galeria Athos Vulcão, Brasília, Brazil. This piece was realized at the studios of Universidade de Brasília.
The sound material for Dorotéia was organized into three sound classes which served as perceptual axes for structuring the piece: vocal sounds, water drops, and hybrid metal-water-vocal events. Sonic transformations consisted of temporal and spectral transitions between these classes.
The combination of a granular control data structure with constrained sonic databases allowed us to synthesize hybrid events, featuring characteristics of different sound classes. The extraction of events from ambient recordings provided a way to relate the sources with their placement in the space. The spatial layout consisted of three separate stereo tracks distributed in the performance space. Outdoor recordings were utilized for open spaces and reverberation by convolution was employed for enclosed spaces.
During the performance, the sound track consisting of water drops was played continuously, the vocal track established a dialogue with the actress, and the complex events were played at key moments in the piece.
This work was funded by the Brazilian Student Association of Stanford University. Presented on May 3, 4, and 5, 2001, during the Brazilian Week at the Elliot Center, Stanford University, CA.
La Conquista addresses the theme of the Spanish conquest and the dynamics of power relationships between aboriginals and conquerors. This piece was based on the soundtrack 'Oro por Baratijas' [Compact Disc Organised Sound 5(3)].
The sonic composition provided a template for the video editing process. This was done on a Dell computer, using Premiere 5.0. The raw footage was recorded with a Sony camera on Digital 8 format. The four natural elements, earth, water, fire and air, were used as basic sources to create leitmotifs that reflect the forces of nature and the dynamics of social interactions.
La Conquista has been presented in Boulder, Denver, Boston, New York, Buenos Aires and Havana.
Current capitalist society functions on the basis of circulation and accumulation of goods. IQ is also shaped after these two processes. The number and behavior of people in the space define the number and characteristics of the events being triggered. Thus, IQ could be compared to an organism that reacts to human presence. Because the triggering process is based upon the sensors' change of state, IQ is not excited by too many stimuli. In fact, if all sensors are constantly active, no event is triggered. IQ's ``quiet'' state follows an eighty-hour cycle. As long as it is not disturbed, the piece will repeat itself every three days and eight hours.
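The change-of-state triggering and the eighty-hour cycle described above can be sketched abstractly; this is an illustration of the logic, not the installation's actual software:

```python
def triggers(prev_states, new_states):
    """An event fires only for a sensor that changes state; if every
    sensor is already active and stays active, nothing is triggered."""
    return [i for i, (p, n) in enumerate(zip(prev_states, new_states))
            if p != n]

# All sensors constantly active: no change of state, hence no events.
assert triggers([1, 1, 1], [1, 1, 1]) == []
# One visitor leaves sensor 2's field: exactly one event fires.
assert triggers([1, 1, 1], [1, 1, 0]) == [2]

CYCLE_HOURS = 80   # the quiet-state loop: three days and eight hours

def cycle_position(elapsed_hours):
    """Where in the eighty-hour quiet cycle the piece currently is."""
    return elapsed_hours % CYCLE_HOURS
```

Triggering on state changes rather than on raw sensor levels is what makes the piece saturate: constant stimulation produces silence.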
The material in IQ consists of a few sonic grains and video images. Thus, the local elements of the piece are very static, almost immobile. As is the case in human social systems, the most relevant structures take place at a higher level. They are determined by relationships among a large number of elements and by processes which unfold very slowly.
IQ takes shape at the junction between time and space. People's behaviors in IQ's space establish what events occur and how they are distributed through time.
IQ was presented in September 2000 at the Industrial Ear Exhibition, in the Ironworks Gallery, Vancouver, Canada. This work was funded by and realized at the Western Front Artist-Run Center, Vancouver.
Metrophonie, as its title suggests, is loosely inspired by Stockhausen's Mikrophonie. The similarities are: the title, the four-channel format, the transformation of recorded sounds and the subdivision into sections, in other words, all that doesn't matter.
Metrophonie takes place in San Francisco's metro, (aka BART). A woman pays a fare; a drunkard sings; a big man hugs his buddy; and a ``mama works at Brian's tonight." As in San Francisco's street life, money is very important here. But more important is the lack of money.
Some streets in San Francisco are not clean. Some people in these streets are also not very clean. The sounds these people make are rich. That's why it doesn't matter whether they are dirty or clean, whether they are black or white. By the way, what time is the next blackout?
The Urban Corridor was presented from June to August 2000 at the CU Art Galleries, Boulder, within the context of the Electronic Easel exhibition. The installation consisted of a constructed space shaped as a corridor containing lights, motion sensors, two slide projectors, a video projector, and a multichannel sound system. The whole setup was run from a Macintosh PPC computer equipped with two CD-ROMs and an X10 two-way interface.
This installation is a collage of a video projection with sound compositions. The sounds elicit images of the urban environment. There is a soundtrack for the video and several sets of independent sounds which are triggered by the visitors. These sounds are connected to a computer with two CD-ROMs that are randomly activated. Another set of environmental sounds, from a CD player, plays constantly. The video interweaves four 'news' events: an automobile accident, a public demonstration, urban sprawl, and a war. These events are designed to span the audience's perception from a global (i.e. a war) to an individual experience (i.e. a car wreck). The news exposes the process by which real occurrences become a mediated reality. The video utilizes a combination of found and original footage and voice-overs. We create a virtual cityscape with a large video projection and surround sound which envelop the viewer. This is an interactive piece. As the participant enters the room, he/she triggers the video projection and audio. As the participant walks around the room, he/she sets off motion detectors which trigger the sounds.
touch'n'go / toco y me voy is a modular text and music work. Each section is a complete, self-contained piece which shares material and sound references with the other sections. The piece is published as an enhanced CD by earsay productions http://www.earsay.com. The CD includes sounds, text, and the algorithms used to create the piece. The text, in HTML format, is available at http://www.sfu.ca/~dkeller.
Sections of touch'n'go have been played in Vancouver, Bourges, Stanford, and on Zagreb Radio.
Eum-Yang is a composition for Disklavier, sampled and computer-modified violin sounds, and celletto. The Disklavier and violin sounds are controlled by Radio-Baton through the PadMaster program using a NeXT computer. Two digital mixing processors (DMP-11) are also linked to the Radio-Baton to control the quadraphonic sound system.
Eum-Yang, pronounced Yin-Yang in Chinese, is an old oriental philosophy. ``Eum'' means dark and cold, while ``Yang'' means bright and hot. In music, these contrasts and polarities can be expressed in many ways: color of harmony (dark and bright), level of pitch (low and high), level of loudness (soft and loud), and speed of rhythm (fast and slow).
The symbol of Eum-Yang, called Taeguk, is divided into two Yee (Divine Gender), which are in turn divided into four Sang (Divine Phase). The four Sang are divided into eight Kweh (Divine Diagram). Each of the eight Kweh has a meaningful name, and the names form four polar pairs: Sky and Earth, Pond and Mountain, Fire and Water, and Thunder and Wind. The piece contains twelve sections: one for each of the eight Kweh, and one for each of the four pairs, the latter serving as a kind of recapitulation.
Dreaming is written for solo viola and computer-generated tape sound. Its single movement consists of three sections: the first can be described as Dreaming to Actuality; the second is Actuality (viola solo); and the third is a return to Dreaming. The tape sounds derive entirely from an acoustic viola played by Keith Chapin, for whom the piece is written. The sound was processed using CLM (Common Lisp Music) on a NeXT computer.
ZephyrBells is a quadraphonic composition created using CLM (Common Lisp Music), SoundWorks and rt.app on the NeXT computer. The only sound source is a synthetic bell. The basic idea is that of bell sounds heard from afar, carried on zephyr winds.
For two sopranos, percussion and computer-processed sounds on tape, using CLM, SynthBuilder, SoundWorks and RT on a NeXT computer. The piece is performed in the dark, lit by five candles. The idea of the piece is based on reverberant sound effects.
Live stereo sound processing. World premiere: CCRMA 11th Annual Industrial Affiliates Meeting, May 21-23, 1997, Stanford.
Protozoo is a real-time, generative composition that creates a sequential, variative form solely through a system of a few basic audio processing operations. As such, it forms an acoustical analogue to the dynamical systems often found in phenomena like chemical reactions, population growth, or models of processes in ecosystems. The listener is presented with a ``zoo'' of acoustical pre- and near-``life forms'': simplistic yet complex organisms, some of which develop to be more stable than others. Biological concepts like activation, inhibition, growth and death, transformation, digestion, inheritance and evolution come to mind and are helpful for understanding the composition.
Realizations of Protozoo may be either in the form of a sound installation, an interactive instrument (using MIDI controllers), or as an effects processor.
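The dynamical-systems analogy can be made concrete with the logistic map, a standard model of population growth in which some parameter settings settle into stable ``life forms'' while others never do. This illustrates the analogy only, not Protozoo's actual audio process:

```python
def logistic(r, x0, steps):
    """Iterate the logistic map x -> r*x*(1-x), a classic model of
    population growth with activation (r*x) and inhibition (1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

stable = logistic(2.5, 0.1, 100)    # settles to the fixed point 1 - 1/r
chaotic = logistic(3.9, 0.1, 100)   # remains bounded but never settles
```

In Protozoo the analogous feedback acts on audio processing operations rather than on a population variable, but the same dichotomy appears: some configurations stabilize, others stay in perpetual flux.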
See Also: http://ccrma.stanford.edu/~tkunze/mus
This piece was commissioned by NoTAM for the GRM Acousmonium sound system. The material is derived solely from three old native Japanese instruments - flute, metal clock, and drum - and then rigorously processed by custom-made DSP applications written exclusively for the piece.
For computer-generated tape. The piece was commissioned by the Norwegian Contemporary Music Organization and completed after two years of work. It was premiered in Oslo, also performed at Stanford, 1995.
The compositional technique is entirely based on digital signal processing. Several DSP applications in the C-programming language have been written exclusively for this piece, i.e. no commercial application has been used. Downcast serves as a presentation of these programs as well as a demonstration of a rather modern compositional technique which is a spinoff from the idea of using a general computer language (its code) as the musical notation.
The initial audio material for the piece is derived entirely from a recording of a female voice; throughout the piece, this voice is rigorously processed by the computer programs. The original sample, a recording of short laughter, can be heard at the very end of the piece. Complex rhythmical syncopation is a crucial component of the composition. At times there are up to one thousand layers, with the melody line jumping from one layer to another following the pattern of these syncopations. Elements such as dynamics and spatiality are also fundamental to the piece. Reverberant spaces are derived from actual physical rooms in the CCRMA building (everything from the smallest closet to a large auditorium was used for reverb impulse responses). The convolutions of those room responses are combined into layers and used in the style of classical counterpoint. Since the composition is entirely processed and edited in the digital domain (no analog-to-digital converters were used), the sound is exceptionally clean, with a very high dynamic range.
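Convolving a dry sound with a room's measured impulse response, as used above for the reverberant spaces, reduces to a single operation. A direct-form sketch (real impulse responses are many thousands of samples long, and FFT-based convolution would be used in practice):

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: the output is the signal heard
    'through' the room whose impulse response is given."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# A single click through a toy two-tap "room":
click = [1.0]
room = [1.0, 0.0, 0.5]      # direct sound plus one later reflection
wet = convolve(click, room)  # the click acquires the room's reflections
```

Because convolution imprints every echo and resonance of the measured room onto the source, each CCRMA room used yields a distinct reverberant ``voice'' that can then be layered contrapuntally.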
Dante's Inferno is a collaborative work with Fellow Travelers Performance Group, making use of 4 dancers, video images, installations, and electro-acoustic music. It will be performed at the ODC Theater, San Francisco, CA in September 2004.
Throughout history, humans have attempted to decipher what comes after this life through myth and religion, guesses, stories passed down, and rational thought. Ancient myths offer ideas, as do more modern writers, from Hamlet's soliloquy to George Bernard Shaw and Sartre. Fear of death and, through this, fear of evaluating one's life have haunted and inspired people through the ages. Perhaps none took it so far as Dante, who mixed local politics, religion and Aristotelian ethics, created an afterlife, and went for a visit.
Dante's Inferno provides a wide backdrop of history by which to examine life today. The many-layered splendors of hell hold many surprises, including the number of Greek and Roman gods and mythological creatures that parallel the superstitions, rationalizations and allowances made in today's supposedly rational society. Rather than trying to create a stage copy of the Inferno, the collaborators are looking at societal issues of today through the looking glass of the past. Many of the excesses of medieval Florence are the excesses of today: sex, greed, power.
Through the Inferno Project (working title), Fellow Travelers Performance Group's Artistic Directors Ken James and Cynthia Adams (choreographers of dance and performance), Matthew Antaky (scenic, costume and lighting design), Lawrence LaBianca (sculptor) and Seungyon-Seny Lee (electronic music composer) will create a new series of interdisciplinary works. The works are designed to create a sense of enigma, a feeling of peril in which the elements and textures of the experience shift, slide and recombine unpredictably; the collaborators are striving for a synthetic response in which all parts of the environment stimulate full body and mental reactions in the audience.
There are four basic layers to Helix: a monologue on tape, tape sounds that utilize non-verbal voice and instrumental sounds (transformed via computer), voice, and video. In January, February, and March of 2001, I kept a diary of thoughts on unanswerable questions and endless conundrums. The issues of reality and imagination and their relationship to the Self and Other appeared at times. The monologue on the tape is drawn from my diary and incorporates these elements. The other tape sounds explore the world of explication, while the voice part is more implicit in substance. The video image is meant to enhance and redefine the experience of the sonic realm of Helix. Many thanks to Cyrille Brissot, who collaborated on the video images.
13 is a real-time interactive collaboration between the composer Seny Lee and the scientist Jeffrey Walters. Craig Sapp provided invaluable assistance with the hardware interfaces.
13 is one of the composer's imagined numbers - one step beyond the end of a common cycle. Simple additions leading to ``13'' create short episodes of sound. The desired additions are expressed by grasping the plastic panel on the numbered sensors. Vibration sensors are also placed on the panels, and the vibrations caused by the users' interaction produce another layer of sound. The mirrored surfaces project desired and undesired self-images, distorted in ways reflected by the sounds.
Silo Installation, exhibited in a silo container at Ghent, NY, is a real-time interactive work based on the ideas of the 13 installation.
FSR sensors are used to randomly trigger pre-recorded sounds, allowing 48 different combinations of sound; the users' interaction also produces another layer of live-processed sound.
Sang-Yeo-So-Ri roughly means ``bier-carriers' song''. When someone dies in Korea, it is a tradition that men carry a colorfully decorated bier on their shoulders while walking toward the grave. A man leads the singing, accompanied by a hand bell or a drum, and the bier-carriers sing a refrain following the tune. People believe the song keeps the dead safe until they reach Heaven. Through granular synthesis, the tune comes to resemble echoes from the higher world.
In the sense of color: how much blue can be made into the tone of dark grass green with yellow, and how much of the same blue can be made yellowish green with the same yellow? What makes it possible to think in certain ways, and not to think in other ways?
Psychoanalytic issues drove the composer to portray, in the piece Idiosyncrasy, the individual mirror image of Self and Other through fundamental human emotions, which include at least these four: Pleasure, Anger, Lament, and Joy (Hee-Ro-Ae-Rok). The composition draws on three poems in three different languages, both using the essential meaning of the words and liberating their phonetics from the lexical hindrances of a given time and place. Many thanks to Cyrille Brissot, who collaborated on the video images, and to Takayuki Nakano, who allowed me to use part of his poem.
Je est un Autre is a journey of the imagination and an aesthetic evolution of its ingredients. The cycle consists of four pieces, each composed for different media. Electro-acoustic music and visual images are combined in the first piece, dance, video images, installations, and computer generated sound in the second, instrumental sextet and theater in the third, and a sound installation with ceramics in the fourth.
The raw acoustic materials were recorded from sounds in nature, such as running water, shaking rice, and insects. These sounds were electronically processed, creating a broad palette of timbres and sonic textures. The sound transformations in the cycle Je est un Autre are used to develop diverse musical layers, inviting the listener to interpret the music through his own imagination.
Throughout the pieces, imagination, as a metaphor for the unconscious, signifies an existence that struggles within the ego, in memory and in reality, from nil to eternity. The three stages - the Real (need), the Imaginary (demand), and the Symbolic (desire) - that I represent in the pieces come from a notion of Jacques Lacan's.
In Je est un Autre I, a fish tank is placed in front of the video projector. The shadow effect of rippling water delivers images that refer to the unconscious as the foundation of all being. Imagery for the piece includes animation and photographic images, chosen for their symbolic reference to internal psychological states.
Dance in Je est un Autre II continues the exploration of concepts originally presented in Je est un Autre I. The three dancers roughly signify the three phases of becoming an individuated being, as theorized by Jacques Lacan: the Real (need), the Imaginary (demand), and the Symbolic (desire). The choreography was influenced by `ABECEDA' by Karel Teige and Vítezslav Nezval (1926). The gestures of the dancers depict letters of the alphabet, spelling out terminology used by Lacan in French.
The installation is intended to convey aspects of Lacan's linguistic structure of discourse. The three panels used as props by the dancers represent the phases of the linguistic process. Discourse originates in the unconscious (represented by the plastic sheet). The abstract form of the idea filters through the memory (the transparent scrim), and is formulated as language (the newspaper sheet). The imaginary phase of the unconscious is further represented in the piece by projected images. The photographs of the abstract images were chosen for their symbolic references to internal psychological states.
Thanks to Juan Pampin for the use of his software, ATS (spectral Analysis, Transformation, and Synthesis of sounds) and Kotoka Suzuki for editing the video.
Chuk-won is based on Samul nori, a traditional form of Korean percussion music. Samul means ``four things'' in English and nori means ``performing''. The ensemble consists of two skin and two metal instruments. The instruments symbolize earth (skins) and the heavens (metals), and are identified with a constantly changing natural world: the metal instruments represent (1) Spring/lightning/thunder and (2) Summer/wind, while the skin instruments represent (1) Autumn/rain and (2) Winter/clouds. It is said that if people play these four instruments together, the resulting vibrations will harmonize earth and heaven into one universe. Sounds for this piece originate from recordings of skin and metal instruments used in the performance of Samul nori. During the performance, three video projectors display images that combine metaphorically with the music to reflect on the unity of creation. This piece forms the third part of a four-movement composition entitled Chuk-won, which roughly translates as ``invocation''. This movement consists of electronic sounds only; other movements include a percussion quartet as well.
Fernando Lopez Lezcano
``...come, travel with me through the House of Mirrors, the one outside me and the one within. Run, fly, never stop ... never think about being lost in the maze of illusions, or you will be. Glide with me through rooms, doors and corridors, surfing on tides of time, looking for that universe left behind an eternity ago. Listen to the distorted steps, the shimmering vibrations that reflect in the darkness, watch out for the rooms with clocks where time withers and stops ...'' fll.
House of Mirrors is an improvisational tour through a musical form and a four-channel sound environment created by the composer/performer Fernando Lopez-Lezcano. The sounds of doors opening and closing define the transitions between rooms, corridors and open spaces, where soundfile playback and MIDI-controlled synthesis mix to create different atmospheres sharing a common thread of pitches, intensities and timbres. The journey through the House of Mirrors is controlled in real time through an interactive improvisation software package - PadMaster - developed by the composer over the past three years. The Mathews/Boie Radio Drum is the three-dimensional controller that conveys the performer's gestures to PadMaster. The surface of the Radio Drum is split by PadMaster into virtual pads, each one individually programmable to react to baton hits and gestures, each one a small part of the musical puzzle that unravels through the performance. Hits can play soundfiles, notes or phrases, or can create or destroy musical performers. Each active pad is always "listening" to the position of the batons in 3D space and translating the movements (if programmed to do so) into MIDI continuous control messages that are merged with the stream of notes being played. The virtual pads are arranged in sets or scenes that represent sections of the piece. As it unfolds, the behavior of the surface is constantly redefined by the performer as he moves through the predefined scenes. The performance of House of Mirrors oscillates between the rigid world of determinism, as represented by the scores or soundfiles contained in each pad, and the freedom of improvisation the performer/composer has in arranging those tiles of music in time and space.
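The kind of mapping PadMaster performs can be imagined roughly as follows. This is a hypothetical sketch only - PadMaster itself is written in Objective-C with the MusicKit, and all names and grid dimensions here are invented for illustration.

```python
# Hypothetical sketch of a Radio-Drum-style pad grid, in the spirit of
# PadMaster (not its actual code).

def pad_index(x, y, cols=4, rows=4):
    """Map a normalized baton position (x, y in 0..1) to a virtual pad number."""
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return row * cols + col

def height_to_cc(z, cc_number=1):
    """Translate baton height (0..1) into a MIDI continuous controller
    message as a (status, controller, value) tuple on channel 1."""
    value = max(0, min(127, int(z * 127)))
    return (0xB0, cc_number, value)
```

A baton hit would select a pad via `pad_index`, while continuous baton motion above the surface would stream controller values via `height_to_cc`, merged with the notes being played.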
This piece is about impossible dreams. Many times, and without learning from experience, we build beautiful Paper Castles on Invisible Clouds, thinking yet again that dreams are reality, or that they can be turned into reality with sheer will power and a magic wand. These sections are like twin brothers, intermingled yet separate. As for the last section, Electric Eyes: if one has ever felt the startling contact of electric eyes, there is no need for the composer to explain. If one has not, mere words will never be enough. That is the composer's dream, and the cause of a lot of paper castle building...
The piece was composed in the digital domain using the CLM non-real-time sound synthesis and processing environment running on a NeXT; the four-channel spatialization was performed by a special unit generator programmed by the composer. The original sound materials are sampled tubular bells, cowbells, cymbals, gongs, knives and screams, plus quite simple additive synthesis instruments. The first part was composed while the author was working in Japan at the Computer Music Laboratory of Keio University. It was later finished at CCRMA.
Espresso Machine II is the second incarnation of the first piece to use PadMaster, a new improvisation/performance environment built around the Mathews/Boie Radio Drum and written by the composer on a workstation running the NextStep operating system, together with a live electronic cello player (Chris Chafe playing his electronic Celletto). PadMaster is written in Objective-C and uses the MusicKit classes as the basic foundation for MIDI control and sequence playback. The Radio Drum interfaces with the NeXT through a custom MIDI protocol and is used to trigger and control isolated events and event sequences in real time. PadMaster splits the drum surface into programmable virtual pads that can be grouped in sets or "scenes", which in turn represent different behavioral patterns for the different sections of the piece.
Espresso Machine is an evolving dialog between the acoustic/electronic sounds of the Celletto and the contrasting timbres played by the composer on two TG77 synthesizer modules through the PadMaster program controlled by the Radio Drum. PadMaster essentially provides several palettes of pre-built elements that are combined and controlled in real time to generate an electronic soundscape for the Celletto performance.
"Knock Knock... anybody there?" is an extension to four channels of the original stereo sound track that was composed for a collaborative project with visual artists in 1994. Willie Scholten and Ruth Eckland provided the sculptures and visual framework, while this piece served as the sound environment for the installation. The music explores altered states of consciousness, and in particular insanity, in a journey through a three-dimensional soundscape where voices and sounds evoke multiple and conflicting states of mind. All the concrete sound materials used in the piece were gathered during a small meeting with friends where the central topic that motivated the project was freely discussed. From the digital recording, small but significant fragments of the conversation were extracted and subsequently processed in the digital domain using CLM instruments (CLM, Common Lisp Music, a non-real-time Lisp-based software synthesis and processing environment). The processing included dynamic spatialization of multiple moving sources rendered for a four-channel reproduction environment. The listener moves through the soundscape while voices and sounds tell several overlapping stories that might occur in the hazy border between sanity and insanity. The piece even includes materials from the piano jam session that happened at the end of the meeting...
"in a room - with room to grow - the fabric of space is floating veils, curtains and webs... alabaster light and tides of time play with them. Grandma is sitting in a rocking chair... she looks at me, smiles, and keeps knitting an infinite tapestry of gifts"
This is a solo piece for PadMaster (a real-time improvisation software package written by the composer), Radio Drum and MIDI synthesizers. PadMaster uses the Mathews/Boie Radio Drum as a three-dimensional MIDI controller. The function of the batons and the behavior of the surface of the drum are controlled through PadMaster and create a set of sonic soundscapes through which the performer chooses a path.
For large symphonic orchestra.
For Perry Cook's collection of shells.
For solo percussionist with vibraphone and marimba.
Strata 2 is a study in obscuring and defining harmonic motion, obstructing and establishing rhythmic pulse, animating surface detail, and signal processing with modulation techniques.
The piece is divided into four sections, with an additional introduction and two brief interludes. Each section is further divided into seven subsections, each of which is based on one of three harmonies (eight- and nine-pitch groups) that extend through the range of the flute. The four sections move from obscured to defined harmonic motion, through the use of greater or fewer auxiliary pitches, which revolve around the primary pitches of the harmonies.
These sections also move from obstructed to established rhythmic pulse, through the use of greater or fewer rhythmic interruptions and grace notes, and expansion and contraction of sustained notes. The sustained notes are animated with trills and vibratos of three different speeds, flutter tongues, and sung pitches, which create interference with the timbre of the flute. The timbre of the flute is further processed with computer programming, using amplitude- and ring-modulation, and spatialized around four speakers.
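The amplitude- and ring-modulation applied to the flute can be illustrated generically. This is a sketch, not the piece's actual processing; the sample rate and depth values are assumptions for the example.

```python
import numpy as np

SR = 44100  # sample rate (an assumed value)

def ring_modulate(signal, mod_freq):
    """Ring modulation: multiply the signal by a sinusoid, producing
    sum and difference sidebands while suppressing the original carrier."""
    t = np.arange(len(signal)) / SR
    return signal * np.sin(2 * np.pi * mod_freq * t)

def amplitude_modulate(signal, mod_freq, depth=0.5):
    """Amplitude modulation: the modulator is offset to stay positive,
    so the original signal remains audible alongside the sidebands."""
    t = np.arange(len(signal)) / SR
    return signal * (1.0 + depth * np.sin(2 * np.pi * mod_freq * t)) / (1.0 + depth)
```

Applied to a flute tone at, say, a 100 Hz modulator, ring modulation replaces each partial with a pair of sidebands, while amplitude modulation adds the sidebands around the partials that remain.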
Interpose is a study in the interposition of gestural content on the local and structural level. Materials are introduced on their own and then incorporated into the overall texture, or taken from the texture and elaborated upon within their own sections.
The pitch content is taken from rotations and transpositions of a row built from trichords and tetrachords, which themselves are the basis for the harmonic motion of the piece. The row also serves as a skeletal pitch structure for the piece, providing the pitch levels for each section.
The tape part serves as a timbral extension of the guitar part, as if the resonance of the guitar is being transformed. The timbres of the tape part were created with instruments written in Common Lisp Music which use a hybrid approach to additive synthesis.
Building on the long tradition of additive synthesis, various conventional synthesis techniques are used to resynthesize the individual partials of an analyzed sound, which are added to produce the resynthesized sound. The frequencies and amplitudes of the individual partials of an analyzed sound are converted to percentages of the fundamental frequency. Then the frequencies and amplitudes of various types of unit generators are set to these values and added to create a spectrum related to the original sound source, but exhibiting the distinct characteristics of the chosen synthesis technique. In addition to sine wave resynthesis, frequency modulation, formant filtered pulse train subtractive synthesis, and Karplus-Strong plucked string physical modeling instruments are used to generate each partial of the resynthesized sound, producing a pure steady-state, spread, scattered, and plucked timbre, respectively. Furthermore, the frequency, amplitude, panning, distance, and reverb of each synthesis instrument are controlled by two envelopes: one which dictates the global behavior for each musical parameter and another which determines the scaling of the global behavior over the range of partials, providing global control over the musical behavior for each partial of the resynthesized sound.
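The sine-wave branch of this hybrid resynthesis scheme might be sketched as follows. This is an illustrative reconstruction, not the composer's CLM instruments; the data format for the partials is invented for the example.

```python
import numpy as np

SR = 44100  # sample rate (an assumed value)

def resynthesize_sines(partials, fundamental, dur):
    """Resynthesize a sound from analyzed partial data using sine waves,
    one branch of the hybrid scheme described above.  `partials` is a list
    of (freq_ratio, amp_ratio) pairs: each partial's frequency and amplitude
    expressed relative to the fundamental, as the note describes."""
    t = np.arange(int(dur * SR)) / SR
    out = np.zeros_like(t)
    for freq_ratio, amp_ratio in partials:
        # one oscillator per analyzed partial, summed additively
        out += amp_ratio * np.sin(2 * np.pi * fundamental * freq_ratio * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Swapping the sine oscillator in the loop for an FM pair, a formant-filtered pulse train, or a Karplus-Strong string would yield the spread, scattered, and plucked variants the note mentions, with per-partial envelopes controlling frequency, amplitude, and spatial placement.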
Regulate Six is a study in granular synthesis. Samples were taken from recordings of male and female voices singing a line from a children's book, and were reassembled using Bill Schottstaedt's Common Lisp Music to create a new waveform whose spectrum is based on the selected vowel or consonant content of each word. Within the computer-generated sound files, pitches are grouped according to timbral types and sweep across or converge at points along the stereo field. The MIDI violin triggers an array of samples, which are similar in timbre to the background material, performing real-time granulation on the samples through the use of trills and tremolos. The violin's MIDI pitch is often harmonized through MAX programming, which is controlled by a foot pedal. The pedal also triggers the start of each sound file.
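Granular reassembly of a sampled voice, as described above, can be shown schematically. This is a generic sketch, not the actual CLM code; grain duration and density values are assumptions.

```python
import numpy as np

SR = 44100  # sample rate (an assumed value)

def granulate(source, grain_dur=0.05, density=200, dur=1.0, seed=0):
    """Reassemble a source sample into a new texture from short,
    Hann-windowed grains scattered at random source offsets."""
    rng = np.random.default_rng(seed)
    grain_len = int(grain_dur * SR)
    window = np.hanning(grain_len)          # smooth each grain's edges
    out = np.zeros(int(dur * SR) + grain_len)
    for _ in range(int(density * dur)):
        src = rng.integers(0, len(source) - grain_len)  # where to read
        dst = rng.integers(0, len(out) - grain_len)     # where to write
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Biasing the choice of source offsets toward particular vowels or consonants would shape the resulting spectrum around the selected word content, in the spirit of the piece.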
Even on crowded roads, every person is their own little oasis in a sea of traffic. Often, while stuck in traffic, one cannot physically escape the present circumstances. The mind, however, is free to roam and imagine. The three movements of Traffic Islander mirror traffic patterns observed in everyday life.
What does it sound like when kittens dance across a piano? When they waltz up and down the keys? This composition attempts to emulate the pitter-patter of little paws dancing across the piano. The arrangement of the nine toy pianos allows the kittens to spatially dance around the room. Kitty Waltz was written especially for the Klavier Nonette installation at the New Music Gallery, Seattle, WA.
This text-sound music piece is based on a text about the return of a ``new legitimate" Napster as a subscription music service.
For alto saxophone, cello, percussion and tape. While traveling - and wanting to remember the experience - photographs are usually taken of people and places. Since the composer is not the most diligent about getting photos developed, several trips usually get mixed together. Snapshots on a Circle is an aural collage of the moods and interactions of the people and places that appear in the photographs.
The title, Snapshots on a Circle, has a double meaning. The first is more literal in the sense that several of the photographs were taken during an extended lunch at a cafe on a plaza. The second is more universal in that most travels, no matter how long or how far, eventually wind their way back to their point of origin.
The tape portion of this piece was realized by sampling everyday environmental sounds and then processing them in CLM and SoundDesigner II on an Apple PowerPC. They were then compiled on the Dyaxis II using MultiMix 2.3.
For tape. What is a question? How are questions formulated? As the mind wrestles to grasp the concept of a subject, inevitably, questions begin to form. But not all questions are created equal. Some may be ill-conceived and make no sense, resulting in more confusion. Some well-thought-out questions, once asked, can be enlightening but raise yet further questions. Some are really not questions at all, but simply a reiteration of the subject in the inquirer's own words in an attempt to understand. Sometimes frustration ensues, and the inquiry must be reapproached. From thought to vocalization, this piece explores the musical texture of a question.
This piece was realized through the use of granular synthesis, spectral reshaping and the resampling of vocal samples and computerized instruments. All signal processing was done on an Apple PowerPC.
For computer-generated stereo tape.
Juan Carlos Pampin
The pleasure of space: This cannot be put into words, it is unspoken. Approximately: it is a form of experience - the "presence of absence"; exhilarating differences between the plane and the cavern, between the street and your living-room; the symmetries and dissymmetries emphasizing the spatial properties of my body: right and left, up and down. Taken to its extreme, the pleasure of space leans toward the poetics of the unconscious, to the edge of madness.
Bernard Tschumi, Architecture and Disjunction
On Space reflects on the texts and ideas of a painter, a writer and an architect that shaped Art over the last century.
In his transcendental book "On the Spiritual in Art" (1910), Wassily Kandinsky wrote:
Kandinsky's ideas, especially those of space and expression, made their way into the piece, embodied as sound trajectories in space that behave as points and lines to plane.
Related to the form of the piece is a text by Borges: La muerte y la brújula (1942). Along the pages of this fascinating story, a detective (Erik Lönnrot, ``an Auguste Dupin'' of detectives) finds his own destiny within an infinite labyrinth that is his own city. A series of mysterious deaths equidistant in time and space are the clues that help him find his own death at Triste-le-Roy (south of a mythical Buenos Aires). The music of On Space is deployed in different spaces that are all perspectives of the same urban landscape from the four cardinal points (North, West, East, South). As in the text, the same things are replicated ad infinitum, and the idea that we only need three points to find a fourth becomes obsessive.
Years before designing the folies for La Villette in Paris, Bernard Tschumi wrote in his essay Questions of Space (1974):
In On Space, percussion and electronics are combined to sculpt sound in space, somehow trying to answer these questions in terms of sound poetry. The program for the piece was developed as a dynamic urban design where each section is constructed to show a virtual perspective from different vanishing points.
On Space closes a cycle of pieces that explores the materiality of percussion sound: metal (Metal Hurlant, 1996), wood (Toco Madera, 1997), and skins (Skin Heads, 1998). On Space uses the sound materials created in all these works to shape space as a continuous matter, capable of inflexions and changes.
This piece was commissioned by ``Les Percussions de Strasbourg'' and GRAME for the opening of the ``Musiques en Scène'' festival 2000 in Lyon, France.
Skin Heads is for percussion trio and computer-generated sounds. Skin heads are flat, usually covering an empty space, just a volume of air. Any resemblance to those you might cross in the streets of Berlin is mere coincidence. Skin heads resonate, becoming the living body of other instruments, altering their sound or even magnifying their presence. Skin Heads, for skin-percussion trio and electronics, is based on these resonances (skin percussion instruments), explored and transformed by both electronic and acoustic means. Skin Heads is the third piece of a cycle written for each family of percussion instruments and electronics. The first two works of the cycle are Metal Hurlant (1996), for metallic percussion (solo), and Toco Madera (1997), for wooden percussion (two players), both premiered at Stanford. The cycle will be completed with a percussion quartet combining the full instrumental palette.
Technical note: The spectral analysis and transformations of the sampled percussion instruments were done using ATS, spectral modeling software programmed by me in Lisp. All the digital sound processing and synthesis for the piece was performed with Common Lisp Music, developed at CCRMA by Bill Schottstaedt.
North of San Francisco, near Point Arena, the sea transforms the beach into a beautiful, constantly evolving mile-long sculpture. There, hundreds of wooden logs are washed onto the coast by the Pacific Ocean. I discovered this sculpture (or is it an installation?) while beginning work on Toco Madera. The dense textures created by driftwood of all sizes inspired the form and process of the piece. I realized that my compositional work had to be similar to the role of the sea, which not only placed the objects in textural combinations, but transformed their surfaces and matter to create new complex morphologies.
I sculpted new sounds with the computer from a set of nine wooden percussion instruments recorded in the studio. I wanted to keep the rustic quality of wood sounds, to operate on them respecting their soul. This task was achieved using spectral analysis of the instrumental sounds to extrapolate their salient acoustic qualities, and digital filters to carve their matter. Throughout the piece, these transfigured wood sounds interact with the original instrumental set, performed by two percussion players, to create a multilayered musical space that reflects the textural traits of the natural wooden sculpture.
Toco Madera is the second of a cycle of percussion works exploring what philosopher Valentin Ferdinan calls ``materiality'' of sound. For this work (as for Metal Hurlant, the first piece of this cycle) a qualitative logic that guided the compositional process was inferred from the acoustic structure of the material used. In Toco Madera music becomes the expression of wood.
The analysis and spectral transformations of the instruments were done using ATS. All the digital signal processing for the piece was performed with Bill Schottstaedt's Common Lisp Music.
``They know that a system
is nothing more than
the subordination
of all aspects of the universe
to any one such aspect.''
- from Tlön, Uqbar, Orbis Tertius by Jorge Luis Borges
Sound. Sound as a metaphor of life, as a living entity that gets transformed with us, inside us, in our memory. Interstices is a journey inside sound, an aesthetic exploration of its components. In Interstices the string quartet meets electronic music in a poetic landscape; instruments become sometimes filters, sometimes synthesizers, not just imitating these electronic devices superficially, but abstracting their functionality: giving form to sound, operating on matter. Musical morphologies in Interstices may be seen as reflections of the interior of a complex sound object, expressed in different time spans: short transients are stretched into long unstable sequences, and instruments modulate stable portions of the sound, illuminating regions of its spectrum. These sound-paths take the form of processes; they evolve in different layers and invite many ways of listening. Take the path you prefer, and enjoy your trip.
Metal Hurlant has been composed for a percussion player (playing metallic instruments) and computer generated sounds. The hybridity of the piece serves a qualitative logic. Atonal music during the '20s and serialism later stressed what Adorno referred to as the inner logic of procedures. In contrast, this work follows the logic of the sound materials, not the logic of the procedures, to shape acoustic matter. The acoustic material comes from a studio recording of metallic percussion instruments. Spectral analysis of these sounds provides the raw matter for the composition. This data is a digital representation of the qualitative traits of metallic percussion. It defines the range of acoustic properties available for manipulation and determines the further behavior of qualitative traits in the overall composition. In this way, qualitative parameters supply compositional parameters.
Spectral analysis was used to explore what can be called the sound "metalness" of the selected instruments. Since the range of compositional operations is provided by the isolated sound metalness, to a certain extent the qualitative structure of the material takes command over the compositional process. Moreover, the metalness ruling the computer-generated sounds furnishes the morphological boundaries of the instrumental part. Metal Hurlant is an expression of metalness sculpted on percussion and electronic sounds.
The electronic sounds for this piece were generated with Bill Schottstaedt's CLM using my ATS library for spectral analysis, transformation and resynthesis (see research activities).
For computer-controlled Disklavier. The textures and rhythms for this piece were generated with Common Music, using a granular/additive synthesis algorithm. The spectrum (harmony) and formants (dynamics) for the piece were derived from an analyzed sound, using the composer's own ATS (Analysis/Transformation/Synthesis) software. A vocal tone, transformed by means of transposition and frequency shifting, was used as a formal metaphor for the whole piece. Premiered at Stanford University, March 1996.
The piece was composed at CCRMA, Stanford University, during the summer of 1994. The generation, transformation, and mixing of the sounds for the composition were done on a NeXT computer using Bill Schottstaedt's CLM. Its structure presents a continuous evolution of a group of materials. Sound objects undergo different kinds of mutations in the short and long term, creating through their interaction textures with distinct morphologies.
Performed at the following locations: Semana de la Musica Electroacustica, October 1994, Buenos Aires, Argentina; Stanford University, November 1994; Vibrations Composees, April 1995, Lyon, France; International Computer Music Festival, May 1995, San Diego; Synthese: 25e Festival de Musique Electroacoustique, June 1995, Bourges, France; Punto de Encuentro IV Festival Internacional de Musica Electroacustica, December 1995, Madrid, Spain; Universidad Catolica Argentina, December 1995, Buenos Aires, Argentina. The composition received an award at the 22e Concours International de Musique Electroacoustique, Bourges, France, 1995.
For computer-generated stereo tape. As the word 'collage' suggests, similarly to work in the visual arts made by putting together various 'patches of color', this short piece is based on creating and overlapping many 'patches of sound'. This work is the result of explorations in several different computer environments, including Csound, Stella, Music Kit, and CLM, where the basic materials, the timbres ("instruments"), are realized through additive and FM synthesis. The piece was composed not according to a pre-established plan, but by proceeding in little sections, fragment by fragment, leaving every possibility open, with the constant intention of keeping the internal movement and energy alive.
A scattering of names like Achilles, Briseis, or Chryseis can only come from the old world. The probability of selecting such a name while in Ibero-America might well be very low. This is like choosing characters for a play or a novel, a login name for haut mail, a password, or perhaps a new weather phenomenon in the Caribbean. Nevertheless, this name sounds like two syllables barely pitched if whispered but very flexible if shouted. Achilles, Briseis, or Chryseis are expressive whether sung in bossa nova or at la cosa nostra.
This is yet another composition for systems which mimic the vibrational properties of a musical sound. In this case Scanned Synthesis, developed by Bill Verplank and Max Mathews at CCRMA during the last years of the past century, was used as the underlying material. The timbre was obtained by scanning and manipulating several types of springs, which yield different and time-varying spectra. Control is achieved by mathematically modeling the haptics of the spring.
Scanned Synthesis is based on the psychoacoustics of how we hear and appreciate timbres and on our motor (haptic) abilities to manipulate timbres during performance. It involves a slow dynamic system whose frequencies of vibration are below 15 Hz. The system is directly manipulated by the motions of the performer. The vibrations of the system are a function of the initial conditions, the forces applied by the performer, and the dynamics of the system.
This piece was composed using the Common Lisp Music and Common Music environments on Linux at CCRMA.
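As a rough illustration of the technique described above (a sketch only, not Verplank and Mathews's implementation; all names and parameter values here are illustrative), a damped mass-spring string can be integrated at a slow, sub-audio rate while its instantaneous shape is periodically scanned out and read as a wavetable:

```python
def simulate_scanned_string(n_masses=64, steps=400, k=0.1, damping=0.999):
    """Sketch of scanned synthesis: a slow, damped mass-spring string.

    The string vibrates well below audio rate; its instantaneous shape
    is periodically 'scanned' and later read out as a wavetable.
    """
    # Initial condition: pluck the string into a triangular shape.
    pos = [1.0 - abs(2.0 * i / (n_masses - 1) - 1.0) for i in range(n_masses)]
    vel = [0.0] * n_masses
    wavetables = []
    for step in range(steps):
        # Spring force from each mass's neighbors (ends held fixed),
        # integrated with damped symplectic Euler steps.
        for i in range(1, n_masses - 1):
            force = k * (pos[i - 1] - 2.0 * pos[i] + pos[i + 1])
            vel[i] = damping * (vel[i] + force)
        for i in range(1, n_masses - 1):
            pos[i] += vel[i]
        if step % 50 == 0:          # scan the current shape
            wavetables.append(list(pos))
    return wavetables

def scan_to_audio(wavetable, freq=220.0, sr=44100, n=1000):
    """Read one scanned shape as a wavetable oscillator (linear interpolation)."""
    out, phase, length = [], 0.0, len(wavetable)
    for _ in range(n):
        idx = int(phase) % length
        frac = phase - int(phase)
        nxt = wavetable[(idx + 1) % length]
        out.append(wavetable[idx] + frac * (nxt - wavetable[idx]))
        phase += freq * length / sr
    return out
```

In an actual instrument the performer's gestures would continuously perturb the string state while audio-rate scanning proceeds in parallel; here the two stages are separated for clarity.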
Wadi Musa or the Valley of Moses was the city of the Nabateans some centuries ago. A "rose-red city half as old as time," where sand has witnessed unconscious listeners of the whisper of desert creatures, wind, and water. Well deep there, two thousand and one steps below, there is the Monteria Hat, a curious object indeed.
This is a composition for quenas (Andean flutes), cello, and physical models of maracas and clarinet. The maracas belong to a special breed of models developed by Perry Cook called PhISM (Physically Informed Sonic Modeling). The polyrhythms are the result of somewhat chaotic combinations of the shake rate, seed quantity, and shell size, thus inspired by the Monteria Hat.
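Cook's PhISM models treat the collisions of many small objects (the seeds inside the shell) as random events that excite a resonance. A much-simplified sketch in that spirit (not Cook's code; the parameters `shake_rate` and `n_beads` are illustrative stand-ins for the shake rate, seed quantity, and shell size mentioned above):

```python
import math, random

def shaker(sr=22050, dur=0.5, shake_rate=8.0, n_beads=64,
           shell_freq=3200.0, seed=1):
    """Much-simplified PhISM-style stochastic shaker (a sketch, not Cook's code).

    Random bead collisions feed a decaying energy envelope whose noise
    output is shaped by a two-pole resonator standing in for the shell.
    """
    rng = random.Random(seed)
    n = int(sr * dur)
    # Per-sample collision probability grows with bead count and shake rate.
    p_hit = min(1.0, n_beads * shake_rate / sr)
    env = 0.0                       # collision energy envelope
    y1 = y2 = 0.0                   # resonator state
    r = 0.96                        # pole radius: shell ring time
    c1 = 2.0 * r * math.cos(2.0 * math.pi * shell_freq / sr)
    c2 = -r * r
    out = []
    for _ in range(n):
        if rng.random() < p_hit:
            env += 1.0              # a bead collision injects energy
        env *= 0.995                # energy decays between collisions
        y = env * (rng.random() * 2.0 - 1.0) + c1 * y1 + c2 * y2
        y2, y1 = y1, y
        out.append(y)
    peak = max(abs(v) for v in out) or 1.0
    return [v / peak for v in out]  # normalize to +/-1
```

Varying `shake_rate` and `n_beads` against each other produces the loosely chaotic polyrhythmic feel the note describes.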
Oranged (lima-limon) are colors with bright-spectrum outlines of geometric segments and shapes over gray-scale pictures. In this music they synthesize several combinations contrasting over a variety of shades of noise...
Oranged (lima-limon) is also a fragrance and personality: An orange jacket on a bright day and a yellowish lime jacket at night. The spirit of freshness and love for dogs.
This piece was composed using Frequency Modulation and only Common Lisp Music on Linux at CCRMA.
ppP in its concert version (there is a museum version) is an algorithmic composition for traditional acoustic piano and a model of the piano. This piece uses a computer model of a piano in an unusual tuning as contrast and complement to the real instrument on stage. The software piano has indefinitely vibrating strings, non-standard temperaments, and different string lengths and densities for the same pitch. In this piece the physical model has been tuned to the Bohlen-Pierce scale. Additionally, the context surrounding the string can change: it need not be struck by a hammer or resonated by a soundboard. ppP stands for perfectly pitched piano or perfectly perceived piano, but might also mean pianissimo, though rather not in regard to dynamics. This piece was composed using Scott Van Duyne's physical model of the piano, developed at CCRMA, in Common Lisp Music.
Los Vientos de Los Santos Apostoles is a composition for fixed-length models of organ pipes tuned in the Bohlen-Pierce scale as described by Mathews. The pipe models are based upon flute models by Vesa Valimaki and Matti Karjalainen, also algorithmically described by Perry Cook in the Synthesis ToolKit. The setting for the performance of this piece includes an enclosed space or room; red, blue, green, and yellow lighting sources coming from the roof and the sides; four or more CD players with track shuffle and automatic repeat; a set of three or more pairs of loudspeakers placed strategically to create an illusion of space; and a mixing console and amplifiers. The CD players can be replaced by a PD (Pure Data) patch emulating sound playback and enhancing interaction through the use of sensors. This is a description of the composition, roughly perceived as a sound installation in which a listener or group of listeners interacts with or reacts to the controls, sensors, interface, and the sound perceived.
The goal of this composition is to create a perceived composition, or the illusion of a composition tailored or customized to the listener's taste, within constraints developed by the composer. The constraints follow these mainly technical guidelines: about 45 sound files can be performed by four or more virtual players (CD players or PD players). They can be performed sequentially or superimposed. One player can perform all sound files sequentially in one of several predetermined orderings: least to max, palindromic, or random. The duration in this case can exceed 75 minutes and is therefore not much desired by most visitors. Superimposed sound files can benefit from harmonies given by the nine degrees of a diatonic scale built upon the Bohlen-Pierce scale. In this sense counterpoint can be perceived, while pitch abstraction and line contrast create generative melodic grammars in the mind of the listener. This creates the illusion of composing melodies while mixing different sound files, their textures, and their intensities. Only one given visitor is in control of the interaction in the installation, although a group or ensemble of visitors might also interact. The actions for interacting are: sound-file choice, sound-file start or stop, loudspeaker choice, panning, and mixing.
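For reference, the Bohlen-Pierce chromatic scale divides the 3:1 "tritave" into 13 equal steps, from which a nine-degree diatonic subset can be drawn. A small sketch (the particular nine degrees shown, the so-called Lambda mode, are an assumption; the pieces above may use a different subset):

```python
def bp_freq(base, step):
    """Frequency of the Bohlen-Pierce chromatic step `step` above `base`.

    The BP chromatic scale divides the 3:1 'tritave' into 13 equal
    steps, so each step has the ratio 3**(1/13), about 1.0882.
    """
    return base * 3.0 ** (step / 13.0)

# One common nine-degree diatonic subset of the 13 chromatic steps
# (the 'Lambda' mode; the degree choice here is an assumption).
LAMBDA_STEPS = [0, 2, 3, 4, 6, 7, 9, 10, 12]

def bp_diatonic_scale(base=220.0):
    """The nine diatonic BP degrees plus the closing tritave."""
    return [round(bp_freq(base, s), 2) for s in LAMBDA_STEPS + [13]]
```

Note that step 13 lands exactly on three times the base frequency, the tritave that replaces the octave in this tuning.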
Both computer-generated compositions were premiered at Stanford University in 1995 and presented later the same year at Centro Cultural Recoleta, Buenos Aires. VoxII was also performed at the International Days of Electroacoustic Music, Cordoba, Argentina, in December 1995, and received the Juan Carlos Paz electroacoustic music prize (1995), granted by the Fondo Nacional de las Artes, Argentina.
The basic aim while composing these pieces was continuing the exploration of sound materials of ethnic and art rock music. As copyright laws are not concerned with large musical structures (forms) or very small ones (sounds), Jorge Sad worked in the border zone in which a sound or small group of sounds is still recognizable as belonging to a particular style, composer, or player but is integrated into a totally different musical context.
Craig Stuart Sapp
An algorithmic-composition inspired piece for percussion quartet. The title of the composition derives from responsoria prolixa (great responsories), which are a prominent feature of Matins in the Office. The performers play a large role in shaping the form of the piece. There is always one performer at any given time who leads the other three through the composition, which consists of 20 phrases of between three and eight beats. The leader informs the other performers of which phrase will be played next by sounding a unique two-beat rhythmic pattern. Two beats before the end of the phrase, the leader again chooses a new phrase, etc. A leader can pass off their leadership role to another performer by playing a specific pattern.
There are five levels of phrases in the composition. The first phrase level contains 3 beats, and then each additional phrase level adds another beat. Each time a leader gains control of the composition, he/she may select phrases from the next higher level; for example, if a performer has just become leader for the second time, that performer may choose to play any of the phrases in levels 1 and 2. Once the original leader has become leader for the sixth time, that performer may choose to end the composition with one of three cadences.
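The leader/level mechanism can be rendered as a toy simulation, following the stated rule that level 1 has three beats and each higher level adds one. This is a sketch only: the one-phrase-per-leadership simplification, the random selection policy, and all names are hypothetical, and the actual piece distributes leadership among four players.

```python
import random

N_LEVELS = 5
BEATS_PER_LEVEL = [3 + lvl for lvl in range(N_LEVELS)]   # levels 1..5 -> 3..7 beats

def simulate_leader(seed=42):
    """Toy simulation of the leader/phrase-level mechanism described above.

    Each time the tracked performer becomes leader, one more phrase level
    is unlocked (capped at five); on the sixth leadership the piece may
    close with one of three cadences. The selection policy is hypothetical.
    """
    rng = random.Random(seed)
    history = []
    for times_led in range(1, 7):
        if times_led == 6:
            history.append("cadence-%d" % rng.randrange(1, 4))
            break
        unlocked = min(times_led, N_LEVELS)
        level = rng.randrange(1, unlocked + 1)  # pick from levels 1..unlocked
        history.append("level-%d (%d beats)" % (level, BEATS_PER_LEVEL[level - 1]))
    return history
```

The first pass is always a level-1 phrase, since only that level is unlocked on the first leadership.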
This work is an exploration of subtle overblowing effects using two virtual ``blown string'' physical models and a live saxophonist. The physical model algorithms were created and controlled using the Synthesis ToolKit in C++, a software environment by Perry Cook and Gary Scavone.
The saxophonist in Air Study I fingers a low B-flat for the entire piece (clamps may be desirable). The pitch/timbre variations are completely controlled via embouchure and oral cavity manipulations. The tape part for Air Study I was generated with two independent physical models, each assigned to a single stereo track. Phasing effects were created by slight variations in vibrato rates and pitches. Overblowing effects were controlled via reed and breath pressure parameters.
An alto saxophone body without holes was used for the premiere of Air Study I on 25 July 2002. Gary expresses his sincerest gratitude to The Selmer Company and Tom Burzycki for their donation of this instrument.
Bernd Hannes Sollfelner
A computer-generated tape piece, premiered at Stanford University, April 16, 1995, performed in Vienna, Austria, 1995-1996.
Premiered May 24, 1999 in Stanford's Dinkelspiel Auditorium, this piece demonstrates the use of my sonification research as applied to algorithmic composition.
This seven-minute composition was generated from a one-minute digital recording of improvised piano. Using Bill Schottstaedt's Snd program in the Linux environment, I stretched and pitch-shifted the sample many times using granular synthesis until I arrived at a sound much more timbrally rich than the original solo piano.
Sound and video installation in collaboration with Ann-Sofi Sidén. The Modern Museum, Stockholm, Sweden (2004).
Sound and sculpture installation in collaboration with Yaeko Osono. Burgkolster in Lübeck, Germany (Jan. 26 - Mar. 2, 2003).
The focus of this collaborative work is to portray closely the relationship between sound and vision (their movement, shape, and color) in a three-dimensional spatial environment, while giving both equal importance. It is important that each of the two mediums serves not only to enhance the materials of the other, but also to lead, dominate, and at times even to express ideas completely contrasting with and independent of the other. This work consists of three contrasting movements that are played simultaneously without any interruption. These three movements are each based on the same main sound and visual materials, which recur and transform throughout the piece.
This work was realized at Technical University of Berlin Electronic Studio and was commissioned by DAAD and TU-Berlin Electronic Studio.
I imagine sounds that are visible, that constantly transform into different forms, sizes, and colors as they travel through the air at different speeds. The timbre of the tape material is based on my personal characterization of the flute: metallic, sensitive, fragile, light, and warm, but cool at the same time. The sounds of the tape are at times as small and gentle as grains of sand, and at times as unyielding as a mass of metal. The transformations of both tape and flute take place as they react to the sound of one another. The tape material reacts especially to the air pressure of the flute, as if the air that emerges from the instrument physically blows away and breaks the tape sounds into small particles. At the same time, the traveling force of the tape material in the air causes the flute to react back.
Influenced by Japanese traditional music writing, this work also focuses on the relationship between two elements of sound, noise and pitch, using the sound of the flute as the basis of the piece. The transformation of a single key-click into long sustained notes can be heard throughout the piece.
This work was realized at Technical University of Berlin Electronic Studio and was commissioned by Sender Freies Berlin Radio. This work received a Musica Nova Honor Prize 2002, and was selected as a Finalist in the Russolo Electroacoustic Music Competition 2002.
Sift was commissioned by MATA and is dedicated to Carla Kihlstedt. This piece conveys the relationship between two elements of sound: noise and pitch. These two elements are emphasized as separate voices by assigning each to an instrument: noise to tape and pitch to violin. Throughout the piece, the exchange of these elements, and the transformation from one element to another can be heard. All computer-generated sounds are derived from the sounds of the violin used in this work. Similarly, the violin often imitates the sound of the computer-generated material on the tape. The violin sounds were manipulated and recorded for the music of the tape using sound editing programs such as CLM, Snd, and Pro Tools.
Yoei is a Japanese word, which describes a sound that rings and flutters in the air, resonating in one's ear long after it has been heard. This piece exploits many different acoustic movements to create this effect, with six percussionists and the electronic sound, surrounding the audience in order to complete the spatial environment. The primary goal of this piece, however, is not merely to create sounds, but to combine the world of the visual with that of sound. I have stretched the role of the dancer from merely visual, to both acoustic and visual - creating a live instrumental performer - a dancer who triggers and controls the electronic sounds in real-time using the five electric sensors that are attached to his/her body. All the computer generated sound sources derive from the sounds of the percussion instruments used in this piece, and similarly, the percussion often imitates the sound of the computer-generated sounds of the CD and the dancer. The percussion sounds were manipulated and recorded for the music of the CD and the dance using sound editing programs such as Sound Editor and CLM.
A collaborative work for computer-generated tape, dance, and pre-programmed robotic lights. Performed at the Little Theater, Stanford, California, and the Roble Dance Studio, Stanford, California.
retour, French for 'return,' was my first electro-acoustic work after a three-year hiatus from tape music composition. It was also my first multi-channel work, and was composed primarily using Csound. The sections and sounds are based on short (under one second) samples of acoustic recordings. Sampled sounds include small Nepalese bells, my wife laughing, and a ringing kitchen timer. retour attempts to explore the line between static rhythmic events, chaotic events, and points somewhere in between. Common Lisp Music (CLM) was used to create the final eight-channel mix.
Composed using Csound, Snd, and Pro Tools. As titled, it was a study in which I experimented with various Csound opcodes and methodologies that I had not used before. The result is a work in which acoustic sounds are transformed into insect-like timbres, creating a rainforest soundscape. As in retour, there is an emphasis on the contrast between rhythmic and arrhythmic sounds, and points in between. I chose to do this work in two channels, and arranged it using Pro Tools, as I wanted a more ``immediate'' composition experience than the one I had creating retour.
Nine compositions for prepared piano, drum, real-time sound processing, and computer-generated sounds on tape. Performed by Marco Trevisani, Lukas Ligeti, Nick Porcaro, and Michael Edwards.
For prepared piano, prepared guitar, and computer-generated sounds on tape. Performed by Marco Trevisani, Davide Rocchesso, and Michael Edwards.
A computer theatre/music production for actors, singer, live electronics, and computer-triggered sounds/mixing with ArtiMix. Based on a play by Pirandello.
A computer generated tape composition, inspired by Bruno Maderna's Aura.
Outside the Box premiered at the ``Made in Canada Festival'' in Toronto, Canada, performed by New Music Concerts under Robert Aitken. This work for flute, clarinet, piano, percussion, violin, and cello was commissioned by the Fromm Foundation at Harvard, and was broadcast live on the CBC radio show ``Two New Hours''.
Borderline for cello and tape, premiered April 15th, 1998 at the East Cultural Center in Vancouver, Canada. Commissioned and performed by the Canadian cellist, Shauna Rolston, Borderline features a lyrical style in the cello contrasted by a diverse electronic tape part, constructed using various analog modelling synthesis programs.
Slipping Image for mixed quartet and tape was performed at the 1998 ICMA conference in Ann Arbor, Michigan. It was also chosen for the 1998 ICMC Compact Disc.
RealAudio recordings of all these works are available at http://ccrma.stanford.edu/~cello/.
In Principio Erat Verbum (In the Beginning Was the Word), for tape, is an introduction to a work in progress. Its individual parts are based on several statements from the New Testament. The first three sentences from the Evangelium of John were used as the initial text and source of inspiration for this introductory movement. The piece is a reflection upon the dialectical relation between the concrete (knowledge, experience) and abstract (intuition) meanings of spoken words and their origin, which is also joined with the sacral roots of human beings. The form of this piece reflects the circle model of the creation of the World that is hidden in the initial text of St. John's Evangelium. The composition evolved from material which was collected last year at CCRMA. The principal rhythmic structures, as well as some of the individual samples, were recorded using the Korg Wavedrum instrument and a grand piano. All this material was later processed through Bill Schottstaedt's CLM, Paul Lansky's RT, and Pro Tools.
This composition is based on the combination of samples of human breathing, as a basic sign of human life, with music sequences and live soprano sequences. There is a very simple timeline, where each ``physical'' gesture of the real world, a breath, is a source or an anticipation for the musical event. This dialog, which can literally be described as a contact of the real and imaginary worlds along two planes of the human mind, was the initial idea that started the work on this piece.
Voice was first realized in the Experimental Electroacoustic Studio of the Slovak Radio in Bratislava and performed in Bourges (1997) and Bratislava (CEM 1997). It is now ``under reconstruction'' at CCRMA.
|© Copyright 2005 CCRMA, Stanford University. All rights reserved.|