The following composition entries were part of past versions of the CCRMA Overview and are being maintained here for historical purposes. This list is not comprehensive.
Both are tape pieces using spectral modeling (SMS), sampling and granular synthesis in CLM and CM's Lisp environment. Piece of Mind was awarded "Premio Sao Paulo '95", Brazil; recorded on a CD released by the II SBCM and CRCA (UCSD) in 1995.
All blue ... was composed in 1996 for four-channel tape. The title was drawn from the writings of Walter Smetak (composer, instrument-builder, cellist and writer), to whose memory the piece is dedicated. The piece is about sound transformation as a metaphor for the transformation of consciousness. Metallic percussion sounds are ever-present, while original cello sounds are broken into their rawest components. The basic cuisine for the piece was set up from these spices, and the dish is to be served hot. The cello has its identity transformed: its defining harmonic series is turned inharmonic, sounding closer to the metallic percussion. The pitches from this now bent, inharmonic series are used as a framework for a melodic-timbral game (the ``blue pencil on a blue sky'') played by cello and percussion. The cello transformations were obtained with SMSplus, a CLM system built on top of Xavier Serra's Spectral Modeling Synthesis and developed by the composer. A procedure for modeling the physical properties of a room via feedback delay networks was employed (``Ball within a Box'', developed by Italian researcher Davide Rocchesso at CCRMA, with additional enhancements by the composer). All blue... won the 1997 ``Premio Sao Paulo'' at the 2nd International Electroacoustic Music Competition of Sao Paulo, Brazil, and is available on volume one of the Computer Music @ CCRMA CD.
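The harmonic-to-inharmonic bending described above can be pictured with a toy additive-synthesis sketch. This is an illustration only, not SMSplus itself: all parameters are invented, and the idea shown is simply that raising the exponent on the partial number stretches a harmonic series into an inharmonic, metallic one.

```python
import numpy as np

def stretched_tone(f0=220.0, n_partials=8, stretch=1.0, dur=1.0, sr=44100):
    """Additive tone with partials at f0 * k**stretch.
    stretch = 1.0 gives a pure harmonic series; stretch > 1.0 bends it
    inharmonic, pushing the timbre toward bells and metallic percussion."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        fk = f0 * k ** stretch                    # stretched partial frequency
        out += (1.0 / k) * np.sin(2 * np.pi * fk * t) * np.exp(-2.0 * k * t)
    return out / np.max(np.abs(out))              # normalize to full scale
```

With `stretch=1.05`, for example, the second partial lands sharp of the octave (220 * 2**1.05 ≈ 455 Hz rather than 440), and each higher partial drifts further from its harmonic position.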
Monologue for Two for flute and clarinet (1993, revised and scored in 1997) is an investigation into unusual 'everyday life' facts. According to it, there is one day when you can't recognize an intimate friend, or you may suddenly realize you've become intimate with a most hostile enemy. By the same token, a dialogue can turn into a monologue while still involving two players. The basic pitch materials in Monologue for Two were generated by computer programs in Daniel Oppenheim's Dmix environment for composition. The piece is dedicated to the memory of composer Ernst Widmer, who was quite aware of those everyday life 'compositional' facts. The piece is part of a cycle which also includes Dialogue in One, for piano. Monologue for Two received its first performance on Nov. 13th at the CCRMA 1997 Fall Concert in Campbell Recital Hall, Stanford University. The piece was performed by Karen Bergquist (flute) and Larry London (clarinet).
Born in Palo Alto, California, Celso Aguiar grew up in Brazil in the town of Salvador, Bahia, where he studied composition with the Swiss-Brazilian composer Ernst Widmer. He later became interested in electronic music and went on to develop a computer-controlled digital synthesizer in Brazil. He is currently a DMA candidate in Composition at the Center for Computer Research in Music and Acoustics, where he has been developing software tools for composition with spectral modeling, granular synthesis and sound spatialization. Celso Aguiar has written music for traditional instrumental as well as electronic media. His contact with composer Jonathan Harvey at Stanford awakened in him a clear awareness of the spectral domain in music. Along with the skill for applying new DSP techniques, his compositional metier has been evolving towards an interesting amalgam of natural sounds and their most pungent transformations. His compositions have been performed in the Americas, Europe and Asia. His awards include both the 1995 and 1997 ``Premio Sao Paulo'' at the 1st and 2nd International Electroacoustic Music Competitions of Sao Paulo, Brazil, and a 1998 ICMA Commission Award.
Ron Alford
Girltalk is about my infatuation with children; what they think, and how they perceive our modern world. Children provide a fascinatingly uninhibited view, quite outside my adult reference, so I am forced to see things in a new light. This is the first of a series of related compositions using material from the world of children. This music was created algorithmically, using CM and CLM in a Linux environment (though the sound-sculpting was done on a Macintosh) during the summer of 1997.
Ron Alford studied at the University of Illinois, the University of Colorado, Adams State College and Stanford University. He has studied with George Crumb, Larry Hart, Wayne Scott, Vladimir Ussachevsky, and Cecil Effinger. He taught music in the American Southwest, and has been an active musician all his life, performing in symphony, chamber, jazz, church, rock, and performance-art settings. He has written, arranged, conducted, and judged music events. He has operated recording studios and hosted opera and 20th-century music programs on commercial and NPR FM radio. He was the founder of the New Mexico Jazz Workshop. He has been the recipient of grants and National Endowment awards, and recently received an Arts Council award fellowship for Santa Clara County. His music has been heard in Canada, Austria, England, Denmark and Germany.
alt.music.out
A live interactive improvisatory piece by Nick Porcaro and David Rhoades, in which the performers improvise in harmolodic manner over several phrases of jazz-based material. Running on several NeXT computers, the SynthBuilder application is used as a real-time effects processor and Physical Modeling (PM) synthesizer. Both the performers and a sound engineer have control over the effects. alt.music.out are: Emily Bezar - Processed vocals; Roberto DeHaven - Processed drums, saxophone; Scott Levine - Effects processing; Nick Porcaro - Processed grand piano and PM piano; David Rhoades - Processed saxophone; Pat Scandalis - PM electric guitar, electric guitar; Tim Stilson - Effects mixing. Premiered at the CCRMA Annual Summer concert, July 1995.
This is a revised and expanded version of a piece I started writing in 1996. The first version was performed in Jerusalem and the current version is scheduled for concert at Stanford, Feb. 15 1999.
Oded Ben-Tal is currently working on a piece for violin, percussion and 8 wind instruments, as well as a short cello and tape piece.
Calyx is the most recent in a series of pieces exploring the possibilities of a set of filters connected in an infinite feedback loop. No initial stimulus to the filters is necessary - the internal noise of the system, amplified endlessly via the loop, is sufficient to produce a rich palette of sounds.
The sonic results of this process are entirely context-dependent; the musical possibilities at each moment are limited by the state of the loop at the previous moment. As the filter settings are changed in real time, the sounds produced by the loop can be shaped into musical forms. Calyx is a recording of such a real-time "performance," controlled by computer and refined over a period of months.
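A self-exciting loop of this kind can be sketched in a few lines. This is a generic illustration with invented parameters, not the piece's actual filter network: a delay line, a one-pole lowpass, a DC blocker, and a gain with soft limiting are closed into a loop fed only by vanishingly small internal noise, which the loop amplifies into a sustained tone with no external stimulus.

```python
import numpy as np

def feedback_instrument(delay=100, gain=1.5, sr=44100, n=44100):
    """No-input feedback loop: delay -> lowpass -> DC blocker -> tanh(gain)
    -> back into the delay. Fundamental lands near sr/delay (~441 Hz here)."""
    np.random.seed(1)
    buf = np.zeros(delay)                 # delay line, initially silent
    lp = hp = x1 = 0.0
    out = np.empty(n)
    for i in range(n):
        idx = i % delay
        x = buf[idx] + 1e-9 * np.random.randn()   # only "internal noise" enters
        lp = 0.5 * x + 0.5 * lp                   # one-pole lowpass filter
        hp = lp - x1 + 0.995 * hp                 # DC blocker keeps loop off DC
        x1 = lp
        buf[idx] = np.tanh(gain * hp)             # loop gain > 1, soft-limited
        out[i] = hp
    return out
```

Because the linearized loop gain exceeds one at the delay's resonant modes, the noise floor grows exponentially until the tanh limiter caps it, exactly the "amplified endlessly via the loop" behavior described above; changing the filter settings while it runs reshapes the resulting tone.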
Escuela is the second in a series of piano pieces which somehow refer to places where I've lived - in this case, my first home in California, on Escuela Avenue. Almost inevitably, the piece is also bound up in my early experiences as a graduate student, thereby enriching the double meaning of the title.
In Escuela, a computer is employed to modify the sound of the piano in real time. The performer controls the software from the piano keyboard, applying ring modulations which precisely reflect the pitch structure of the original piano music. The result is a kind of mirroring - at a microscopic level, the electronics describe the piano's music in the way that they alter its sound.
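The ring-modulation idea can be sketched as follows. This is an illustrative fragment with assumed names and parameters, not the composer's actual software: the piano signal is multiplied by a sine whose frequency is taken from the note being played, so the sum and difference sidebands mirror the music's own pitch structure.

```python
import numpy as np

def midi_to_hz(note):
    """Equal-tempered pitch: MIDI note 69 = A4 = 440 Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def ring_modulate(signal, note, sr=44100):
    """Multiply the signal by a carrier at the played note's frequency.
    The product contains only sum and difference frequencies."""
    t = np.arange(len(signal)) / sr
    carrier = np.sin(2 * np.pi * midi_to_hz(note) * t)
    return signal * carrier
```

Modulating an A4 (440 Hz) tone by its own frequency, for instance, moves all of its energy to 0 Hz and 880 Hz, leaving nothing at the original pitch: the electronics "describe" the note by displacing it.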
A solo CD of new music by Matthew Burtner, ``Portals of Distortion: Music for Saxophones, Computers, and Stones'' was released in January, 1999 by Innova Records (Innova 526).
Joanne D. Carey
Adventures on a Theme for flute and radio-baton is a flute concerto whose synthesized orchestra includes singing voices, marimba, percussion and guitar as well as strings, woodwinds and brass, often in unusual combinations. Although there is no story-line associated with it, the music seems to tell a story in its winding and wayward lyricism. The middle movement, ``Topsy Turvy and Haywire'', which is entirely improvised, presents another view of the protagonist, the theme. From a compositional standpoint, techniques of variation are explored in each movement, all of which are based on the same theme. The improvised middle movement is based on original programs by the composer which explore ways of varying a pre-composed melody in real-time with the batons. Ideally, the radio-baton and flutist would improvise together. This was realized in a recent performance at the Palo Alto Cultural Center in December 1998, as part of a NACUSA concert. The piece was premiered in San Diego on October 10, 1997 at the Fourth Annual International New Music Festival, sponsored by the University of San Diego. At its premiere, the radio-batonist performed a solo improvisation.
The last interactive piece of a trilogy for soprano and radio-baton. Gracias, as well as its companions La Soledad (1992) and Aqui (1993), was inspired and influenced by Spanish Flamenco and indigenous South American music, and the later poetry of Chilean poet Pablo Neruda. The spirituality and humanity of this great poet continues to impress the composer deeply. In the process of blending Neruda's poetry with the rhythms, flourishes and instrumental sound of these Spanish and South American musical traditions, Joanne Carey drew mainly from strains of solitary meditation and deep sorrow buoyed by irrepressible exuberance and hope. The scores of the electronic accompaniments were created on a Macintosh IIfx using the DMIX composition program developed by Daniel Oppenheim. The sound material for these songs was generated on a Yamaha SY77. Most of the voices are presets, with the exception of the bell sounds and a couple of hybrid sounds that were constructed by the composer and a "sliding sigh" sound developed by Dr. Oppenheim.
The composition has been widely performed: San Jose Chamber Music Society, 1995; SEAMUS conference in Ithaca, New York, 1995; IBM Research Center, Yorktown, New York, 1995; International New Music Festival, San Diego, 1995; University of Maryland, Demo concert with Max Mathews, 1995; Radford, Virginia, Demo concert with Max Mathews, 1995; Peabody Conservatory, Baltimore, Maryland, Demo concert with Max Mathews, 1995; National Association of Teachers of Singing, Winter Vocal Symposium, 1996.
With Scott Walton for celletto, feedback guitar physical model, Disklavier and computers. Premiered at U.C. San Diego, 3 April 1997.
Kui Dong
Youlan, for tape (two or more channels) and synchronized slides (done by visual artist Ruth Ecland), was realized at CCRMA between March 1-22, 1997. Youlan, a winding journey of exploration, is a term derived from classic Chinese poetry and music. The word connotes elements of the excitement of discovery, the lure of the unknown, and the elevation of the ordinary to a place of peak experience. The music is the map through this world, providing both context and direction. Samplings of ancient Chinese instruments have been transformed through digital processing and manipulation to create new sound structures that are evocative of their origins. The dynamic range of this piece is widely distributed: beginning with a highly tense drama, the piece slowly quiets down to a spiritually tranquil end after a series of sound material developments. The samples of steel-plate Chinese instruments were processed and mixed using Spectral Modeling Synthesis (Xavier Serra and Raman Loureiro), Common Lisp Music (Bill Schottstaedt), and Real Time Mixer (Paul Lansky and K. Dickey) on a NeXT workstation at CCRMA, Stanford.
The piece concerns an unfinished childhood dream in which, with the unlimited imagination of a child, a walk is taken through a colorful and unspoiled world. The piece was composed using DMIX, newly developed software for the Macintosh. It was programmed with extreme nesting patterns, in which a simple idea grows into a complex pattern. While composing, Kui Dong does not think excessively about tools and techniques; instead she listens for what best fits her overall concept for a piece of music, looking for the right color and shape for each sound. Purity and sincerity are truths that guide her. In Flying Apples she attempted to catch the transparent, brilliant stars falling from infinity.
Performed at Visual Symbols, San Jose; Stanford University; ICMC 1995, Banff, Canada; LIPM, Buenos Aires, Argentina.
A chamber opera in three acts for eight voices, ten instruments, and a tape with a duration of approximately sixty minutes. To be premiered at the Other Minds Festival in November, 1996.
The story is based on the famous play but goes beyond the original libretto by delving specifically into the themes of identity and desire. The Ice Princess, who is also the central figure in Puccini's opera, is named Cess and is an idolized underground nightclub performer. Part of her on- and off-stage attempt to thwart admirers is to offer the challenge of cracking the enigma of who she really is, with the risk that the wrong answer will bring death. A gangster, new to the area, takes up the challenge and through a series of dream-like realizations discovers that Cess is a hermaphrodite. When he reveals her identity the crowd grows enraged. The mystique of their idol has been disclosed and they retaliate by savagely murdering the gangster.
Kui Dong was a winner of the 1994 Alea III International Composition Prize, Boston; the 1990 National Art Song Competition; the 1989 National Music and Dance Competition, Beijing; a 1995 ASCAP Grant for Young Composers; a 1995 Santa Clara Commissioning Award for Art; a 1995 Djerassi Foundation for Art award; the 1993 Asia-Pacific National Fund; and a 1997 Meet The Composer/USA Commissioning Program award. Kui Dong is currently on the music faculty of Dartmouth College.
A DMA final project in honor of the International Year of the Ocean (1998) and Yemaya, Mother of the Sea.
Songs of the Sea is a cycle in ten sections with a mutable form that reflects the flexibility of the water element; its length varies depending on which of the ten sections are performed. It is the last in a series of aural environments for a poet/photographer's mixed-media installation. For this particular project, there are five sections of poetic text which are dramatically read over collages of sampled environmental sounds, algorithmically generated sections seeded with Indian musical motives and coded with Common Music/Stella, and electric guitar improvisation which has been signal-processed. Interspersed between these sections of performance poetry with recorded backgrounds, each portion of text is also set for soprano over written compositional material based on jazz chord progressions, for synthesizer or piano, electric or acoustic guitar, and celletto or cello.
Cem Duruoz
An interactive piece for Classical Guitar, NeXT (Physically Modeled SynthBuilder Flute), Macintosh (sequencer). Performed at Stanford University, February 4, 1996.
Cem Duruoz is a recipient of the Stanford Student Soloist Award (1992); winner of the Turkish Classical Guitar Competition (1984); and a semi-finalist in the Guitar Foundation of America International Competition (1993, 1995).
Michael Edwards
Composed for stereo tape using samples processed in CLM and note lists generated by Common Music, mixed using Paul Lansky's RT Mixer app. Performed at Stanford University, in Thessaloniki, Greece, and in Buenos Aires, Argentina, in 1995, and in Belfast, Northern Ireland, in 1996.
Michael Edwards and Marco Trevisani
segmentation fault beta 1.0 is a composition for prepared and digitally processed piano, and computer mixed sound files. It uses software (artimix) written by Michael Edwards to trigger and mix sound files stored on hard disk. With this software, sounds are mapped to the keys of the computer keyboard and triggered at will during the performance. Each sound can also be mapped to a specific MIDI channel so that individual gain control can be applied to each sound in the mix through the use of a MIDI fader box. The computer part therefore consists of triggering prepared sounds and controlling their relative amplitudes. This piece is a collaboration between the two performers (Marco Trevisani, prepared piano, Michael Edwards, computer), both of whom are composers. The sounds used were created by the composers using Common Lisp Music, written by Bill Schottstaedt at Stanford University. They were realised with sample processing and manipulation of sounds from various sources, including piano, prepared piano and cello, as well as through direct synthesis using Frequency Modulation techniques. The piece was ``upgraded'' at the end of 1996 to segmentation fault beta 1.1 and was performed at the Opus 415, No. 2 music festival in San Francisco. A multi-track studio recording was made in the summer of 1997.
R.J. Fleck
Part of an interactive performance environment employing movement, sound and sculptural forms, performed at Stanford's Memorial Auditorium and created at CCRMA; the result of a grant received from a consortium of Stanford arts faculty. Featuring a reading by vocalist and CCRMA-associate Emily Bezar, the sound design focused on the creation of soundscapes through the computer processing of previous readings of a composed text, and the real-time processing of both readings of the same text in performance, and other aspects of the performance environment. An early version of SynthBuilder was an essential element of the final performance configuration.
Now living in San Francisco, R.J. is working to complete his program for the Doctorate of Musical Arts degree this year at CCRMA.
Doug Fulton
For computer-generated tape.
The title refers to a character (or hexagram) from the I Ching (the Chinese Book of Changes), whose meaning concerns the way subtle forces, over a prolonged period, can often have a powerful and penetrating effect. The results of such forces, although "less noticeable than those won by surprise attack, are more enduring and more complete". Composed in 1991 (but extensively revised in 1995), The Gentle is the first of what became a series of three pieces for 3 female voices, all of which were written for the group Scottish Voices, directed by Graham Hair. Musically, the `subtle forces' at work are the pure sound of female voices and vibraphone, and the repeated phrases (one for each of the hexagram's 6 lines) which gradually transform, one into the next. There are certainly no `surprise attacks', and the music is meant to conjure up a mood appropriate to its title.
The composition takes its title from a character (or hexagram) in the I Ching (the Chinese Book of Changes). The Well is concerned with the timelessness of existence, and is said to contain its entire history within itself. The Well is the "unchanging within change"; the constant around which all else is in a state of flux. The piece was commissioned by and composed at the University of Glasgow in 1992, and is the second in a series of three pieces for 3 female voices, all of which were written for the group Scottish Voices, directed by Graham Hair. The tape part consists of the layering of several strands of looped patterns, which eventually form a dense and `watery' texture. Weaving around each other, the voices rise in pitch and intensity, emphatically expressing a single idea, the meaning of which can be felt but not conveyed in words (hence the absence of text). For the tape part, material was generated using algorithmic composition techniques, and the specific loops were arrived at after a careful process of selection and editing. The sounds were largely created on a basic FM synthesizer, the Yamaha TX-81Z.
The piece completes a series of three pieces for 3 female voices, all of which were written for the group Scottish Voices (directed by Graham Hair). Like its companion pieces, The Joyous is based on its eponymous character (hexagram) from the I Ching (the Chinese Book of Changes), the qualities of which in this case can be described as inner strength and firmness within, combined with acquiescence and softness without. The piece begins softly with a (hocket-like) pattern spread across the 3 voices, accompanied by (scale-like) figures of changing phrase lengths on the harp. The material is subjected to a variety of transformations, involving pitch, meter, rhythm and mode, but proves ultimately to be unbreakable, as its identity remains intact throughout. This musical journey (process) hopefully depicts in sound some of the attributes embodied by The Joyous.
The installation piece received its premiere at an outdoor location - the 18th-century formal gardens of Greenbank House in Glasgow, Scotland, April 1995. The 4 soundpieces (with a combined duration of over 60 mins.) were composed during a 3-month period of intensive work from January to April 1995.
Awakening invites the listener on a journey through sound. From a state of initial dormancy to the realisation of some ultimate goal, the four component soundpieces represent stages along the way. Non-realtime additive and FM synthesis, and effects processing of sampled sound (using the CLM (Common Lisp Music) software package), created much of the material for Slumber. Algorithmic methods (using the Common Music software package), in which the parametric values of events (e.g. pitch, amplitude, as well as timbral details) were determined according to their position in a metrical hierarchy, generated the patterns used in First Steps, these triggering sounds on a Yamaha TG-77 synthesizer. Quest makes use of more-or-less untreated but unusual sampled sound sources (e.g. recordings made inside a large empty drinking-water container produced the low percussive sounds, and the rustling of a large piece of hardboard produced the percussive sound employed as a cross rhythm) and was assembled entirely using a MIDI sequencer and sampler. For Confluence, granulated, time-stretched, and time-compressed water samples were layered to form a slowly evolving sound texture.
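The granular time-stretching used for Confluence can be sketched generically. The following is an illustrative overlap-add granulator with assumed grain and hop sizes, not the actual CLM code: windowed grains are written at a steady rate while the read pointer through the source advances more slowly, lengthening the sound without changing its pitch.

```python
import numpy as np

def granular_stretch(x, factor=4.0, grain=2048, hop=512):
    """Time-stretch x by `factor`, overlap-adding Hann-windowed grains
    whose read pointer advances `factor` times slower than the write pointer."""
    win = np.hanning(grain)
    n_out = int(len(x) * factor)
    out = np.zeros(n_out + grain)
    read = 0.0
    for write in range(0, n_out, hop):
        start = int(read)
        if start + grain > len(x):
            break                                 # ran out of source material
        out[write:write + grain] += x[start:start + grain] * win
        read += hop / factor                      # slow read -> longer output
    return out[:n_out] * (2.0 * hop / grain)      # compensate overlap gain
```

Setting `factor` below 1.0 compresses instead of stretches, and layering several differently stretched copies of the same water sample gives the kind of slowly evolving texture the note describes.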
Cosmos is a composition for electronic synthesizers, radio baton, and computer. Designed for live, solo performance, the radio baton and computer keyboard are used as controllers in conjunction with custom-built software running on an Apple Power Macintosh computer. Using analog, frequency modulation, and sample playback synthesizers, the composition achieves gradual yet dramatic transformations of timbre and intensity which are activated by movements with the radio baton. Cosmos received its premiere at the CCRMA Summer concert in July 1997.
Nicky Hind is presently working on further compositions to complement Cosmos, for an ongoing series of solo performances which will take place in a variety of venues and cultural contexts...
A seventy-minute concerto in seven ten-minute movements for Boie-Mathews Radio Drum-controlled Disklavier.
Instrumentation: Grand piano and ensemble of plucked string and percussion instruments: mandolin, guitar, harp, harpsichord, bass, 2 percussionists, harmonium.
The piece received its world premiere by the San Francisco Contemporary Music Players in February, 1998 at the Yerba Buena Theatre in San Francisco. The San Francisco Chronicle described it as ``a splendidly kaleidoscopic series of sketches, by turns exuberant, contemplative and austere.'' Work on the piece was supported in part by a Collaborative Composer Fellowship from the National Endowment for the Arts and the Banff Centre for the Arts, Canada. The Seven Wonders of the Ancient World was released on CD in 1996 on the Well-Tempered Productions label and was given an A+ rating by Audio magazine.
PROGRAM NOTE: Two statues, a temple, a roof-top garden, two tombs and a lighthouse. This rather odd collection of monuments has become famous as The Seven Wonders of the Ancient World. All but one, the Pyramids, have been destroyed, either by Nature or by human hands. A closer look at the ``Wonders'' reveals a crosshatch of parallels and oppositions. Two deal with death: the Pyramids and the Mausoleum. The Hanging Gardens glorify cultivated nature, while Artemis was the goddess of wilderness and wild animals. The two statues are of the heavens: Zeus, the god of thunder and rain, and the Sun god of the Colossus of Rhodes.
How can the essence of these monuments be conveyed in music? In searching for an answer, the composer discovered two revolutionary instruments: the Yamaha Disklavier and the Mathews/Boie Radio Drum. The Disklavier is a modern version of the old player piano in that it can ``play itself'', while the Radio Drum is a percussion-like device that translates a percussionist's three-dimensional gestures into computer information. In 1992, the composer conducted a series of experiments combining the Radio Drum and Disklavier and discovered that the flexible and seemingly magical mapping of percussion gestures onto piano sound makes possible the grand, monumental, yet very uncharacteristically ``pianistic'' sounds he had been looking for. The sound of this Drum-Piano is further expanded by an unusual orchestra consisting of instruments that extend the sound of the piano: harp, harpsichord, mandolin, guitar, bass, 2 percussionists, harmonium. Finally, an improvisational approach to the Drum-Piano part allows the performer to respond and react to his unusual instrument. The result is a new kind of piano concerto.
ZephyrBells is a composition for quadraphonic sound created using CLM (Common Lisp Music), SoundWorks and rt.app on the NeXT computer. The only sound source used for this piece is a synthetic bell. The basic idea is that of bell sounds heard from afar, carried on zephyr winds.
Dreaming is written for solo viola and computer-generated tape sound. Its single movement consists of three sections. The first section can be described as Dreaming to Actuality; the second Actuality (viola solo); and finally a return to Dreaming. The tape sounds derive entirely from an acoustic viola played by Keith Chapin, for whom the piece was written. The sound was processed using CLM (Common Lisp Music) on a NeXT computer.
For two sopranos, percussion and computer processed sounds on tape, using CLM, SynthBuilder, SoundWorks and RT on a NeXT computer.
The piece is performed in the dark, lit by five candles. Its central idea is the effect of reverberating sound.
Live stereo sound processing. World premiere: CCRMA 11th Annual Industrial Affiliates Meeting, May 21-23, 1997, Stanford.
Protozoo is a real-time, generative composition that creates a sequential, variation-based form solely as the result of a system of a few basic audio processing operations. As such, it forms an acoustical analogue to dynamical systems often found in phenomena like chemical reactions, population growth, or models of processes in ecosystems. The listener is presented with a ``zoo'' of acoustical pre- and near-``life forms'': simplistic yet complex organisms, some of which develop to be more stable than others. Biological concepts like activation, inhibition, growth and death, transformation, digestion, inheritance and evolution come to mind and are helpful for the understanding of the composition.
Realizations of Protozoo may be either in the form of a sound installation, an interactive instrument (using MIDI controllers), or as an effects processor.
This piece was commissioned by NoTAM for the GRM Acousmonium sound system. The material is derived solely from three old native Japanese instruments: flute, metal clock, and drum, and is then rigorously processed by custom-made DSP applications I wrote exclusively for the piece.
For computer-generated tape. The piece was commissioned by the Norwegian Contemporary Music Organization and completed after two years of work. It was premiered in Oslo, also performed at Stanford, 1995.
The compositional technique is entirely based on digital signal processing. Several DSP applications were written in the C programming language exclusively for this piece, i.e. no commercial application has been used. Downcast serves as a presentation of these programs as well as a demonstration of a rather modern compositional technique: a spinoff of the idea of using a general-purpose computer language (its code) as the musical notation.
The initial audio material for the piece is derived entirely from a recording of a female voice; throughout the piece, this voice is rigorously processed by the computer programs. The original sample, a recording of short laughter, can be heard at the very end of the piece. Complex rhythmical syncopation is a crucial component of the composition. At times there are up to one thousand layers, with the melody line jumping from one layer to another following the pattern of these syncopations. Elements such as dynamics and spatiality are also fundamental to the piece. Reverberant spaces are derived from actual physical rooms in the CCRMA building (everything from the smallest closet to a large auditorium was used for reverb impulse responses). Those room responses are convolved with the material, combined into layers, and used in the style of classical counterpoint. Since the composition was processed and edited entirely in the digital domain (no analog-to-digital converters were used), the sound is exceptionally clean, with a very high dynamic range.
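The room-response technique described here can be sketched with a generic FFT convolution. This is an illustration, not the piece's actual C programs: convolving a dry sound with a room's measured impulse response makes the sound ring as if played in that room.

```python
import numpy as np

def convolve_reverb(dry, ir, mix=0.5):
    """Apply a measured room impulse response `ir` to `dry` by FFT
    convolution, then blend dry and wet paths with `mix` (0..1)."""
    n = len(dry) + len(ir) - 1                    # full convolution length
    size = 1 << (n - 1).bit_length()              # FFT size: next power of two
    wet = np.fft.irfft(np.fft.rfft(dry, size) * np.fft.rfft(ir, size))[:n]
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak                          # keep the wet path in range
    out = np.zeros(n)
    out[:len(dry)] = (1.0 - mix) * dry            # dry path, zero-padded
    return out + mix * wet
```

With impulse responses captured in several different rooms (a closet, an auditorium), the same source can be rendered into each space and the results layered, which is the contrapuntal use of room responses the note describes.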
Chuk-won is based on Samul nori, a traditional form of Korean percussion music. Samul means ``four things'' in English and nori means ``performing''. The ensemble consists of two skin and two metal instruments, symbolizing earth (skins) and the heavens (metal). The instruments are identified with a constantly changing natural world: the metal instruments represent (1) Spring/lightning/thunder and (2) Summer/wind, while the skin instruments represent (1) Autumn/rain and (2) Winter/clouds. It is said that if people play on these four instruments together, the resulting vibrations will harmonize earth and heaven into one universe. Sounds for this piece originate from recordings of skin and metal instruments used in the performance of Samul nori. During this performance, three video projectors display images that metaphorically combine with the music to reflect on the unity of creation. This piece forms the third part of a four-movement composition entitled Chuckwon, which roughly translates as ``invocation''. This movement consists of electronic sounds only; other movements include a percussion quartet as well.
Lukas Ligeti
This piece is about impossible dreams. Many times, and without learning from experience, we build beautiful Paper Castles on Invisible Clouds, thinking yet again that dreams are reality, or maybe that they can be turned into reality with sheer will power and a magical wand. These sections are like twin brothers, intermingled yet separate. As for the last section, Electric Eyes: if one has ever felt the startling contact of electric eyes, there is no need for the composer to explain. If one has not, mere words will never be enough. That's the composer's dream and the cause of a lot of paper castle building...
The piece was composed in the digital domain using the CLM non-real-time sound synthesis and processing environment running on a NeXT, and the four-channel spatialization was performed by a special unit generator programmed by the composer. The original sound materials are sampled tubular bells, cowbells, cymbals, gongs, knives and screams, plus quite simple additive synthesis instruments. The first part was composed while the author was working in Japan at the Computer Music Laboratory of Keio University. It was later finished at CCRMA.
Espresso Machine II is the second incarnation of the first piece to use PadMaster, a new improvisation/performance environment built around the Mathews/Boie Radio Drum, written by the composer on a workstation running the NeXTStep operating system; the piece also features a live electronic cello player (Chris Chafe playing his electronic Celletto). PadMaster is written in Objective-C and uses the MusicKit classes as the basic foundation for MIDI control and sequence playback. The Radio Drum interfaces with the NeXT through a custom MIDI protocol and is used to trigger and control isolated events and event sequences in real time. PadMaster splits the drum surface into programmable virtual pads that can be grouped in sets or "scenes", which in turn represent different behavioral patterns for the different sections of the piece.
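The pad/scene organization can be pictured with a small sketch. The class names and the normalized-coordinate mapping below are assumptions made for illustration; PadMaster itself is written in Objective-C on the MusicKit, not in Python.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pad:
    """One virtual pad: a named region that fires an action when struck."""
    name: str
    trigger: Callable[[], str]            # e.g. start a MIDI sequence

@dataclass
class Scene:
    """One 'scene': the drum surface split into a rows x cols grid of pads.
    Different scenes can carry different pad behaviors per section."""
    rows: int
    cols: int
    pads: List[Pad]                       # rows * cols pads, row-major

    def hit(self, x: float, y: float) -> str:
        """Map a normalized (0..1, 0..1) drum hit to its pad and fire it."""
        col = min(int(x * self.cols), self.cols - 1)
        row = min(int(y * self.rows), self.rows - 1)
        return self.pads[row * self.cols + col].trigger()
```

Switching scenes then amounts to swapping in a new grid of pads, which matches the description of scenes as behavioral patterns for the different sections of the piece.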
Espresso Machine is an evolving dialog between the acoustic / electronic sounds of the Celletto and the contrasting timbres played by the composer on two TG77 synthesizer modules through the PadMaster program controlled by the Radio Drum. PadMaster essentially provides several palettes of pre-built elements that are combined and controlled in real time to generate an electronic soundscape for the Celletto performance.
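As a rough illustration only, and not PadMaster's actual Objective-C/MusicKit classes, the pad/scene grouping described above might be modeled like this (all names here are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Pad:
    """One programmable virtual pad: a region of the drum surface
    bound to an action (trigger a note, start a sequence, ...)."""
    name: str
    action: Callable[[], str]

@dataclass
class Scene:
    """A 'scene' groups pads into one behavioral pattern; each
    section of the piece would select a different scene."""
    name: str
    pads: List[Pad] = field(default_factory=list)

    def hit(self, pad_index: int) -> str:
        # A drum stroke landing inside a pad's region fires that pad's action.
        return self.pads[pad_index].action()

# Hypothetical example scene with two pads
intro = Scene("intro", [
    Pad("bell", lambda: "play bell sample"),
    Pad("loop", lambda: "start gong sequence"),
])
print(intro.hit(0))   # -> "play bell sample"
```

Switching scenes between sections then amounts to rebinding the whole drum surface at once, rather than reprogramming pads one by one.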
"Knock Knock... anybody there?" is an extension to four channels of the original stereo sound track that was composed for a collaboration project with visual artists in 1994. Willie Scholten and Ruth Eckland provided the sculptures and visual framework while this piece served as the sound environment for the installation. The music explores altered states of consciousness, and in particular insanity, in a journey through a three-dimensional soundscape where voices and sounds evoke multiple and conflicting states of mind. All the concrete sound materials used in the piece were gathered during a small meeting with friends where the central topic that motivated the project was freely discussed. From the digital recording, small but significant fragments of the conversation were extracted and subsequently processed in the digital domain using CLM instruments (CLM, Common Lisp Music, a non real-time Lisp-based software synthesis and processing environment). The processing included dynamic spatialization of multiple moving sources rendered for a four-channel reproduction environment. The listener moves through the soundscape while voices and sounds tell several overlapping stories that might occur in the hazy border between sanity and insanity. The piece even includes materials from the piano jam session that happened at the end of the meeting...
"in a room - with room to grow - the fabric of space is floating veils, curtains and webs... alabaster light and tides of time play with them. Grandma is sitting in a rocking chair... she looks at me, smiles, and keeps knitting an infinite tapestry of gifts"
This is a solo piece for PadMaster (a real-time improvisation software package written by the composer), Radio Drum and MIDI synthesizers. PadMaster uses the Mathews/Boie Radio Drum as a three-dimensional MIDI controller. The function of the batons and the behavior of the surface of the drum are controlled through PadMaster and create a set of soundscapes through which the performer chooses a path.
For large symphonic orchestra.
For Perry Cook's collection of shells.
For multimedia.
For solo percussionist with vibraphone and marimba.
For alto saxophone, cello, percussion and tape. While traveling - and wanting to remember the experience - photographs are usually taken of people and places. Not being the most diligent about getting photos developed, the composer finds that several trips usually get mixed together. Snapshots on a Circle is an aural collage of the moods and interactions of the people and places captured in the photographs.
The title, Snapshots on a Circle, has a double meaning. The first is more literal in the sense that several of the photographs were taken during an extended lunch at a cafe on a plaza. The second is more universal in that most travels, no matter how long or how far, eventually wind their way back to their point of origin.
The tape portion of this piece was realized by sampling everyday environmental sounds and then processing them in CLM and SoundDesigner II on an Apple PowerPC. They were then compiled on the Dyaxis II using MultiMix 2.3.
For tape. What is a question? How are questions formulated? As the mind wrestles to grasp the concept of a subject, inevitably, questions begin to form. But not all questions are created equal. Some may be ill-conceived and make no sense, resulting in more confusion. Some well thought out questions, once asked, can be enlightening but raise yet further questions. Some are really not questions at all, but simply a reiteration of the subject in the inquirer's own words in an attempt to understand. Sometimes frustration ensues, and the inquiry must be reapproached. From thought to vocalization this piece explores the musical texture of a question.
This piece was realized through the use of granular synthesis, spectral reshaping and the resampling of vocal samples and computerized instruments. All signal processing was done on an Apple PowerPC.
For computer-generated stereo tape.
``They know that a system
is nothing more than
the subordination
of all aspects of the universe
to any one such aspect.''
- from Tlön, Uqbar, Orbis Tertius by Jorge Luis Borges
Sound. Sound as a metaphor of life, as a living entity that gets transformed with us, inside us, in our memory. Interstices is a journey inside sound, an aesthetic exploration of its components. In Interstices the string quartet meets electronic music in a poetic landscape; instruments become sometimes filters, sometimes synthesizers, not just imitating these electronic devices superficially, but abstracting their functionality: giving form to sound, operating on matter. Musical morphologies in Interstices may be seen as reflections of the interior of a complex sound object, expressed in different time spans: short transients are stretched into long unstable sequences, while instruments modulate stable portions of the sound, illuminating regions of its spectrum. These sound-paths take the form of processes; they evolve in different layers and invite many ways of listening. Take the path you prefer, and enjoy your trip.
The piece was composed at CCRMA, Stanford University, during the summer of 1994. The generation, transformation, and mixing of the sounds for the composition were done on a NeXT computer using Bill Schottstaedt's CLM. Its structure presents a continuous evolution of a group of materials. Sound objects undergo different kinds of mutations in the short and long term, creating through their interaction textures with distinct morphologies.
Performed at the following locations: Semana de la Musica Electroacustica, October 1994, Buenos Aires, Argentina; Stanford University, November 1994; Vibrations Composees, April 1995, Lyon, France; International Computer Music Festival, May 1995, San Diego; Synthese: 25e Festival de Musique Electroacoustique, June 1995, Bourges, France; Punto de Encuentro IV Festival Internacional de Musica Electroacustica, December 1995, Madrid, Spain; Universidad Catolica Argentina, December 1995, Buenos Aires, Argentina. The composition received an award at the 22e Concours International de Musique Electroacoustique, Bourges, France, 1995.
For computer-controlled Disklavier. The textures and rhythms for this piece were generated with Common Music, using a granular/additive synthesis algorithm. The spectrum (harmony) and formants (dynamics) for the piece were derived from an analyzed sound, using the composer's own ATS (Analysis/Transformation/Synthesis) software. A vocal tone, transformed by means of transposition and frequency shifting, was used as a formal metaphor for the whole piece. Premiered at Stanford University, March 1996.
Fiammetta Pasi
For computer-generated stereo tape. As the word 'collage' suggests, similarly to work in the visual arts made by putting together various 'patches of color', this short piece is based on creating and overlapping many 'patches of sound'. This work is the result of explorations in several different computer environments, including CSOUND, STELLA, the Music Kit, and CLM, where the basic materials, the timbres ("instruments"), are realized through additive and FM synthesis. The piece was composed not according to a pre-established plan, but by proceeding in little sections, fragment by fragment, leaving every possibility open, and with the constant intention of keeping the internal movement and energy alive.
Jorge Sad
Both computer-generated compositions were premiered at Stanford University in 1995 and later the same year presented at Centro Cultural Recoleta, Buenos Aires. VoxII was also performed at the International Days of Electroacoustic Music, Cordoba, Argentina, in December 1995, and received the 1995 Juan Carlos Paz electroacoustic music prize granted by the Fondo Nacional de las Artes, Argentina.
The basic aim while composing these pieces was continuing the exploration of sound materials of ethnic and art rock music. As copyright laws are not concerned with large musical structures (forms) or very small ones (sounds), Jorge Sad worked in the border zone in which a sound or small group of sounds are still recognizable as belonging to a particular style, composer or player but are integrated in a totally different musical context.
An algorithmic-composition inspired piece for percussion quartet. The title of the composition derives from responsoria prolixa (great responsories), which are a prominent feature of Matins in the Office. The performers play a large role in shaping the form of the piece. At any given time, one performer leads the other three through the composition, which consists of 20 phrases of between three and eight beats. The leader informs the other performers of which phrase will be played next by sounding a unique two-beat rhythmic pattern. Two beats before the end of the phrase, the leader again chooses a new phrase, and so on. A leader can pass off their leadership role to another performer by playing a specific pattern.
There are five levels of phrases in the composition. The first phrase level contains 3 beats, and then each additional phrase level adds another beat. Each time a leader gains control of the composition, he/she may select phrases from the next higher level; for example, if a performer has just become leader for the second time, that performer may choose to play any of the phrases in levels 1 and 2. Once the original leader has become leader for the sixth time, that performer may choose to end the composition with one of three cadences.
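As a rough illustration only, and not the composer's performance materials, the leader/level rules above can be sketched as a small simulation. The four-phrases-per-level split (5 levels x 4 phrases = 20) and the random passing of leadership are assumptions made for the sketch:

```python
import random

PHRASES_PER_LEVEL = 4   # assumed split: 5 levels x 4 phrases = 20 phrases
NUM_LEVELS = 5

def available_phrases(times_as_leader):
    """A performer who has led n times may draw from levels 1..min(n, 5).

    Phrases are numbered 1..20; level k covers phrases (k-1)*4+1 .. k*4.
    """
    top_level = min(times_as_leader, NUM_LEVELS)
    return list(range(1, top_level * PHRASES_PER_LEVEL + 1))

def simulate(seed=0):
    """Walk one hypothetical performance. For simplicity the original
    leader always ends the piece (with one of three cadences) on
    becoming leader for the sixth time, and leadership passes at random."""
    rng = random.Random(seed)
    times_led = [0, 0, 0, 0]      # per-performer leadership count
    leader = 0                    # performer 0 is the original leader
    history = []
    while True:
        times_led[leader] += 1
        if leader == 0 and times_led[0] == 6:
            history.append(("cadence", rng.choice([1, 2, 3])))
            return history
        phrase = rng.choice(available_phrases(times_led[leader]))
        history.append((leader, phrase))
        leader = rng.randrange(4)  # leadership may pass to any performer
```

So a performer leading for the first time sees only phrases 1-4, and the full set of 20 opens up from their fifth turn onward.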
Bernd Hannes Sollfelner
A computer-generated tape piece, premiered at Stanford University on April 16, 1995, and subsequently performed in Vienna, Austria, 1995-1996.
This seven minute composition was generated from a one minute digital recording of improvised piano. Using Bill Schottstaedt's snd program in the Linux environment, I stretched and pitch shifted the sample many times using granular synthesis until I arrived at a sound much more timbrally rich than the original solo piano.
Kotoka Suzuki
A collaboration work for computer generated tape, dance, and pre-programmed robotic lights. Performed at the Little Theater and the Roble Dance Studio, Stanford, California.
Marco Trevisani
Nine compositions for prepared piano, drum, real time sound processing, computer generated sounds on tape. Performed by Marco Trevisani, Lukas Ligeti, Nick Porcaro, Michael Edwards.
For prepared piano, prepared guitar and computer generated sounds on tape. Performed by Marco Trevisani, Davide Rocchesso and Michael Edwards.
A computer theatre/music production for actors, singer, live electronics and computer-triggered sounds/mixing with ArtiMix. Based on a play by Pirandello.
A computer generated tape composition, inspired by Bruno Maderna's Aura.
Marek Zoffaj
This composition is based on the combination of samples of human breathing, as a basic sign of human life, with music sequences and live soprano sequences. There is a very simple time line, where each ``physical'' gesture of the real world - a breath - is a source of or an anticipation for the musical event. This dialog, which can literally be described as a contact between the real and the imaginary world along two planes of the human mind, was the initial idea that started the work on this piece.
Voice was first realized in the Experimental Electroacoustic Studio of the Slovak Radio in Bratislava and performed in Bourges (1997) and Bratislava (CEM 1997). It is now ``under reconstruction'' at CCRMA.
CCRMA Overview ©2000 CCRMA, Stanford University. All Rights Reserved.