This project is oriented towards the development of new tools for algorithmic composition based on traditional signal-processing techniques. While we ordinarily associate filtering and frequency transforms with sampled audio, these methods also possess a number of properties that make them desirable for generating higher-level musical materials. They offer meaningful and intuitive relationships between input and output. Many such tools also have ``strong parameters,'' where a change to a single parameter produces a substantial and readily observable alteration of the output. Finally, the notion of ``frequency,'' abstracted to a rate of change, analogizes well to music: harmonic rhythm is the most obvious example, but music in general is multitemporal, operating on several time scales simultaneously, from notes and phrases to sections, movements, and complete works.
The first application of these tools was a work for solo violin titled Integrities. In this instance, the time-domain outputs of filters were mapped to musical parameters. Over the course of the piece a number of different filters were used, with particular emphasis on time-variant resonators displaying ``classic'' behaviors such as a sweeping resonance or bandpass center frequency. A variety of mappings were also tested over the course of the piece, including inter-event onset times, phrase onset times, phrase durations, and pitch.
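The mapping from a filter's time-domain output to rhythmic values can be sketched as follows. This is a minimal illustration of the general technique, not the filters or mappings used in Integrities: the resonator coefficients, base duration, and scaling are all assumed values chosen for the example.

```python
import math

def resonator(freq, bandwidth, sample_rate, n_samples):
    """Two-pole resonator driven by a unit impulse; returns its
    decaying, oscillating impulse response."""
    r = math.exp(-math.pi * bandwidth / sample_rate)   # pole radius
    theta = 2 * math.pi * freq / sample_rate           # pole angle
    a1, a2 = -2 * r * math.cos(theta), r * r           # feedback coefficients
    y1 = y2 = 0.0
    out = []
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0                     # unit impulse input
        y = x - a1 * y1 - a2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def to_onsets(samples, base=0.25, spread=0.2):
    """Map each filter sample to an inter-event onset time (seconds):
    a base duration plus a deviation scaled by the normalized sample."""
    peak = max(abs(s) for s in samples) or 1.0
    return [base + spread * s / peak for s in samples]

# A slowly ringing resonator yields onset times that oscillate and settle.
ioi = to_onsets(resonator(freq=2.0, bandwidth=0.5, sample_rate=20, n_samples=16))
```

The same filter output could equally drive phrase durations or pitch; only the `to_onsets` scaling changes.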
Additional work on a companion piece for solo cello concentrates on the sonification of frequency transforms. Spectrograms of speech recordings and other structured audio are the principal data source, and mappings include pitch selection, event duration, and dynamics. The third work in the series, for solo viola, applies filters to various time-domain representations of the musical materials from the violin and cello pieces. The filtered outputs serve as variations of the original music.
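A spectrogram-to-note mapping of this general kind might be sketched as follows. The framing, peak-picking, and MIDI clamping here are illustrative assumptions, not the mappings used in the cello piece, and the test signal stands in for a speech spectrogram.

```python
import cmath
import math

def spectrogram(signal, frame_len, hop):
    """Magnitude spectra of successive frames (naive DFT; fine for a sketch)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        mags = []
        for k in range(frame_len // 2):
            s = sum(x * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n, x in enumerate(frame))
            mags.append(abs(s))
        frames.append(mags)
    return frames

def sonify(frames, sample_rate, frame_len, low_midi=36, high_midi=84):
    """Map each frame's loudest bin to a MIDI pitch and its
    magnitude to a dynamic value."""
    events = []
    for mags in frames:
        k = max(range(len(mags)), key=lambda i: mags[i])
        freq = k * sample_rate / frame_len
        if freq <= 0:
            continue                                   # skip DC-dominated frames
        midi = 69 + 12 * math.log2(freq / 440.0)
        midi = min(max(round(midi), low_midi), high_midi)  # clamp to range
        events.append((midi, mags[k]))
    return events

# A test tone whose frequency steps from 440 Hz to 880 Hz mid-signal.
sr, n = 8000, 1024
sig = [math.sin(2 * math.pi * (440 if i < n // 2 else 880) * i / sr)
       for i in range(n)]
events = sonify(spectrogram(sig, frame_len=256, hop=256), sr, 256)
```

Each resulting event pairs a pitch with a magnitude; duration could be derived analogously from frame-to-frame spectral change.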
The problem with physical models and their appeal to composers is not merely perceptual or aesthetic, nor is it simply a question of understanding the physics and parameters of the actual instrument. It is a question of achieving musical textures and realistic articulations, and of extending the range of a family of sounds given by the single characteristic timbre of a physical model of an instrument. This can be achieved by modeling and manipulating expression parameters for musical gesture. The compositional techniques discussed here render a computer-music piece using physical models as the primary sound-synthesis technique in a non-realtime environment, with parameters manipulated by means of envelopes, randomness, and chaotic signals for expressiveness.
The physical model of the maraca is a flexible algorithm for generating interesting timbres from the percussion family of instruments, and it is well suited to achieving musical expression in digital sound synthesis. A piece called Wadi Musa (or The Monteria Hat) was composed using direct digital synthesis from the physical model of the maraca in the Common Lisp Music (clm) environment. The original physical model was developed by Perry Cook as part of his PhISM approach to computer models of percussion instruments; the clm version is a direct transcription of the Synthesis ToolKit (STK) algorithm by Bill Schottstaedt. A variety of parameters and algorithms were used to build musical structures in a composition for tape and live instruments in which the ``maracas'' sound is the underlying musical element. Performance modeling follows research begun by Perry Cook, Brad Garton, Chris Chafe, and others around 1995. Although physical models have not generally appealed to composers, given their nature of imitating real-world phenomena, in this case the maraca model proved a good tool for achieving a variety of creative goals in pursuing a new composition. Basic parameters such as note duration and dynamic range produce a wide range of musical material, and perhaps a new aesthetic in the perception of computer-generated music.
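The shaker family of PhISM models can be sketched in a few lines: stochastic bean collisions excite a noise source whose envelope feeds a resonator standing in for the gourd. This is a simplified illustration under assumed decay constants and a single resonance; the identifiers and values are not Cook's STK code or Schottstaedt's clm transcription.

```python
import math
import random

def maraca(n_samples, sample_rate=22050, num_beans=64,
           shake_decay=0.999, sound_decay=0.95,
           gourd_freq=3200.0, reso=0.96, seed=1):
    """PhISM-style shaker sketch: random collisions drain shake energy
    into a noise envelope, filtered by a two-pole gourd resonance.
    All constants here are illustrative assumptions."""
    rng = random.Random(seed)
    shake_energy = 1.0            # energy injected by the shake gesture
    sound_level = 0.0             # envelope of the collision noise
    a1 = -2 * reso * math.cos(2 * math.pi * gourd_freq / sample_rate)
    a2 = reso * reso
    y1 = y2 = 0.0
    out = []
    # More beans -> higher per-sample collision probability.
    p_collision = 1.0 - math.exp(-num_beans / 1024.0)
    for _ in range(n_samples):
        shake_energy *= shake_decay            # the gesture dies away
        if rng.random() < p_collision:         # a bean strikes the gourd wall
            sound_level += 0.05 * shake_energy
        x = sound_level * (2 * rng.random() - 1)   # noise burst at this level
        sound_level *= sound_decay
        y = x - a1 * y1 - a2 * y2              # gourd resonance
        y2, y1 = y1, y
        out.append(y)
    return out

sig = maraca(2000)
```

Varying `num_beans`, the decay constants, or the resonance over the course of a note is one plausible route to the expressive parameter manipulation described above.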
|© Copyright 2005 CCRMA, Stanford University. All rights reserved.|