Expression with Algorithms at Los Andes

Juan Reyes
Cynthia Lawson
MOX - Computación Avanzada en Ingeniería
Departamento de Artes
jreyes@uniandes.edu.co
lawson@colomsat.net.co



Abstract

This article describes an alternative approach to the problem of expression in non-real-time software synthesis programs such as Common Lisp Music and Csound, through Computer Assisted Composition heuristics. Our objectives focus on compositional procedures and on expressing musically through the use of algorithms and functions. We base our work on the assumption that machine composition is a good alternative for composers and musicians in Colombia. This paper presents the results of our work on this subject over the past several years at Los Andes.



Introduction

For most Colombian performers, the problem of obtaining a high degree of dexterity stems from a lack of formal instrumental technique. This is, in part, because music is learned from generation to generation, far from Western systematic and academic training. For this reason our Computer Music research group at Los Andes has focused its efforts on researching and providing tools for newer generations of musicians who will not require a high level of technical and manual dexterity with traditional instruments. We have worked with software synthesis programs that require a certain amount of computer literacy in order to achieve a minimum degree of control over basic synthesized sound. By working with computer-aided composition packages like Common Music (Taube, 1991) and Patchwork (Rueda, 1993), we have realized that users can gain a higher degree of expression through traditional algorithms, or through objects connected to one another, to produce musically meaningful compositions. We assume that computers can render a piece successfully, but at this time our attention turns more toward how we can successfully render an expressive performance of a computer piece (O'Modhrain, 1997).

Assumptions Around Musical Expression

We base our work on the assumption that musical expression is developed within a traditional, pre-established system of writing music whose aim is to narrate an anecdote through the use of sounds. From this we can speak of the motive, conceived as the shortest meaningful melodic segment. Then there is the phrase, which is the union of several motives. Above them there is the period, constituted as a sequence of several phrases directed toward a single goal. Goals may create tensions or relaxations according to a set of performance rules. It is also important to distinguish the musical event, or the sound object in tape compositions, as an expressive pulse with a hierarchy of patterns of regular deviations from nominal time and dynamic values (Clynes, 1983). Consequently, we can speak of a system resembling speech, with a grammar of phrases: one being a question that creates a tension resolved by the answer of the next. In this context, we also assume that music and expression lie in the time domain. This allows us to assert that "musical gesture" is also a function of time. Most of these rules are based on the segmentation rules provided by the GTTM of Lerdahl and Jackendoff (Lerdahl and Jackendoff, 1983).



Musical Gesture and Real-Time Performance

For us, performance is the carrier of musical gestures. It behaves like a wave in the sense that it conveys to the listener or the performer the information necessary to perceive expressiveness. In the case of traditional physical instruments, there is the advantage of a pre-existing connection between haptics and music, embodied in the performer/instrument interaction. The translation of a gesture into sound involves two sensory modes: how the musician perceives the sound, and how the musician relies on the instrument's mechanical response for information about the results of his or her actions (O'Modhrain, 1997). In this way, we can extract expressive parameters from a real-time performance, record them, and then manipulate and mix them according to the performance rules of our composition. These parameters are usually recorded by means of MIDI sequencers or by analyzing audio signals through Spectral Modeling (Serra, 1997).



Expressiveness and Tape Compositions

We consider that the character of a piece of Tape Music, a frozen performance, is achieved by concentrating most of the expressive force on timbre, texture, tone color, and timing. The only way these sounds can exist in a musical context is on tape or in recordings, and therefore we do not refer to them as we refer to a musical score. Our work with Spectral Models and expression has given us some useful results on phrasing with speech sounds, phrasing in musical passages for the flute, and also on the subject of nuance with unpitched, bell-like instruments.

Synthesis of Electronic Music is based on the generation of sound and is thus abstract by nature. The expressiveness of an electronic sound object narrows down to its newness and uniqueness, owing to the element of surprise. The composer gives it an identity and a context in which it can exist. This heuristic gives us the advantage of making the sound object part of the composition by creating its expressive parameters. There have been many attempts at expression with electronic sounds. Jean-Claude Risset (Risset, 1995) has published a catalog of examples for designing and composing electronic music objects, with suggestions on how we can express ourselves with them musically through the use of algorithms.



Gesture and The Physical Model

We have focused a significant part of our research on the synthesis of electronic sounds, and in particular on Physical Modeling (Borin et al., 1992). We have found that by working with this technique we gain knowledge not only of parametric sound generation but also of the expression parameters needed to control the model. We can speak of two important approaches to physical models. In the first, these models of musical instruments can be used to research and study the functioning of traditional instruments, and as a tool for composers. The second, which interests us most, is to take advantage of both the absence of a performer and the flexibility of our compositional medium, the computer, to obtain sounds otherwise impossible with just a traditional instrument and a musician. Since we are modeling an instrument, a specific physical object which already exists, we gain a great deal of expressiveness and flexibility by generating sounds which come from an existing source but at the same time do not exist in the traditional setting, precisely because our performer is the computer. This means that we are modeling not only sound but also expressiveness (Marrin, 1997).
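To make this concrete, the sketch below is a minimal Karplus-Strong plucked string (Jaffe and Smith, 1983) written in plain Common Lisp. It is illustrative only, not our waveguide flute nor the CLM instruments used in the pieces, and the function name pluck is hypothetical: a delay line is filled with noise (the pluck) and recirculated through an averaging lowpass filter, and the length of the delay line sets the pitch.

(defun pluck (freq dur &optional (srate 22050))
  ;; Karplus-Strong sketch: the delay-line length n sets the pitch.
  (let* ((n (round srate freq))
         (delay (make-array n :initial-element 0.0d0))
         (len (round (* dur srate)))
         (out (make-array len :initial-element 0.0d0)))
    ;; The "pluck": fill the delay line with random noise.
    (dotimes (i n)
      (setf (aref delay i) (- (random 2.0d0) 1.0d0)))
    ;; Recirculate the delay line, averaging adjacent samples (a
    ;; one-zero lowpass), so the tone decays and mellows like a string.
    (dotimes (j len out)
      (let ((i (mod j n))
            (next (mod (1+ j) n)))
        (setf (aref out j) (aref delay i))
        (setf (aref delay i)
              (* 0.5d0 (+ (aref delay i) (aref delay next))))))))

Calling (pluck 440.0 1.0) returns one second of samples at A440; writing them to a sound file is left to the host system. Changing a single physical parameter, the averaging coefficient or the delay length, audibly changes both the timbre and the perceived gesture, which is what makes such models attractive as expressive material.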



Algorithms and Composition

There are very good tutorials on algorithmic composition in the computer music books by Dodge and Moore. Xenakis touches on the subject in most of his writings about music and mathematics (Xenakis, 1992). The classic "The Technology of Computer Music" by Max Mathews starts out by explaining how to give musical meaning to sound generated by computers. Both Common Music and Patchwork have artistically oriented tutorials that invite composers to think algorithmically (Hind, 1995). Therefore we will not try to give a tutorial on how to arrive at a certain musical gesture; rather, we will give some examples of how we have been able to obtain musically useful gestures which have been used in several compositions. We speak of an algorithm as a "well-defined" computational procedure that takes some value(s) as input and produces some value, or set of values, as output.

Most of our algorithmic composition research has been done with Common Music. In this environment our basic unit, the motive, can be the starting point of the composition. From there a phrase can be constructed. By joining contrasting phrases a period may stand out. Adjacent periods create threads. Mixing or time-offsetting two or more threads can form counterpoint and harmonic-progression effects. In the process of motive and phrase construction, the sequence of note events can be generated randomly or deterministically. Randomness is an interesting parameter which, used wisely, can yield theme and variations. In conjunction with Music V-style software synthesis packages like CLM (Common Lisp Music) or Csound, Common Music can also manipulate nuances or spectral parameters. These changes are frequently perceived as timbre manipulations, which in the case of a physical model indicate a change in the level of expression of the musical phrase. This kind of situation is easily identified in models of plucked string instruments: the harder the string is plucked, the greater the amplitude of the vibration. Real-time performance parameters or MIDI control files can also be imported into Common Music, often creating variations in key velocities and tempo. There is no direct connection between Spectral Modeling (.sms) files and Common Music, but most of the performance information extracted from .sms files can be entered manually, through tables and functions, to manipulate gesture parameters in Common Music structures.
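As a brief illustration, the fragment below is a hypothetical sketch in the same Common Music dialect used in the examples later in this paper. It restates a five-note motive but draws its pitches by random permutation (the heap directive, described in the next section), so that every pass through the collection becomes a variation of the same theme:

(algorithm variation flute
  (length 24 rhythm .2 duration .2 amplitude .5)
  ;; same pitch collection on every pass, but in a new random order:
  ;; a simple theme-and-variations device
  (setf pitch (item (notes c4 d ef f g r in heap))))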






(Figure 1: An Algorithmic and Expressive Phrase)



Performance Rules or Constraints on the Algorithm

Phrases are constructed from a sequence of pitch or frequency values and rests or silences (see Figure 1). The sequence of events can be generated randomly or deterministically. If there is no randomness, a cycle or pattern of repetitions in melody, rhythm, or both can be obtained. Repetitions create patterns for theme and variations. Random effects are not highly desirable in time-domain events such as rhythm, accelerandos, and diminuendos. Longer durations accentuate the musical event. Silences of different durations are desirable after periods and between phrases. Ascending motives are more rapid than descending motives. All notes are of equal importance, although one or more notes are accentuated in the repetition by a change in amplitude and a change in duration. Most of these rules are perceived as connectives and associations to the theme of the composition. Every one of these constraints can be controlled sequentially in Common Music with the aid of the following directives, where an element is a single expressive parameter: accumulation enumerates elements by group; cycle takes elements in a loop, returning from the last to the first; heap produces elements by random permutation; palindrome presents elements forward and then backward; random enumerates elements in random order; and sequence presents elements in order and sticks on the last element (Hind, 1995).
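For instance, a strict rhythmic ostinato can be obtained with the cycle directive. The fragment below is a hypothetical sketch in the same dialect as the examples that follow: the duration pattern loops indefinitely while the pitches run in a palindrome over it.

(algorithm ostinato flute (length 24 amplitude .5)
  ;; cycle loops the rhythmic pattern: quarter, two eighths, sixteenth
  (setf rhythm (item (rhythms q e e s in cycle)))
  (setf duration (* 1.02 rhythm))
  (setf pitch (item (notes c4 d ef f g r in palindrome))))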

The following example illustrates a simple melody in which the interval relationship moves forward and then backward. This creates a mirror image of the interval structure, which is easy to hear and to remember. In this case the only parameter of the musical object whistle that we manipulate is pitch. We add a rest (r) to give a sense of breathing.

(algorithm whistle flute
  (length 24 rhythm .2 duration .2 amplitude .5)
  (setf pitch (item (notes c4 d ef f g r in palindrome))))


Although we have a clear image of our melody, after a while it tends to become homogeneous and therefore monotonous. At this point we need to introduce another expressive parameter, rhythm, to provide one more degree of variation.

(algorithm whistle flute
  (length 24 amplitude .5)
  (setf pitch (item (notes c4 d ef f g r in palindrome)))
  (setf rhythm (item (rhythms e s s q in accumulation)))
  (setf duration (* 1.02 rhythm)))


Now we need to include accents in order to create a sense of time signature and rhythmic pattern. This is given by the amplitude parameter in the final expression.

(algorithm whistle flute (length 24)
  (setf pitch (item (notes c4 d ef f g r in palindrome)))
  (setf rhythm (item (rhythms e s s q h in accumulation)))
  (setf duration (* 1.02 rhythm))
  ;; accent pattern: loud attacks with softer notes in between
  (setf amplitude (item (amplitudes ff f
                                    (amplitudes mp (for items 2 4))))))


The result of this algorithm is an expressive melody in terms of interval relations, rhythmic variation, and dynamics. The following example is an actual algorithm we used to express with the physical model of the flute.

(algorithm berri flute ()
  (setf freq (item (items
                    (pitches c4 ef d f ef g f af g bf af c5 bf4 d5 c
                             in rotation change (change start '(0 1) step 2))
                    r
                    (pitches ef5 c d bf4 c5 af4 bf g
                             af f g ef f d ef c4 d bf3 c4
                             in rotation change (change start '(0 1) step 2))
                    r)
                   :kill 13))
  (setf rhythm (item (rhythms e q s in random tempo 80)))
  (setf dur (* 1.02 rhythm))
  (setf amplitude (item (items (amplitudes f mf ff mp fff p in heap)))))


The melodic result of this algorithm gives us a set of ascending thirds and their variations, repeated thirteen times. After the first sequence, with the aid of a rest, we get a conjunction connective with a set of descending thirds and their variations, also thirteen times. After this process we obtain more disjunctives that reveal true values of the interval combination. These are recorded by the listener, who in turn will create a new set of true values. The groups are rotated and their rhythms are randomly picked. The dynamics are performed by permutation. The use of randomness can also produce expectation connectives. With this we achieved a high degree of variation in the expressive parameters, with the goal of composing an understandable melodic object which changes in time, avoiding any kind of monotony. The idea behind this melody was to create, instead of a theme and variations, a place, or soundscape, in which a flutist would be practicing the instrument. Although reverb could have been programmed in this algorithm, reverberation and panning were added with the aid of a mixing program, to make the sound object move around.
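In CLM, for example, reverberation and placement can also be requested when the score is rendered. The following is a minimal, hypothetical sketch: with-sound and the jc-reverb instrument ship with standard CLM distributions, and fm-violin (CLM's classic demonstration instrument) merely stands in here for the flute model, which is not shown.

;; assumes v.ins (fm-violin) and jc-reverb.ins have been compiled and loaded
(with-sound (:channels 2 :reverb jc-reverb)
  ;; :degree places each note in the stereo field
  (fm-violin 0 1.0 440 0.1 :degree 15)
  (fm-violin 1 1.0 660 0.1 :degree 75))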



Conclusions

Through our work we have found that algorithms give us a great deal of control over expressive parameters. The scope of this control can be narrowed down to the sampling rate of the audio signal and extended to its bandwidth. As for the mathematical knowledge required, there have been compositions which do not use sophisticated functions and yet show very good results. This kind of work benefits composers because it can be modular, scaling from a simple expression up to complex mathematical functions. The use of statistics and probability to create natural behavior in a musical nuance is of great importance. The learning curve on this side tends to be moderately steep, but has also proven rewarding. If a composer is willing to overcome the barrier of programming languages, we can assure that these models offer a great deal of output flexibility. Each model has to be well specified, with many parameters which will control the expressiveness of the sound generated. In the case of instruments, once we have a model well defined algorithmically, we can go on to the most fascinating characteristic of physical modeling: the opportunity to work with the building block of sound, the wave.

In the case of the physical model of the flute, we have been able to experience the factors mentioned above. Developing a physical model with waveguides was a challenging and fascinating experience, because we realized how many factors we had to take into account when we defined a musical instrument. Then, composing with this model and with others like plucked strings (Jaffe and Smith, 1983), we obtained greater flexibility by knowing which parameters we could handle. In the Bb flute we were able to isolate the embouchure, the attack, the body, and the tone-holes. From our expressive perspective we found that the most interesting, and most difficult, parts to model were the tone-holes, and especially the transition between one note and the next. But since a physical model on a computer has no maxima or minima, but rather infinite lung capacity and supernatural finger speed, we lived a meaningful and highly interesting compositional experience. We have come to the conclusion that the technical difficulties may be an obstacle for some musicians. But above all these problems, it is important to understand, as Curtis Roads states, that "...computer sound synthesis is the bridge between that which can be imagined and that which can be heard" (Roads, 1996).



References

Borin, G., G. De Poli, and A. Sarti. 1992. "Algorithms and Structures for Synthesis Using Physical Models". Computer Music Journal 16(4): 30-42. Cambridge, Mass.: MIT Press.

Clynes, M. 1983. "Expressive Microstructure in Music, Linked to Living Qualities". In J. Sundberg (ed.), Studies of Music Performance (Vol. 39). Stockholm, Sweden: Royal Swedish Academy of Music.

Hind, N. 1995. "Common Music and CLM Tutorials". http://www.ccrma.stanford.edu/pub/lisp/tutorials/. Stanford, Calif.: CCRMA, Stanford University.

Jaffe, D., and J. Smith. 1983. "Extensions of the Karplus-Strong Plucked String Algorithm". Computer Music Journal 7(2): 56-69. Cambridge, Mass.: MIT Press.

Lerdahl, F., and R. Jackendoff. 1983. A Generative Theory of Tonal Music. Cambridge, Mass.: MIT Press.

Marrin, T., and J. Paradiso. 1997. "The Digital Baton: A Versatile Performance Instrument". Proceedings of ICMC-97, pp. 313-316. Thessaloniki, Greece.

O'Modhrain, M. S. 1997. "Feel the Music: Narration in Touch and Sound". Proceedings of ICMC-97, pp. 321-324. Thessaloniki, Greece.

Piccialli, A., and C. Roads, eds. 1998. Musical Signal Processing. Swets and Zeitlinger.

Risset, J. C. 1995. "My 1969 Sound Catalogue: Looking Back from 1992". Reprinted in The Historical CD of Digital Sound Synthesis. Mainz, Germany: Schott Wergo.

Roads, C. 1996. The Computer Music Tutorial. Cambridge, Mass.: MIT Press.

Rueda, C. 1993. Patchwork Programming Guide. Paris, France: IRCAM.

Serra, X. 1997. "Musical Sound Modeling with Sinusoids plus Noise". In G. De Poli, A. Piccialli, and C. Roads (eds.), Musical Signal Processing. Swets and Zeitlinger.

Taube, H. 1991. "Common Music: A Music Composition Language in Common Lisp and CLOS". Computer Music Journal 15(2): 21-32. Cambridge, Mass.: MIT Press.

Xenakis, I. 1992. Formalized Music, Revised Edition. New York: Pendragon Press.


