A survey of the use of technology in musical creativity. Students will familiarize themselves with works from the repertoire, and will take advantage of CCRMA's International Digital ElectroAcoustic Music Archive (IDEAMA).
Music 154 is a designated 'writing in the major' course. In addition to weekly listening and analysis assignments there will be two written projects: a term paper on a historical or analytical topic, and a critical paper dealing with an aesthetic issue developed in class. One of these must be handed in as a paper; the other may be submitted as an HTML contribution to the class' web site.
These are some of the various categories/dichotomies/juxtapositions we came up with in class while listening to Edgard Varèse's Poème électronique. The assignment, which will be due on or around April 17th, is to create your own score in whatever manner you feel best represents the piece. There is a CD of the piece in my box at CCRMA (it's a joint box with Bobby Lombardi, so it's alphabetized under "L", but my name is on it too). If you remove the CD from my box, please leave a note with your name, when you took it, and your email address so that others can contact you if necessary. I will make more copies of the CD soon.
There's a RealAudio site for Poème électronique ... awful sound quality ... also an .au file (both appear to be excerpts).
As I mentioned in class, we'll be jumping back and forth in time a bit at the beginning of the quarter ... here's a timeline that might help put things in a little bit of perspective.
Basic types of sounds:
- Electronic sounds vs. natural sounds
- Altered sounds vs. unaltered sounds
- Pure vs. impure (related to altered vs. unaltered?)
- Simple vs. complex
- Instrument or instrument-like
- Water-like sounds (spilling, drips)
- "Clear" vs. "muted"
- Changing vs. static
- Heavy vs. light
- Specific descriptions:
- Few vs. many
- Thick vs. thin
- Near vs. far
- Left vs. right vs. center
- Specific descriptions:
- Reverb (perceived distance from source, room size)
- Soft vs. loud
- Increasing vs. decreasing vs. static
- Rapid changes
- Big vs. small
- Rhythmic vs. non-rhythmic
- High vs. low
- Continuous vs. discrete
We jumped ahead in time a bit to discuss a true pioneer in electro-acoustic music, Morton Subotnick. He has been extremely active in all aspects of electronic music, from composition to the development of instruments and interactive programs (Interactor). His career can also be traced through his website, www.mortonsubotnick.com.
We listened to a number of pieces by Subotnick, including:
Silver Apples of the Moon -- the first piece commissioned specifically for a recorded medium. (Excerpt)
Trembling -- for violin, piano, and "ghost" electronics
Jacob's Room
The Key to Songs
All My Hummingbirds Have Alibis
We also mentioned Subotnick's interest in educating youths about contemporary music; visit www.creatingmusic.com to play with his online interactive program, and also check out Making Music, the CD-ROM version of the program.
We watched the movie "Theremin: An Electronic Odyssey" (available for viewing in the Media and Microtexts area of Green, in the basement).
More info on the theremin and on Leon Theremin:
We also mentioned the Rhythmicon:
And the Telharmonium:
And the Terpsitone:
Over the last few classes we've gone over the Poème électronique assignments and discussed some general historical trends and concepts in electronic music.
We began by talking about the concept of nationalism in music, and we made a distinction between German and French "styles." Professor Berger mentioned the art exhibit he saw where paintings were arranged geographically, and we discussed some of the differences in art from France, Northern Germany, and Southern Germany. For a comparison, he mentioned Monet's haystack studies and compared them to art from Northern Germany which tends to be much darker.
We then talked about concepts of nationalism in music of the Soviet Union. We mentioned composers such as Rachmaninoff, Prokofiev, Shostakovich, Stravinsky, and Mussorgsky as examples of active Russian composers at the turn of the 20th century (although none of them were creating electro-acoustic music).
Here's a link to a really bad MIDI generation of "Promenade" from Mussorgsky's Pictures at an Exhibition, which was mentioned in class. (Definitely not a substitute for hearing the real thing!)
We then talked about the concept of anti-nationalism or pan-nationalism, or universality in music. We listened to Max Mathews' "International Lullaby," which interpolated between folk (or folk-like) songs. We also listened to Stockhausen's Hymnen, and to Berio's "Thema (Omaggio a Joyce)."
We continued by listening to other Berio pieces; we listened to two movements of Sinfonia and to Synchronisms III. Neither work is electronic, but we talked about how the pieces were influenced by Berio's experience with electro-acoustic works and technology.
Electronics have historically been used either to create "new" sounds or to replicate natural sounds. In the 20th century, composers of instrumental music tended to push the limits, asking performers to create sounds that were less and less "natural" and more and more influenced by technology.
We also discussed the paradox of the electronic medium in general, which is that electronics, especially digital, tend to be inherently discrete in nature -- yet one of the main uses for the electronic medium is to create smooth transformations ... from one sound to another, from one folksong to another (like the Mathews piece), from one instrument to another, and so on.
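The paradox can be made concrete in a few lines of code: even the smoothest transformation is, at the sample level, a discrete computation. Here is a minimal, purely illustrative sketch (not from any piece discussed) that interpolates sample by sample between two sounds:

```python
# A discrete medium producing a "smooth" result: linearly crossfade
# from sound a to sound b over their common length.
def crossfade(a, b):
    """Interpolate sample-by-sample from a (start) to b (end)."""
    n = min(len(a), len(b))
    return [((n - 1 - i) * a[i] + i * b[i]) / (n - 1) for i in range(n)]
```

The "transformation" is nothing but arithmetic on individual samples; the smoothness is entirely in the ear of the listener.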
At the end of class we discussed potential topics for term papers.
We listened to Steve Reich's Come Out and Piano Phase, and witnessed an embarrassingly bad attempt at Clapping Music (destroyed, as usual, by your humble teaching assistant). For another Steve Reich page, and a long interview with him, visit this site.
In Clapping Music, two performers start out together with the same rhythmic pattern, shown above. After a certain number of repetitions, determined in advance, one performer 'rotates' the rhythm, taking the first element of the rhythm -- be it note or rest -- and appending it to the end of the rhythm. The other performer maintains the original rhythm as an ostinato throughout. The rotation continues until the rhythm has gone through all of its rotations and is the same as the original pattern again.
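The rotation process can be sketched in a few lines of code. The 12-beat pattern below is Clapping Music's pattern (1 = clap, 0 = rest); the function names are mine, not Reich's:

```python
# Clapping Music's rhythmic pattern: 1 = clap, 0 = rest.
PATTERN = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

def rotate(pattern):
    """Move the first element (note or rest) to the end."""
    return pattern[1:] + pattern[:1]

def all_rotations(pattern):
    """The sequence of variations the rotating performer plays."""
    current = list(pattern)
    rotations = []
    for _ in range(len(pattern)):
        rotations.append(current)
        current = rotate(current)
    return rotations

# After as many rotations as there are beats, the pattern
# lines up with the ostinato again, ending the piece.
assert rotate(all_rotations(PATTERN)[-1]) == PATTERN
```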
Reich found this process fascinating, and used it in a number of pieces for live performers; he used the same idea in electronic music as well. Come Out and It's Gonna Rain were the results of experimenting with tape loops running simultaneously on different machines; due to slightly different playback rates, the loops gradually go out of phase. The result is a constantly changing sonic environment.
We also screened Forbidden Planet, a sci-fi movie scored entirely by Louis and Bebe Barron with electronic instruments.
We discussed the mathematical properties and ramifications of the Fibonacci series, and talked about its uses in music, art, and architecture. (Definitely look at that last link ... this guy has way too much free time.) Jonathan discussed the film "The Battleship Potemkin," which uses the film technique of montage, apparently designed with the Fibonacci series and the Golden Mean in mind on various levels throughout the film.
We also talked about composers who have used the Fibonacci series, focusing on Bartók's use of the series in his compositions; we specifically mentioned Music for Strings, Percussion, and Celesta. (Other link.)
John Chowning took the Fibonacci series concept to its logical end in his piece Stria, which is entirely composed, on multiple levels, using the Fibonacci series and Golden Mean calculations as compositional tools and formulae. His system was based entirely on mathematical models, even fitting his system of tonality to these ratios (instead of an octave ratio of 2:1, his pseudo-octave was 1.618:1, the Golden Mean ratio).
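The arithmetic behind this is easy to verify. Here is a small sketch (mine, not Chowning's actual system) showing both how ratios of consecutive Fibonacci numbers converge to the Golden Mean and what a 1.618:1 pseudo-octave frequency series looks like:

```python
# The Golden Mean, ~1.618..., is the limit of F(n+1)/F(n).
PHI = (1 + 5 ** 0.5) / 2

def fibonacci_ratio(n):
    """Ratio of consecutive Fibonacci numbers after n steps."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

def pseudo_octaves(base_freq, count):
    """Frequencies spaced by a 1.618:1 'octave' instead of 2:1."""
    return [base_freq * PHI ** i for i in range(count)]
```

Starting from 100 Hz, the pseudo-octaves fall at roughly 100, 161.8, and 261.8 Hz, rather than the familiar 100, 200, 400 Hz.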
Historical survey of speech synthesis techniques
In the beginning, there was Bell Labs and Max Mathews. In the 1950s, Bell Labs wanted to create sound synthesis techniques for communications purposes. In 1961, Max (with the assistance of a number of technicians at Bell Labs) made a computer sing "Bicycle Built for Two." (.aiff file) This demonstration inspired the scene in 2001: A Space Odyssey in which Hal the computer sings the song while being disconnected. It is an early example of speech synthesis technique.
There have been two models used for speech synthesis: the spectral synthesis model and the physical synthesis model. Spectral models are based on perceptual models, while physical models are based on production mechanisms. Examples of spectral models include vocoders, sinusoidal analysis, and John Chowning's FM synthesis of the singing voice. Examples of physical models include the early acoustic tube models (such as the one used to create Mathews' piece). There are also models that combine the two paradigms. Formant tracking synthesizers, while spectral, can also be considered pseudo-physical because the source/filter analogy they use is very close to that of human speech. Another is Linear Predictive Coding, or LPC, which is also based on both the spectral and physical models.
John Chowning used two carrier oscillators and one modulating oscillator, all tuned in whole number multiples of the same frequency. Chowning developed a set of characteristics of the soprano singing voice which guided his synthesis design. A provision is made for vibrato with a random deviation; also a slight portamento, but only during the attack portion of the note. Chowning has observed that without the vibrato, the carriers and modulator do not form a singing pitch. This new application of FM was developed at IRCAM in 1979, and his piece Phoné was realized at CCRMA in 1980-81.
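The design described above can be sketched numerically. This is a simplified illustration of the two-carrier/one-modulator idea, with vibrato as a slight periodic deviation of the fundamental; the specific parameter values and function names are mine, not Chowning's:

```python
import math

SR = 22050  # sample rate assumed for this sketch

def fm_vowel(f0, formant_harmonic, index, vib_rate, vib_depth, dur):
    """Two carriers (the fundamental and a harmonic near a formant),
    both modulated by one oscillator at the fundamental, with vibrato.
    Returns a list of samples. Illustrative only."""
    samples = []
    phase_c1 = phase_c2 = phase_m = 0.0
    for n in range(int(dur * SR)):
        t = n / SR
        # vibrato: small periodic deviation of the fundamental,
        # which all three oscillators track (keeping them harmonic)
        f = f0 * (1.0 + vib_depth * math.sin(2 * math.pi * vib_rate * t))
        phase_m += 2 * math.pi * f / SR
        mod = index * math.sin(phase_m)  # FM modulation term
        phase_c1 += 2 * math.pi * f / SR
        phase_c2 += 2 * math.pi * (formant_harmonic * f) / SR
        # carrier 1 supplies the fundamental, carrier 2 a formant region
        samples.append(0.7 * math.sin(phase_c1 + mod)
                       + 0.3 * math.sin(phase_c2 + mod))
    return samples
```

Chowning's observation was that without the coherent vibrato applied to all the oscillators at once, the components do not fuse into a single sung pitch.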
There are two types of formant analysis. Both methods are based on a prior analysis of speech; the difference lies in the way the data is manipulated and fed back for resynthesis.
Formant synthesis by rule: previously analyzed parts of speech are recombined to make spoken sentences or phrases, using phonemes derived from research in acoustic phonetics and applying rules for connecting the phonemes into speech. Intelligible speech cannot result from simple combination of prerecorded phonemes, because the acoustic properties of phonemes are altered significantly by the context in which they occur. Programs have been developed that include consideration of the complex parameters of prosody. Although the sound is intelligible, it is clearly distinguishable from natural speech. Examples of this include the talking voice of a Macintosh computer or some automated voices (like Texas Instruments' Speak and Spell).
Formant synthesis by analysis: the speech is digitized and analyzed. The analysis transforms the speech signal into a series of short-term spectral descriptions, one for each segment, or frame. Each spectrum is then examined in sequence for its principal peaks, or formants, creating a record of the formant frequencies and their levels versus time. The record of change in formant positions with time can then be used to reconstitute the speech.
A frame is defined by its amplitude, its duration, and the characteristics of its formants, which are specified as characteristics of band-pass filters. These characteristics can be changed with regard to the frame's amplitude, frequency, and duration, among others. When the filter is inverted and a noise generator of some kind is applied, the result is the original or manipulated speech. This type of synthesis works best on a spoken male voice. It is much less accurate on female and children's voices because the relatively high fundamental frequency makes the formants harder to track. An example of this technique is demonstrated by Charles Dodge in his 1973 composition Speech Songs. This piece is made of four short poems, read by Dodge, then digitized and altered. The first three of the poems use the technique of formant tracking by analysis; the fourth uses a technique called Linear Predictive Coding, or LPC.
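The frame description above maps naturally onto a small data structure. A sketch (field names are illustrative, not from any particular analysis system):

```python
from dataclasses import dataclass, field

@dataclass
class Formant:
    """One band-pass filter within a frame."""
    center_freq: float   # Hz
    bandwidth: float     # Hz
    level: float         # relative amplitude

@dataclass
class Frame:
    """One short-term analysis frame: amplitude, duration,
    and the band-pass characteristics of its formants."""
    amplitude: float
    duration: float                      # seconds
    formants: list = field(default_factory=list)

def shift_formants(frame, ratio):
    """One example manipulation before resynthesis:
    move every formant's center frequency by a ratio."""
    return Frame(frame.amplitude, frame.duration,
                 [Formant(f.center_freq * ratio, f.bandwidth, f.level)
                  for f in frame.formants])
```

The compositional power comes from editing such frames (stretching durations, shifting frequencies, rescaling amplitudes) before feeding them back for resynthesis.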
Linear Predictive Coding (LPC) is similar in some respects to formant tracking. The speech source is digitized and broken into frames, just as in formant tracking. Where in formant tracking the frame is defined as characteristics of band-pass filters, in LPC the resonances are described by the coefficients for an all-pole filter. The value of a given sample is predicted by taking a linear combination of the previous N samples. For each frame, the coefficients are determined that will give the best prediction throughout the frame. This is determined mathematically, with the leftover error called the residual.
If the residual is used as an excitation source for the all-pole filter (undoing the all-zero inverse filter that produced it), the result is the original sound source. As in formant tracking, however, the effectiveness of LPC as a compositional tool emerges from its ability to modify parameters of each frame before resynthesis, including durational, amplitude, and frequency elements, and many more. In addition, LPC allows analysis of singing voices and the voices of women and children. Paul Lansky's Idle Chatter, from 1985, uses the LPC technique, as well as granular synthesis and stochastic compositional processes. Charles Dodge used this technique in his Any Resemblance is Purely Coincidental, which also used a source-separation technique on a 1907 recording of "Vesti la giubba," from Leoncavallo's Pagliacci, to separate Enrico Caruso's voice from the instrumental accompaniment. Dodge then manipulated the voice, creating new contours and often multiple copies of the voice singing in counterpoint with one another.
These descriptions of FM, formant tracking and LPC leave open many questions. If you are interested in the techniques of speech synthesis, the best place to start would be Charles Dodge's book Computer Music, where he gives a basic but technical overview of these and other techniques and provides ample references. (Apparently the third edition of Computer Music eliminated some discussion of Dodge's Any Resemblance and Speech Songs; for information specifically on those pieces, consult the previous edition.) Other places to look would be Max Mathews' Current Directions in Computer Music Research, and Curtis Roads' Foundations of Computer Music and The Computer Music Tutorial.
Pieces listened to in class: Max Mathews' Bicycle Built for Two, John Chowning's Phoné, Charles Dodge's Any Resemblance is Purely Coincidental, Chris Chafe's In a Word, and Paul Lansky's As it Grew Dark.
We screened The Battleship Potemkin (available in the Media center in Green).
We've talked about the French/German stylistic differences; now we try to establish whether or not there are differences between the East and West coasts. We listened to some pieces by Jim Tenney, Milton Babbitt's Philomel, and Morton Subotnick's The Wild Bull.
We talked about the influence of popular music on concert music, and listened to the Kronos Quartet's version of Jimi Hendrix's Purple Haze, Michael Daugherty's Elvis-inspired piece, and pieces by George Crumb, including parts of Ancient Voices of Children and Black Angels.