Final Project: glacial sky

This is Michael Svolos's submission for Music 220b's final project.

Files:

Glacial Sky (audio + video, 3.9 GB)
Glacial Sky (audio + video, 428 MB)
Audio only
gran.ck
run.ck
record_stereo.ck

My final project is called “glacial sky”. I recorded myself playing individual held notes on tuba, tenor sax, clarinet, and piccolo, ranging from E1 to Eb7. I then took my granular synthesis code from hw3, stripped it down a bit, and made it into its own class using Chubgraph. I had to strip it down because I intend to play many or all of these notes at once, which uses a lot of computer resources (and necessitates running all my code with --silent and recording it). I am using granular synthesis techniques instead of just playing the sound files because I want to be able to hold one note for a long time, which granular synthesis does really well. I used these notes to create a piece that explores the timbral and frequency range of these four instruments in a way that a physical ensemble couldn’t.

My piece is organized into four “gestures”, punctuated by brief solo-like passages from each instrument. Each gesture mediates tension in a different way: one, as heard in my milestone, stacks the instruments on intervals that expand throughout the gesture; one shifts from stacked fifths to fifths in octaves; one swaps which instruments are playing a set of notes; and one plays all the notes at once.
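The actual gran.ck isn’t reproduced here, but the idea of wrapping a granular sampler in a Chubgraph so it can be instantiated once per note looks roughly like this minimal sketch (the class name Gran matches the write-up; the grain lengths, envelope times, and method names are illustrative assumptions, not the real code):

```chuck
// minimal sketch: a granular sampler as a reusable Chubgraph class
class Gran extends Chubgraph
{
    // one sample file, enveloped per grain, routed to this Chubgraph's output
    SndBuf buf => ADSR env => outlet;

    // assumed grain parameters
    50::ms => dur grainLen;
    5::ms => dur ramp;

    fun void load( string file )
    {
        file => buf.read;
    }

    // fire grains forever from random positions in the file,
    // so a single held note can sustain indefinitely
    fun void play()
    {
        env.set( ramp, ramp, 1.0, ramp );
        while( true )
        {
            Math.random2( 0, buf.samples() - ((grainLen / samp) $ int) ) => buf.pos;
            env.keyOn();
            grainLen - ramp => now;
            env.keyOff();
            ramp => now;
        }
    }
}
```

An instance per recorded note (e.g. `Gran g => NRev r => dac; g.load("tuba_E1.wav"); spork ~ g.play();`) is the kind of usage the write-up implies; stacking dozens of these is what made rendering so expensive.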

As I thought about how to create the visual aspect of this project, I thought about narrative and the idea (I believe the term is “absolute music”, as opposed to program music) that some music tells a very distinct story, some music can be vaguely correlated to a story, and some music tells no story at all and exists independently of other “meaning”. They talk about this in Fantasia. I felt that this piece existed in that third realm, of music that doesn’t tell a story, so I didn’t want the visuals I’d play with the audio to add much of a narrative to the piece; instead, they should augment or guide the audience through the listening experience. I was also interested in incorporating existing footage into the visuals instead of synthesizing my own, much as I used recorded samples instead of computer-based synthesis. So I turned to a Creative Commons video archive and downloaded clips that I thought had the potential to match the mood or affect of each section of the piece. As a nod to the piece’s title, I did use several videos of the sky or of clouds, which I like a lot. I’m pleased with how the visuals ended up; I think that, in the same abstract way the music does, these videos carry values of (or changes in) tension that can be mapped successfully onto the same moments in the piece.

Technically, this piece was developed in a two-pronged way, since I was making changes to the Chubgraph class and composing parts of the piece at the same time. The two biggest challenges were figuring out panning, which I solved by writing each channel to its own file and combining the two in Audacity, and dealing with the time it took to render each sound file. Since there were so many instances of the Gran class running at once, as well as some heavy reverb, I had to wait 5 to 10 minutes before hearing a new change. In a way, this made each change feel weightier, since I had to invest time into hearing its result.
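The two-file panning workaround can be sketched in a few lines of ChucK. This is a guess at the shape of record_stereo.ck, not the actual file; the output filenames are assumptions. Each dac channel is tapped with its own WvOut, producing two mono WAV files that are later stacked into a stereo track in Audacity:

```chuck
// sketch: record each dac channel to its own mono file
// (running with --silent lets this render faster than real time)
dac.chan(0) => WvOut wLeft  => blackhole;
dac.chan(1) => WvOut wRight => blackhole;
"left"  => wLeft.wavFilename;
"right" => wRight.wavFilename;

// keep this shred alive so recording continues until the VM exits
while( true ) 1::second => now;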

To run the code, place gran.ck, run.ck, record_stereo.ck, and the four files of samples (tuba, sax, clarinet, piccolo) in the same directory and run “chuck record_stereo.ck gran.ck run.ck --silent”.