Grant Bishko

Music 256a: Reading Response Chapter 4

10/17/2021

Programmability & Sound Design

I am so glad I read this chapter prior to making the narrative component of my audio visualizer. I feel so inspired to write ChucK code that sounds interesting and will make my visualizer do cool things. The idea that we can program with time is a very new concept to me, and I find it both terrifying and exciting. What does this mean exactly? Can I code stuff into the future? How far into the future? If music plays hours from now and no one is there to hear it, is it music? Hmm…
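To make it concrete for myself, here is a tiny sketch of what I think "programming with time" means in ChucK (just my own first attempt, not anything copied from the chapter):

    // a sine oscillator patched to the speakers
    SinOsc s => dac;
    440 => s.freq;      // start at A440
    1::second => now;   // advance time: let one second of audio pass
    880 => s.freq;      // then jump up an octave
    1::second => now;   // ...and hold that for another second

The wild part to me is that advancing now is literally how the sound gets generated, so the code itself reads like a timeline.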

As a composer, I found this a very interesting chapter. I come from a background in classical music, and I have taken composition lessons through high school and my college career so far. I have written music by hand (pencil and paper) and in notation software (I'm a Sibelius user; Finale is horrible). The concept of composing music through code is very new to me, but it excites me. Since I only learned to code recently (freshman year), I never really considered it a way to write my own music. But I am very excited to write my own piece in ChucK for my audio visualizer! We'll see how it goes. Fingers crossed. I worry that the craft of composition will get lost as I explore this new medium, but I think I will get the hang of it with practice.

In Chapter 4, Ge talks about the THX Deep Note, and I found that so amusing. My a cappella group here at Stanford (Fleet Street) actually made its own version of the Deep Note with our voices years before I joined, and it's hilarious. It sounds like a bunch of people going “EEEEEOOUUUUUUUUAAAAAAAH” and ending in beautiful locked chords. It's amazing. It was cool that Ge talked through how that effect works in terms of code!
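As far as I understand it (this is just my rough sketch of the idea, not Ge's actual code), the trick is a bunch of oscillators starting at random frequencies and all gliding toward one locked chord:

    // eight sawtooth voices, random starting pitches, shared target chord
    SawOsc voices[8];
    float targets[8];
    for (0 => int i; i < 8; i++) {
        voices[i] => dac;
        0.05 => voices[i].gain;
        Math.random2f(200.0, 400.0) => voices[i].freq;  // start in a cluster
        55.0 * (i + 1) => targets[i];                   // land on harmonics of 55 Hz
    }
    // nudge each voice a little closer to its target every 10 ms for ~3 seconds
    for (0 => int step; step < 300; step++) {
        for (0 => int i; i < 8; i++) {
            voices[i].freq() + (targets[i] - voices[i].freq()) * 0.02 => voices[i].freq;
        }
        10::ms => now;
    }

Basically the Fleet Street version, but with SawOscs instead of singers.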

Finally, the comb filter is very confusing to me, but it seems useful, and it's something I might want to use in future compositions as I explore writing music in this new medium. I don't quite understand how it can add tones to train noise, or how it can be used to analyze sound. Much like with the Fourier transform, I am confused about how we can manipulate sound as arrays of information and get new sounds. Maybe I'm missing an “intro to sound” class, but I still don't understand what the array represents when it comes from raw audio input from a microphone. Maybe this is something I should study on my own time. Whoever is reading this, do you have any suggestions on ways to brush up on the technicalities of turning sound input into spectra/readable information? Appreciate it :)
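In the meantime, here is my best guess at a minimal comb filter I could try on the mic input (assuming DelayL is the right UGen for this; I think the added tone comes from the delayed copy reinforcing some frequencies and cancelling others):

    // mix the live mic signal with a slightly delayed copy of itself
    adc => Gain dry => dac;     // direct path from the microphone
    adc => DelayL wet => dac;   // delayed path
    10::ms => wet.max;          // allocate delay memory
    5::ms => wet.delay;         // with a 5 ms delay, peaks land about every 200 Hz
    0.5 => dry.gain;
    0.5 => wet.gain;
    while (true) { 1::second => now; }  // keep the shred running

My current understanding is that the array from the microphone is just the waveform's amplitude measured thousands of times per second, but I would still love a pointer to a proper explanation.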

Cool chapter this week. Excited to be done with my audio visualizer soon. Almost there!