Emily Saletan RR4

Principle 4.2 of Artful Design states that sound is an art of change over time. The ChucK language was designed as a tool to program sounds by mapping them directly to time. I was struck by how this chapter emphasized that tools craft how we think.
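To make that time-mapping concrete: in ChucK, time only advances when the program explicitly says so. A minimal sketch (the standard hello-sine pattern from the language's documentation):

```chuck
// patch a sine oscillator to the speaker
SinOsc s => dac;
// set its pitch to A440
440 => s.freq;
// advance time by exactly one second; the tone sounds for precisely this long
1::second => now;
```

Nothing is heard until a duration is "chucked" to now; the passage of time is itself a statement the composer writes.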

I would like to tie this to the note on page 200 that programmability begets more programmability.

In computer science, in math, in history, and so forth, we continue building where other designers, engineers, and writers left off. We expand on the existing literature and then we simplify. We build out and complicate, and then we make it more efficient again. The process of iteration accordions out and in again as we use tools to explore new complexities, then develop further tools that handle these complexities under the surface.

To be fair, horn players don’t have to calculate the exact physics of their breath moving through the instrument, nor do guitar players need to know the precise distance between frets to keep the scale in tune. But there are also more restrictions on what’s possible to play in the physical world, so that level of mathematical knowledge wouldn’t necessarily be useful. However, if these musicians wanted to change the length of a horn, or add extra frets to a guitar, understanding the inner workings would be essential. Computer music has been framed as a new realm for exploring previously impossible sounds: the expansion of the instrument. This accordioning of building out, then wrapping up and tucking in, is also a push-pull that molds how users think. Thanks to automation, we are now able to accomplish more with less time and effort. The monumental amounts of code behind Paul Lansky’s compositions and the original THX effect can be simulated much more easily, which changes the process of composition.

Composing in this form means notating instructions in a way that the computer will understand. But to the human eye, code can feel indecipherable, whether it’s someone else’s or even your own from too long ago. I’m curious how producers of computer music think about their compositions and visualize what they want to happen.

I am reminded of a message from piano, choir, and band instructors alike: those playing music often think of the score in a “vertical” manner, lining up each chord and proceeding note by note, beat by beat. This is a reflection of reading modern staff notation. Meanwhile, the audience tends to hear it somewhat “horizontally,” perceiving it more through flow and the dynamic shape of its phrases.

Because many computer music pieces could not be written down in traditional notation, I am interested in what these lines of code reflect to their composers. When there are so many moving parts being calculated, what does the back-and-forth translation between mental aural image and code look like?