Reading Response #4:
to Artful Design • Chapter 4: Programmability & Sound Design

Noah B.
Music 256A

The ideas that really hooked me in this chapter were the attributes of code that make it a unique medium – code being simultaneously the tool and the canvas, the blueprint and the product, the score and the realization. Having less facility with coding as a go-to medium, I have always seen it as something more like a laborious necessity: pure function, sometimes painful, but mostly a means to an end.

With the tools we use in music, so often it's less about the functionally "best" or most efficient option, and more about what new paths of interaction and expression it opens to the performer. This is the notion of affordance, and it extends to other musical interactions as well. Instruments, of course, lend themselves to certain types of sounds and not others – you can't easily gliss on a piano, for example. Analogue vs. digital often comes down more to the feeling of interaction than to an objective measurement of quality. Analogue equipment – for listening or for producing sound – often just 'feels' different to use, because it functions differently, has different quirks, and so on.

With graphical elements, for example in programs like Max/MSP, it's a little easier to see how function and design go hand in hand. Depending on the UI elements and how they're arranged, the musician can interact with the program in totally novel ways. With physical interfaces connected to the computer, it's even more apparent how affordance allows for certain types of interactions and limits others. With 'pure code', the affordances and design choices can sometimes be harder to see – but this chapter opened my eyes to some of the ways that code itself can represent a design philosophy.

A second idea, again fundamental, was that of programming sound as programming time. I had learned that ChucK is a "strongly-timed" programming language, but hadn't quite understood what that entailed. Seeing that it takes the audio stream as its basic timing mechanism makes tons of sense. I'm still curious how a computer keeps track of time precisely enough to clock those audio samples. How precise are the 44,100 measurements per second? Are they always exactly 1/44,100 of a second apart, or is there some variation? How novel can we get with 'programming time'? What philosophies of time are baked into a design of time like the one in ChucK? And how do we account for non-measurable perceptions of time passing, and the complex ways that music affects our sense of time?
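To make the "strongly-timed" idea concrete, here is a minimal sketch in the basic ChucK idiom of chucking durations to the special variable `now` (the oscillator and the particular durations are just illustrative choices):

```chuck
// connect a sine oscillator to the audio output
SinOsc s => dac;
440 => s.freq;

// advancing 'now' is what makes sound happen:
// let exactly one second of audio compute and play
1::second => now;

// time can also be advanced at sample resolution;
// at 44.1 kHz, one samp is 1/44100 of a second
1::samp => now;
```

What struck me is that time only moves when the program explicitly advances it, which is how the language ties program time to the audio stream itself rather than to a wall clock.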