Chapter 5

I found this chapter pretty dense to get through and absorb, given all of the detailed principles on interfacing interaction input with audio output, but I think they will be very useful and well timed for the upcoming audio synthesizer project. I am curious, in the example involving Lisa and the modified accordion, how expert technique in that case differs from simply emulating an existing instrument. The form factor and playing method seemed very reminiscent of an actual accordion, yet the output enabled by small modifications to the form generated a different sound. Without this textbook's breakdown of all the instrument's detailed intricacies, I would have perceived it as an electronic version of an accordion. How does a musician/designer balance retaining the familiar with avoiding leaning too hard on existing conventions?

On a more general note, I thought it was interesting that Perry brought up holistically evaluating actually playing these newly created musical instruments. I have spent the past year working with the tools of usability testing in the context of human-centered design, and it has been a constant observation how differently research studies are run in academic versus industry contexts and how little crossover there seems to be in shared conventions or processes. The chapter starts to touch on the potential for holistically evaluating playing each instrument but doesn't go any further, so I am curious whether any usability studies have been run with real users. If any have, was qualitative data collected and then used to iterate on the instruments? Many of the instrument examples in the chapter seem so intricate that I want to better understand how the learning curve for a newly designed instrument differs across the instrument's designer, a computer music enthusiast, and an everyday casual enjoyer of music.