How do the devices we use determine the style of interaction? A series of exercises introduces sensors, circuits, microcontrollers, communication, and music synthesis. The focus will be on human performance issues such as haptics, latency, bandwidth, etc. The project involves inventing and implementing an innovative sonic interaction. Performing music can be an "expressive" activity, and we hope to bring the lessons learned to the broader field of "physical interaction design" for future computer-based products.
- Atmel AVR Microcontroller
- AVRmini development board
- AVR Programming in C with avr-gcc and avrlib
- Sensors, circuits, electronics
- Sound synthesis and processing with Pd
- Inter-Device Communication with serial, USB, OSC
- Makes sound
- Ideally uses some of the tools introduced in the labs
"With the possibility of making music with tape, composers and others began to be concerned with the place of this music in the great trajectory of Western European art music composition: how did this new music fit in with the masterpieces of the past?"
--Taylor, Strange Sounds: Music, Technology & Culture
We all know what new technology is, but what is “new” about new music?
- repertoire of sounds (“great opening up of music to all sounds” – Chadabe)
- rhythmic, melodic, harmonic complexity
- algorithms, processes
- aesthetic movements
- cultural roles
- performance scenarios
How do these interplay with technology?
- Technology is an enabler of many of these, historically, currently, and especially with computers.
In many ways, the answer to Taylor’s question is that it didn’t: tape music and pre-rendered electronic music have not exactly been accepted as extensions of the tradition of Western music. They have become a genre of their own.
But interactivity brought new promise as the next stage along the trajectory. I argue that this is because it enables new sonic and timbral, rhythmic, melodic, and harmonic complexity; new algorithmic and aesthetic choices; perhaps even new forms; while also allowing us to preserve cultural roles and performance scenarios. I also argue that this view is ludicrous, and that it denies possibly the greatest richness interactive technology has to offer.
Interactive technology also raises many questions. Some of these, of course, depend on the paradigm or performance scenario.
- Skill: who can use it?
- Learning: how long will it take to learn?
- Expression: can the user develop a model of the sound space and reliably achieve a desired sound? Is this important?
- Composition / Notation: can you compose for it? Can you give someone else a piece to play?
- Improvisation: does it offer repeatable yet instant response? Who is in control?
- Intuitiveness: is the mapping clear for the performer? for the audience?