MUSIC 356: Music & AI
Wekinesis: more explorations in gesture mapping with Wekinator
I built upon my last project, a hand-gesture-controlled playback remixer that warped and reversed an audio file in response to different hand movements. This time I wanted to work with symbolic information instead of audio signal information. Eventually I would like to use Wekinator to map gestural controls onto an algorithmic composition played on the Disklavier, but as a preliminary step I started with the task of having gesture drive real-time generation rather than playback.

I made some simple classes for Note and Melody objects so that I could enter more complicated streams of music in more traditional symbolic notation. I have been learning about notation systems like SCORE, MusicXML, and Humdrum, and I borrowed some of their representational logic for notes and rhythms. This let me encode one of Bach's two-part inventions much more quickly than with raw MIDI note numbers and ChucK durations alone.

I mapped gestural controls to tempo, loudness, LPF cutoff frequency, chorus depth, and the ADSR envelope. This felt more directly expressive than my last project, which was more of a live-remix tool. This felt more like an instrument that focuses just on the expressive dimensions of a performance--the notes and rhythms are predetermined, but the variations in speed, dynamics, and timbre are decided spontaneously by the performer. My next step will be to have more of an improvisational/compositional logic underpin an algorithmic composition, so that the performer has an even greater role in the creative decision making.

The use of AI here is hidden--it's just an intermediate step responsible for the mappings. Is this kind of musical piece "generative"? AI here acts more like a passive translator than a "creative agent".
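To give a concrete picture of the symbolic representation: my actual classes are written in ChucK, but the idea can be sketched in Python (the class names, pitch-spelling scheme, and beat values here are illustrative, not my exact code):

```python
# Sketch of the Note/Melody representation idea. Python for illustration;
# the real classes are in ChucK, and these names are illustrative.

# Semitone offsets of the natural pitch classes within an octave.
PITCH_CLASSES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pitch_to_midi(name):
    """Convert a pitch name like 'C4', 'F#3', or 'Eb5' to a MIDI note number."""
    letter, rest = name[0], name[1:]
    semitone = PITCH_CLASSES[letter]
    while rest and rest[0] in "#b":          # apply accidentals
        semitone += 1 if rest[0] == "#" else -1
        rest = rest[1:]
    octave = int(rest)
    return 12 * (octave + 1) + semitone      # MIDI convention: C4 = 60

class Note:
    """A pitch (or a rest, pitch=None) plus a duration in beats."""
    def __init__(self, pitch, beats):
        self.midi = None if pitch is None else pitch_to_midi(pitch)
        self.beats = beats

class Melody:
    """An ordered sequence of Notes; tempo is applied only at playback time,
    which is what lets a gesture warp it live."""
    def __init__(self, notes):
        self.notes = list(notes)

    def duration_seconds(self, bpm):
        """Total length of the melody at a given tempo."""
        return sum(n.beats for n in self.notes) * 60.0 / bpm

# Opening gesture of Bach's Invention No. 1 in C major (upper voice):
# a sixteenth rest, then seven sixteenth notes.
invention = Melody([
    Note(None, 0.25),
    Note("C4", 0.25), Note("D4", 0.25), Note("E4", 0.25), Note("F4", 0.25),
    Note("D4", 0.25), Note("E4", 0.25), Note("C4", 0.25),
])
```

Writing `Note("C4", 0.25)` instead of a bare MIDI number and a hard-coded ChucK duration is the whole time-saver: pitch spelling and rhythm are entered the way they appear in the score, and tempo stays a free parameter for the gesture to control.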
But it does allow for movement-driven sound shaping that would otherwise be a long, computationally challenging process--perhaps doing it "the long way" would have stifled the creative flow I experienced while trying out many different kinds of gestures, input mappings, and training parameters through Wekinator's rapid iteration. In any case, I feel like I'm getting closer and closer to a truly satisfying use of AI in music performance--something that wouldn't be possible without AI, but that still clearly centers the human spirit. For my next trick... I will attempt to play the piano telekinetically using this same general framework.
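For reference, the "passive translator" role of the model works roughly like this: Wekinator's continuous outputs arrive as floats (by default in the range 0 to 1), and each one gets scaled into a musically useful range before reaching the synthesis parameters. A sketch of that scaling step, in Python for illustration--the specific ranges below are examples, not my exact settings:

```python
# Scaling Wekinator-style continuous outputs (each in [0, 1]) into
# parameter ranges. Ranges here are illustrative examples.

def lerp(x, lo, hi):
    """Linear scale x in [0, 1] to [lo, hi] -- fine for tempo and loudness."""
    return lo + x * (hi - lo)

def expmap(x, lo, hi):
    """Exponential scale x in [0, 1] to [lo, hi] -- perceptually more even
    for frequency-like parameters such as an LPF cutoff."""
    return lo * (hi / lo) ** x

def map_outputs(w):
    """w: a list of model outputs, each in [0, 1], one per parameter."""
    return {
        "tempo_bpm":      lerp(w[0], 40.0, 208.0),
        "gain":           lerp(w[1], 0.0, 1.0),
        "lpf_cutoff_hz":  expmap(w[2], 200.0, 8000.0),
        "chorus_depth":   lerp(w[3], 0.0, 1.0),
        "adsr_attack_s":  expmap(w[4], 0.005, 0.5),
    }
```

The exponential mapping for the cutoff is the one perceptual nicety: equal hand movements then produce roughly equal changes in pitch-height of the filter, rather than bunching all the audible action at the low end of the gesture.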