This is an audiovisual piece / expressive tool that transforms a generative animation through musical input. It takes MIDI output from the Yamaha Disklavier and sends it to ChucK, where the data is processed, smoothed, and packaged into OSC messages containing information about pitch classes, individual notes, note velocities, and temporal decay. These OSC messages are sent to Wekinator as a feature vector of length 100 and fed into a neural network with 3 outputs. I trained each output on a different musical dimension, deliberately choosing abstract, subjective characteristics of music that would otherwise be difficult to parameterize: "tension", "brightness", and "density". I trained each dimension separately by improvising naturalistic passages of music that varied along that dimension. This generated a fairly large amount of training data, around 12,000 examples per dimension. After training was complete, I sent the model's output to TouchDesigner via OSC. In TouchDesigner, I built a generative art system inspired by a couple of excellent creators on YouTube (see here). I applied some additional mathematical transformations to the Wekinator output and mapped the results to different parameters of the visual system, including factors affecting shape, color, and structure. I spent a long time fine-tuning these parameters and adding training data to refine the model's sensitivity to subtle expressive changes in the musical input.
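To give a sense of how MIDI data might be smoothed and packaged into a 100-length feature vector with temporal decay, here is a minimal Python sketch. The layout (12 pitch-class activations plus 88 per-key decaying velocities) is my illustrative assumption, not necessarily the exact scheme used in the ChucK code:

```python
import math

class NoteFeatures:
    """Hypothetical 100-length feature vector: 12 pitch-class bins
    plus 88 per-key velocities with exponential temporal decay.
    (Assumed layout for illustration; the actual ChucK feature
    extraction may differ.)"""

    def __init__(self, half_life=0.5):
        # decay rate such that a key's energy halves every `half_life` seconds
        self.rate = math.log(2) / half_life
        self.keys = [0.0] * 88  # piano keys A0 (MIDI 21) .. C8 (MIDI 108)

    def note_on(self, midi_note, velocity):
        # store normalized velocity for the struck key
        if 21 <= midi_note <= 108:
            self.keys[midi_note - 21] = velocity / 127.0

    def step(self, dt):
        # temporal decay: each key's energy fades between frames
        factor = math.exp(-self.rate * dt)
        self.keys = [k * factor for k in self.keys]

    def vector(self):
        # aggregate per-key energies into 12 pitch-class bins,
        # then concatenate: 12 + 88 = 100 features
        pcs = [0.0] * 12
        for i, v in enumerate(self.keys):
            pcs[(i + 21) % 12] += v
        return pcs + self.keys
```

A frame loop would call `note_on` on incoming MIDI events, `step` once per frame, and ship `vector()` out as a single OSC message to Wekinator.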
This was not what I initially intended to do. I originally wanted to approach this idea of interactive ML-based generative art from the other direction: starting with some non-musical input, like gesture or drawing, and transforming it into music via ML-based mapping. However, I wasn't really excited about the directions I was going. I was reminded of discussions from this class, in particular when a classmate asked some variation of "how come we are making AI do all the fun things like art and music, and we do the hard stuff?" I realized that, as a musician, the creative expression that I find joy in is...through music. I have spent over two decades cultivating the skills and experience needed to enter a "flow state" and freely and authentically express myself through my instrument. Why make an AI do that part for me? So I flipped my model around--I would supply the musical data, and my AI system would help bring out new artistic dimensions from my music. I wanted a tool that could react to subtle shifts in mood and color within my improvisation on the piano. Most of all, I wanted to build a tool for artistic expression that couldn't exist without the underlying technology. I was really happy with how this piece turned out, and was surprised at how simple it was, in a way, to design and implement. If I had not used AI, it would have been painstakingly difficult to come up with some complex computational model to analyze "tension", "brightness", and "density" as musical parameters--I don't even know where I would begin. But with AI, I could have fun just playing solo improv like I normally do, and let my AI system learn what makes me feel those qualities in the music. This felt like a perfect place for AI to share the load. Everything else, both my real-time musical input and the visual art being produced, was fundamentally a product of human artistic decisions.
However, the computational "guts" of the system were vastly streamlined by using AI to map these ephemeral qualities to single numbers that I could use to nudge the visual output in an easily iterable way.
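As a rough illustration of what "nudging the visual output" with those single numbers can look like, here is a small Python sketch of a one-pole smoother and a range mapper. The function names, parameter ranges, and the hue example are hypothetical; in the actual piece these transformations were done inside TouchDesigner:

```python
def smooth(prev, target, alpha=0.1):
    # one-pole low-pass filter: eases jumpy model output
    # into gradual visual change (alpha chosen arbitrarily)
    return prev + alpha * (target - prev)

def map_range(x, lo=0.0, hi=1.0, out_lo=0.0, out_hi=360.0):
    # clamp, then rescale, e.g. a "brightness" value in 0..1
    # mapped to a hue angle in degrees (illustrative mapping)
    x = max(lo, min(hi, x))
    return out_lo + (x - lo) / (hi - lo) * (out_hi - out_lo)
```

Each frame, a Wekinator output would be passed through `smooth` and then `map_range` before driving a shape, color, or structure parameter.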
Overall, this was an incredibly rewarding experience and, I feel, a fitting final project for this class. In designing, failing, improving, and finally implementing this project, I was strongly influenced by the ideas brought up by everyone in this class, teachers and classmates alike. This was one of the few times I have felt truly creative/expressive in my coding--it let me experience some of that familiar "creative flow" that I normally only feel when playing music I deeply connect with. It makes me excited for the future--I have proven to myself, in a small way, that it is possible to weave AI into music and art without compromising the humanity at the core.
Thank you to Ge Wang, Yikai Lee, my classmates in 356 and CCRMA, and Bileam Tschepe on YouTube. Extra shout-out to all the tech companies out there working with AI and art who are doing it WRONG, who showed me what not to do and inspired me to look for something better.