by Andrew Lee
Music 356: Music and AI
Assignment Description
Here's the whiny laptop that I made for homework 3. This model takes in camera input and produces window-swiping-like sounds in response to motion. Given different movements, it responds with quickly-changing sounds that mimic a computer screaming, whining, or being sad. Basically, this model personifies your laptop.
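The core idea can be sketched roughly like this; note that this is a toy illustration of motion-to-sound mapping, not the actual Wekinator model. The functions `motion_energy` and `whine_pitch`, and the flat-list frame format, are all invented for this sketch.

```python
# Toy sketch: motion energy between camera frames drives a "whine" pitch.
# Frames here are flat lists of grayscale pixel values (0-255) standing in
# for real camera input.

def motion_energy(prev_frame, frame):
    """Mean absolute pixel difference between two successive frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    return sum(diffs) / len(diffs)

def whine_pitch(energy, base_hz=220.0, span_hz=660.0, max_energy=255.0):
    """Map motion energy to a pitch: more movement -> higher whine."""
    return base_hz + span_hz * min(energy / max_energy, 1.0)

# A still frame versus a frame with some movement in it:
prev = [10, 10, 10, 10]
curr = [10, 60, 110, 10]
e = motion_energy(prev, curr)        # (0 + 50 + 100 + 0) / 4 = 37.5
print(round(whine_pitch(e), 1))      # quicker movement -> higher pitch
```

In the real patch, a trained model (here Wekinator) learns this kind of mapping from examples instead of using a hand-written formula, which is what makes the responses feel expressive rather than mechanical.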
My focus in Part B is to compose a musical piece collaboratively with an audio mosaic tool. To do this, I trained the mosaic on a short, simple excerpt of chords. Before running the program, I preset the temporal settings of the piece. The piece begins with me playing a phrase on piano; then the mosaic generates music in response to what I just played; then it's my turn again, and so on. Ultimately, this is a back-and-forth improvised compositional process that lets the two of us write a piece together. To make things sound decent, I had to add a lot of restrictions to the mosaic, so it currently produces only a narrow set of sounds. If I were to develop this project further, I would expand its capabilities to work well in different keys and styles and to generate melodies with more variability.
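The turn-taking structure described above can be sketched as follows. This is a hedged sketch, not the actual mosaic code (which comes from the Part B source); the names `mosaic_respond` and `duet`, and the nearest-grain matching rule, are hypothetical stand-ins.

```python
# Sketch of the back-and-forth duet: human phrases alternate with mosaic
# responses.  The "mosaic" here is a toy that answers each input pitch with
# the nearest pitch from its trained grain set.

def mosaic_respond(phrase, grain_pitches):
    """Answer each pitch in the phrase with the closest trained grain."""
    return [min(grain_pitches, key=lambda g: abs(g - p)) for p in phrase]

def duet(human_phrases, grain_pitches):
    """Alternate human phrases with mosaic responses, swap-challenge style."""
    piece = []
    for phrase in human_phrases:                  # my turn: a piano phrase
        piece.append(("human", phrase))
        piece.append(("mosaic", mosaic_respond(phrase, grain_pitches)))
    return piece

# Trained on a narrow chord excerpt -> a narrow set of available grains.
grains = [60, 64, 67]                             # MIDI pitches, C major triad
print(duet([[62, 65, 69]], grains))
```

The toy also shows why the restrictions matter: with only a triad's worth of grains, every response collapses onto those few pitches, which mirrors the "narrow set of sounds" limitation mentioned above.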
Here's a baroque-sounding piece that I wrote with the mosaic.
Wekinator: Acknowledgements to Rebecca Fiebrink for providing the Wekinator application used in Part A.
Code: Acknowledgements to Ge and Yikai for providing the source code used in Part B.
So far, homework 2 has fascinated me the most, so my extension project is based on the audio mosaic. Last time, I used my mosaic as a "tool" to help me create music. Although my music statement sounded like a duet between me and the mosaic, I had a lot of control over what sounds came out of it. In other words, it was more like me playing two instruments at once: piano and mosaic. So I wanted to try something different. Because we discussed so much in class about AI as an oracle versus a tool, this time I tried to view my audio mosaic as something in between: a partner. There's nothing more interactive than collaboration, so I came up with the idea of collaborative music composition. Inspired by the switch-painting art challenge (two artists swap their paintings every 5 minutes until both paintings are complete), I wanted to try a music version of this challenge with my mosaic tool, switching roles on who's "composing" the piece every phrase.
To be honest, I'm not too pleased with the quality of music made by my mosaic. I had to make a specific database file for it to train on, and I'd have to play specific styles of music for it to create something that fits well. It's also not really able to create a "melody" right now. But I think the underlying concept has real potential. As a composer, I can imagine seeing a feature like this in notation software, perhaps called the "Swap Challenge Music-Writing Mode," and being completely enthralled. I wonder what new types of music this could lead to. How would something like this inspire new music-making processes?
Anyhow, I can't believe this is the last assignment already. This class was genuinely mind-blowing. Beyond all the cool music-AI stuff we've made, I never imagined an AI class would let me think so deeply and profoundly about the social and cultural implications of technology. For once, I got to critically question "how" things are: how they're taught to us, how the industry approaches AI, and how society treats it. I really wish there were more CS classes that encouraged me, and the rest of us, to ponder these things. Because after all, this is about putting our future on the line.