by Andrew Lee
Music 356: Music and AI
Assignment Description
This model uses the xy-square input program provided by Wekinator to generate sound from four different ChucK instruments. Each corner of the xy-plane corresponds to one of the four main instruments; for example, at the bottom-right corner you'll hear a flute much more clearly than the other three. This model creates a diverse set of cool sounds based on how you move the square!
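Wekinator learns this position-to-sound mapping from training examples, but a hand-coded sketch of the kind of corner-based blend the trained model approximates might look like this. It assumes coordinates normalized to [0, 1], and the instrument names and corner assignments are illustrative placeholders (only the flute/bottom-right pairing comes from the description above):

```python
import math

# Hypothetical corner assignments; (0, 0) is top-left, (1, 1) is bottom-right.
CORNERS = {
    "violin": (0.0, 0.0),
    "clarinet": (1.0, 0.0),
    "horn": (0.0, 1.0),
    "flute": (1.0, 1.0),  # bottom-right, per the description
}

def blend_gains(x, y):
    """Return a gain per instrument: 1.0 at its own corner, fading with distance."""
    max_dist = math.sqrt(2.0)  # diagonal of the unit square
    gains = {name: 1.0 - math.dist((x, y), corner) / max_dist
             for name, corner in CORNERS.items()}
    # Normalize so the gains sum to 1 (a simple mixing choice).
    total = sum(gains.values())
    return {name: g / total for name, g in gains.items()}
```

At (1, 1) the flute dominates the mix, while positions in the middle of the square blend all four instruments more evenly, which is where the "mechanical, robotic" composite sounds come from.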
This model takes in camera input and produces window-swiping-like sounds in response to motion. Depending on how you move, the model responds with quickly-changing audio that mimics a computer screaming, whining, or being sad. Basically, this model personifies your laptop.
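In Wekinator the motion-to-sound mapping is learned rather than written by hand, but a rough sketch of the underlying idea (more motion between frames, higher "whine") could look like this. It assumes grayscale webcam frames as NumPy arrays; the pitch range is an arbitrary illustrative choice:

```python
import numpy as np

def motion_amount(prev, curr):
    """Mean absolute pixel difference between two grayscale frames (0..255)."""
    return float(np.mean(np.abs(curr.astype(np.int16) - prev.astype(np.int16))))

def whine_pitch(motion, base_hz=220.0, span_hz=880.0):
    """Map a motion amount (0..255) to a frequency: fast swipes -> higher whine."""
    return base_hz + span_hz * min(motion / 255.0, 1.0)
```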
For a more practical gadget, I made a music player that detects the user's facial expression to determine what type of music to play. Right now, the model can detect plain, happy, and sad moods, and will play music from corresponding mood-tailored playlists drawn from a provided library of music. For this model, I'd like to especially thank Pixabay for providing the free music used in the model's database of songs.
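Once the classifier emits one of the three mood labels, the player just has to pick a track from the matching playlist. A minimal sketch of that last step, with placeholder file names standing in for the actual library:

```python
import random

# Illustrative playlists; the file names are placeholders, not the real library.
PLAYLISTS = {
    "plain": ["ambient_01.mp3", "ambient_02.mp3"],
    "happy": ["upbeat_01.mp3", "upbeat_02.mp3"],
    "sad": ["mellow_01.mp3", "mellow_02.mp3"],
}

def pick_track(mood):
    """Choose a random track from the playlist matching the detected mood.
    Unrecognized labels fall back to the 'plain' playlist."""
    return random.choice(PLAYLISTS.get(mood, PLAYLISTS["plain"]))
```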
Code: Acknowledgements to Ge and Yikai for providing source input and output code used in this assignment.
Wekinator: Acknowledgements to Rebecca Fiebrink for providing the Wekinator application used in this assignment.
This assignment was definitely a lot of fun. I created some utterly useless things, but I just couldn't stop messing around with them. Why can't we have assignments like this in my other CS classes (rolling eyes emoji)?
I started off with Whiny Laptop, and originally this was not the direction I was going for at all. I first wanted to make a cool "air violin," where my hand's position could control the pitch and velocity of the violin synth being played by the computer. I tried a lot of different ways to feed data using different webcam inputs, but none seemed to work well. So instead I looked at what I had and realized the system had its own perks that I could use, which led to Whiny Laptop. My friend and I got a good laugh out of it, so I'd say it was a success.
Another friend of mine inspired me to make the multi-instrumenter, where you can control different instruments based on how you move your mouse. After training the model, I was pretty fascinated by the range of sounds it could create. Although the instruments I used were all traditional orchestral instrument sounds, putting them together created some very cool mechanical, robotic, digital sounds! My friends and I played around with it, and it was a lot more fun than I'd expected.
I wanted my last system to be something more practical and useful in a real-world context, so I decided to go with a mood-based music player. I was surprised by how effectively Wekinator performed at recognizing my moods: plain, happy, and sad. I only provided one example for each expression, yet it was able to identify them so well after training. This tool really made me realize how even "a little" data can be very powerful.