Music 356 Final: Escape from the Turing Trap

 

Voguing Was Created by Black and Latino LGBT Youth in the '80s, Madonna and AI Will Never Take That Away From Them

 

Description:

https://ccrma.stanford.edu/wiki/356-winter-2023/final

VisionOSC -> ChucK -> Wekinator -> Max/MSP -> Ableton

Above is the final system I created to recognize Vogue (voguing, vogue dancing) poses and let the dancer play the role of the musician as well. Voguing lends itself well to pose recognition because it focuses on lines, symmetry, and precision. Featured in the video is one of my best friends, Fonsi Bonilla. Not only did he teach me how to DJ disco house back in the day; he is a wonderful human of many trades. The video is meant to highlight the functionality and responsiveness of the system and, equally important, to show a human-driven, AI-assisted performance. I am playing live in Maschine Studio on my MK3 while Fonsi performs vocal samples live (via voguing).
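For anyone curious about the plumbing between the first three stages, here is a minimal ChucK sketch of the relay step: receive pose keypoints from VisionOSC over OSC and forward them to Wekinator as a feature vector. Wekinator's defaults (port 6448, address /wek/inputs) are the real ones; the incoming port and address pattern below are placeholders, so check what VisionOSC actually prints to its console.

// relay pose data from VisionOSC into Wekinator
// NOTE: the incoming port/address are placeholders -- check
// VisionOSC's console output for the actual format
OscIn oin;
OscMsg msg;
7000 => oin.port;                          // assumed VisionOSC port
oin.addAddress( "/pose/left_wrist, f f" ); // assumed address: x, y floats

// Wekinator listens on port 6448 for /wek/inputs by default
OscOut xmit;
xmit.dest( "localhost", 6448 );

while( true )
{
    oin => now;                  // wait for an OSC event
    while( oin.recv( msg ) )
    {
        msg.getFloat( 0 ) => float x;
        msg.getFloat( 1 ) => float y;
        // forward as a Wekinator feature vector
        xmit.start( "/wek/inputs" );
        xmit.add( x );
        xmit.add( y );
        xmit.send();
    }
}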

 

Demo: Vogue x Music x AI





Reflection:

 

Originally, in HW 3, I wanted to build a functional pose recognition system, so for my final project I built on my previous project “AI Am Having A Breakdown in Tom’s Diner.” I will have to deploy a similar system for another audiovisual installation at an event in Summer 2023, so I’m pretty happy to get this working. Unlike the previous system, I am using Max/MSP and Ableton for the music rather than ChucK, partly due to time constraints and partly because quantizing was easier this way. I found a lot of value in this project because I suddenly have a working understanding of what OSC and AI make possible together. All in all, the system is limited to voguing itself, but it does create a new interactive space and has some value to offer the practice.
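On the output side, Wekinator sends its predictions to /wek/outputs on port 12000 by default. In the final system, Max/MSP picks this up (a [udpreceive 12000] object feeding a [route] is the usual pattern) and triggers the samples in Ableton; for symmetry, a ChucK sketch of the same listener might look like this, assuming a single classifier output:

// listen for Wekinator's predictions (defaults: /wek/outputs, port 12000)
OscIn oin;
OscMsg msg;
12000 => oin.port;
oin.addAddress( "/wek/outputs, f" ); // one classifier output, assumed

while( true )
{
    oin => now;                  // wait for an OSC event
    while( oin.recv( msg ) )
    {
        msg.getFloat( 0 ) $ int => int pose; // cast class label to int
        <<< "recognized pose:", pose >>>;
        // map pose -> sample trigger here (done in Max/Ableton in the video)
    }
}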

 

In terms of my process, it began with simply listening to music and thinking about how and why using AI in performance would be interesting. The current system is a small-scale version of something that could be really interesting to incorporate into live electronic music performance with a dancer, the two collaborating on visuals and audio at once. In the video, I am controlling the narrative by giving Fonsi head cues about when to do things, but I am also tuning to his live choreography since I want him to stay in a flow. The interaction between musician and dancer was no different from that between bandmates. I really enjoyed this, since it reminded me of when I did West African drumming and dance: both drummers and dancers had to know the rhythm and the choreography in order to send and receive cues as humans. Although I was not dancing, working with the system made me feel it was also giving back to performance practice, since you adapt to this new way of controlling music. With more tuning and better models, I think this system could be fun to deploy at real-world events.


Acknowledgements:

 

Every day the world is polluted with thoughtless art that I have to emotionally process, like eating empty carbs that are just a vehicle for flavors. I am grateful to have had the opportunity to experience the projects of my peers, where there is evidence that their personality is in them. It’s a beautiful thing, and I hope you guys keep putting it out into the world. I’d like to thank Nick Shaheed for helping with OSC in Max/MSP, and Fonsi Bonilla for driving all the way from Oakland just to see this project through and collaborate on solutions on the dancing side of things. Thank you, Professor Ge Wang, for making this class an incredibly valuable experience! I have come out of it with so many questions about the future of music and AI, and with more optimism and opinions. I can’t wait to engage others in conversation about AI and the creative process.