Description: I made 3 really useless interactive ML systems lmaoooo. Here's more info about the assignment: https://ccrma.stanford.edu/wiki/356-winter-2023/hw3
Here, I use FaceOSC to control the pitch and gain of a very specific audio file of someone screaming. This is done with my face, specifically my mouth: the wider I open my mouth, the louder and higher-pitched the sound gets. The audio file in question is a particularly popular meme in internet culture, so it's really funny to see it repurposed as a musical instrument. FaceOSC sends my mouth keypoints to Wekinator, which then sends pitch and gain values back to ChucK.
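For the curious, the ChucK side looks roughly like this (a minimal sketch, not my exact script: the file name is a placeholder, and the port 12000 and "/wek/outputs" address are just Wekinator's defaults):

```chuck
// minimal sketch: map two Wekinator outputs to SndBuf rate + gain
// assumes Wekinator's default output port (12000) and address (/wek/outputs)
OscIn oin;
OscMsg msg;
12000 => oin.port;
oin.addAddress( "/wek/outputs, f f" );

// "scream.wav" is a placeholder for the actual meme audio
SndBuf scream => dac;
"scream.wav" => scream.read;

while( true )
{
    oin => now; // wait for an OSC message
    while( oin.recv(msg) )
    {
        // output 0 -> playback rate (pitch), output 1 -> gain
        msg.getFloat(0) => scream.rate;
        msg.getFloat(1) => scream.gain;
    }
}
```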
Here, I use VisionOSC's face pose information to trigger the vine boom sound. Each time I raise my eyebrows, the vine boom plays. VisionOSC sends info to Wekinator, which sends my ChucK script the probability that I am raising my eyebrows. If it goes above 0.5, I play the vine boom. I added control flow to make sure the sound only plays again once the probability drops back below 0.5; that way the sound doesn't keep retriggering while my eyebrows stay raised.
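The rising-edge control flow can be sketched like so (again just a sketch with assumed names and Wekinator's default port/address, not my literal code):

```chuck
// sketch: play the sound once per eyebrow raise, not continuously
OscIn oin;
OscMsg msg;
12000 => oin.port;
oin.addAddress( "/wek/outputs, f" );

// "vineboom.wav" is a placeholder file name
SndBuf boom => dac;
"vineboom.wav" => boom.read;
boom.samples() => boom.pos; // start silent (seek to end)

true => int ready; // re-armed only after probability falls below 0.5
while( true )
{
    oin => now;
    while( oin.recv(msg) )
    {
        msg.getFloat(0) => float prob;
        if( prob > 0.5 && ready )
        {
            0 => boom.pos;   // retrigger from the start
            false => ready;  // block repeats while eyebrows stay up
        }
        else if( prob < 0.5 ) true => ready; // re-arm
    }
}
```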
For this wekination, which is my main one, I was inspired by the theremin, and I wondered to myself: what if the theremin used a camera? And what if it simulated a different instrument? So that's exactly what I did here. I run ChucK's Flute through PoleZero and JCRev before the dac. Various flute parameters are controlled by Wekinator, which takes VisionOSC's hand pose information as input. The flute parameters in question are frequency, jetDelay, jetReflection, endReflection, noiseGain, vibratoFreq, vibratoGain, and pressure.
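The patch and parameter mapping look roughly like this (a sketch: the PoleZero/JCRev settings, the output ordering, and the use of Wekinator's defaults are assumptions, and the scaling of each output to a sensible parameter range is omitted):

```chuck
// sketch: STK Flute controlled by eight Wekinator outputs
Flute flute => PoleZero pz => JCRev rev => dac;
.99 => pz.blockZero;  // block DC (assumed setting)
.1 => rev.mix;        // a touch of reverb (assumed amount)

OscIn oin;
OscMsg msg;
12000 => oin.port;
oin.addAddress( "/wek/outputs, f f f f f f f f" );

while( true )
{
    oin => now;
    while( oin.recv(msg) )
    {
        // one Wekinator output per flute parameter, in this (assumed) order
        msg.getFloat(0) => flute.freq;
        msg.getFloat(1) => flute.jetDelay;
        msg.getFloat(2) => flute.jetReflection;
        msg.getFloat(3) => flute.endReflection;
        msg.getFloat(4) => flute.noiseGain;
        msg.getFloat(5) => flute.vibratoFreq;
        msg.getFloat(6) => flute.vibratoGain;
        msg.getFloat(7) => flute.pressure;
    }
}
```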
Thanks to Ge for creating the starter code, which was definitely useful for getting my systems up and running. And thanks to the internet/YouTube for providing the audio files for the scream and vine boom sounds :)