How to wekinate "Assume Control": run both ChucK files and the Wekinator project, then have fun with a Bluetooth-connected Xbox controller!
Shout-out to the Bluetooth controller drivers in macOS Ventura: they got it right.
How to wekinate "(Pod)casting Doubt": run both ChucK files, the Wekinator project, and FaceOSC.
How to wekinate "Head Band": run both ChucK files and the Wekinator project, then send Muse2 OSC data from the Mind Monitor app to the relay port.
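If you haven't used Mind Monitor before: it streams the Muse2's data as OSC messages over UDP to an IP and port you set in the app, and Wekinator listens for its inputs on port 6448 by default. Here is a minimal, stdlib-only Python sketch of the relay step, forwarding each raw OSC datagram unchanged; the port numbers are hypothetical placeholders, not the ones my project uses:

```python
import socket

def relay(listen_port, dest_port, max_packets=None):
    """Forward raw OSC/UDP datagrams from Mind Monitor to Wekinator unchanged.

    listen_port: the port configured in the Mind Monitor app (hypothetical)
    dest_port:   Wekinator's OSC input port (6448 by default)
    """
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("0.0.0.0", listen_port))   # accept packets from the phone
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    forwarded = 0
    while max_packets is None or forwarded < max_packets:
        data, _ = rx.recvfrom(4096)     # one OSC packet per UDP datagram
        tx.sendto(data, ("127.0.0.1", dest_port))
        forwarded += 1
```

The actual relay ChucK file presumably also repacks the various Muse OSC addresses into a single feature vector for Wekinator; this sketch shows only the UDP plumbing.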
This felt like the convergence of the wekinations. I wanted to bring the Xbox controller into the equation again (having had it dormant for so long!), and had a little bit of trouble hooking it up to Wekinator, but when I finally did, it instantly became a tool for play. This seriously makes me redefine the notion of an instrument in a way that no CCRMA project has done before. There aren't scales or tonality in the traditional sense, but there is still a sense of distinct pitch that you can kind of walk along if you find the right combination of inputs. I went for a little hike, did some work hooking it all together, and it turned out super great. Enjoy!
I spent a little time brainstorming an interesting application of FaceOSC and Wekinator, and I landed on modulating an excerpt from the Joe Rogan Experience #1169 with Elon Musk ("Are We in a Simulated Reality?"). I chose four features from FaceOSC to play with: the pose scale, the pose position, the mouth width, and the mouth height. Initially, my intuition led me to imagine directly mapping each feature to a parameter of the system, forgetting there was a neural network (Wekinator) interlinking the two. The reminder that I'd have to train by example instead of coding direct mappings made achieving a result easier, but getting exactly the results I had initially intended harder. I ended up relenting a little bit, giving in to how Wekinator interpolates among the 1,000-odd examples I provided it, which I had at least recorded with some intention.
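For context on the plumbing: Wekinator expects its inputs as an OSC message addressed /wek/inputs (its default) with one float per feature. Here is a stdlib-only Python sketch of packing four FaceOSC-derived features into such a message; the example values are hypothetical, and in practice FaceOSC's output is routed to Wekinator without any hand-rolled encoding:

```python
import struct

def osc_pad(b):
    """Null-terminate, then zero-pad to a 4-byte boundary (per the OSC spec)."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def wek_inputs(values):
    """Build a /wek/inputs OSC message: address, type tags, big-endian floats."""
    msg = osc_pad(b"/wek/inputs")
    msg += osc_pad(b"," + b"f" * len(values))
    for v in values:
        msg += struct.pack(">f", v)
    return msg

# Hypothetical feature values: pose scale, pose position, mouth width, mouth height
packet = wek_inputs([4.2, 310.0, 14.5, 3.0])
```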
In order, the outputs from Wekinator modulate (1) the LPF frequency, (2) the NRev mix, (3) the global rate deviation applied randomly, positively or negatively, to each SndBuf, (4) the global gain deviation applied to each SndBuf in the same way, and (5) the number of unmuted voices (1 to 5). I hope to offer a critique of how mass media shoves oversimplified or misguided information and speculation into our faces, like a mouth that never shuts. I encourage us to think about the kinds of voices we platform and abide by in our daily lives.
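On the output side, Wekinator sends its control values as floats in a message addressed /wek/outputs by default. A minimal sketch of unpacking those five values from a raw UDP datagram, assuming a single OSC message (no bundles) with only float arguments:

```python
import struct

def parse_wek_outputs(packet):
    """Extract the float arguments from a /wek/outputs OSC message.

    Assumes a single message (no bundles) whose arguments are all floats.
    """
    # Address string: null-terminated, padded to a 4-byte boundary
    addr_end = packet.index(b"\x00")
    i = (addr_end + 4) & ~3              # skip address + padding
    # Type tag string, e.g. b",fffff"
    tag_end = packet.index(b"\x00", i)
    tags = packet[i + 1:tag_end].decode()  # drop the leading ','
    i = (tag_end + 4) & ~3               # skip tags + padding
    vals = []
    for t in tags:
        assert t == "f", "only float args expected"
        vals.append(struct.unpack_from(">f", packet, i)[0])
        i += 4
    return packet[:addr_end].decode(), vals
```

In the real patch, these five values would then be clamped and scaled onto the filter, reverb, rate, gain, and voice-count ranges.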
Wow, this was totally spicy. My first wekination, titled “Head Band,” came out of a painstaking process of figuring out how to set up OSC between the Muse2 headband and my laptop so I could aggregate the features for Wekinator. I ended up not using the EEG data at all, having been properly warned about how noisy it is, relying instead on a six-way combination of incoming accelerometer and gyroscope data, which works pretty well for this purpose. For the live synthesis, I adapted the given 5-parameter FM synth code, adding, under the low-pass filter, a breakbeat loop whose tempo changes dynamically.
I spent a long time tuning the feature space (how the training-time head positions and velocities mapped to synth parameters), as well as adding complexity in the form of a second modulated oscillator at an octave interval. The result is what I feel to be a fun, expressive tool, and performing with it remarkably expanded my sense of the work I can do as a Stanford student. This project made me realize that I need more whimsy in my day-to-day life. How fun.
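The synthesis structure described above, a modulated base voice plus a second modulated oscillator an octave up, can be sketched per-sample like this. This is a hedged Python rendering of the idea rather than the actual ChucK patch, and the frequency, ratio, index, and mix values here are hypothetical:

```python
import math

def fm_voice(t, freq, ratio, index):
    """One FM voice: a sine carrier phase-modulated by a sine at freq * ratio."""
    mod = math.sin(2.0 * math.pi * freq * ratio * t)
    return math.sin(2.0 * math.pi * freq * t + index * mod)

def sample(t, freq=220.0, ratio=1.5, index=2.0):
    """Base voice plus a second modulated voice an octave up, mixed quieter."""
    return 0.6 * fm_voice(t, freq, ratio, index) \
         + 0.4 * fm_voice(t, 2.0 * freq, ratio, index)
```

In the live version, Wekinator's outputs would drive parameters like freq and index continuously instead of leaving them fixed.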