For my final project, I wanted to incorporate several themes we explored in this course: data sonification, gestural synthesis, and extensions of current WebChucK capabilities.
I have been working loosely with an NYC-based startup called Connected Future Labs for about a year, and am lucky enough to have one of their prototype Emotibit devices for my own use. The Emotibit is a wearable biometric sensor module that measures 16 channels of physiological data, is capable of low latency, boasts medical research-grade precision and accuracy, and, most importantly, is completely open source and hackable.
Learn more about Emotibit here
So, with this tool at my disposal, I thought it would be interesting to turn the Emotibit into an instrument, sonifying real-time physiological and gestural data in an expressive, sophisticated way. The main challenge was getting the Emotibit's real-time output into WebChucK: there are several points of communication in the chain, which made it hard to keep everything synchronized. On top of this, both the Emotibit's data-handling software and WebChucK itself are fairly (very) new and still in development, which made troubleshooting difficult. The simplest signal path would go directly from the Emotibit to WebChucK's OSC receiver objects; however, while I was able to build test versions of my instrument in ChucK locally, I found out that WebChucK has not integrated OSC support yet. So the solution I came up with was:
Emotibit (via WiFi hotspot) --OSC--> Node.js UDP socket --> webpage (as global floats/ints/strings) --> WebChucK
I'm happy with the results! I had to fine-tune a lot of values to parse the raw data and scale it into reasonable ranges to work with, but once I found the right ranges and combined related data channels in deliberate (non-arbitrary) ways, I was able to control a lot of musical elements.
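The core of that scaling step can be sketched as a small clamp-and-map helper. The ranges below are made up for illustration; the real values were found by watching each channel's raw output.

```javascript
// Illustrative scaling helper in the spirit of the tuning described:
// clamp a raw sensor value into its observed range, normalize to 0..1,
// then map it linearly onto a musical parameter's range.
// The example numbers are assumptions, not the tuned values.
function mapRange(x, inLo, inHi, outLo, outHi) {
  const t = Math.min(Math.max((x - inLo) / (inHi - inLo), 0), 1); // clamp 0..1
  return outLo + t * (outHi - outLo);
}

// e.g. a raw channel reading of 5 (observed range 0..10)
// driving a filter cutoff between 200 and 2000 Hz:
const cutoff = mapRange(5, 0, 10, 200, 2000); // -> 1100
```

Clamping first matters: a noise spike outside the expected range then pins the parameter at its limit instead of sending it somewhere unmusical.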
The mapping functions are rather complex and it would take a long time to go through how each signal relates to different parameters of different synthesizers, so pending a more exhaustive writeup I will give an example of how one data stream was used.
I used the PPG (photoplethysmogram) data, which optically tracks heart rate and blood flow, to trigger kick and snare drums on different heartbeats. This was surprisingly tricky, as I had to devise a function that detects a "pulse" in the signal without being triggered by huge swings in magnitude due to noise or movement. Eventually I got it working, and it plays kick and snare sounds made in ChucK (based on the 808 kick example from class) according to a drum pattern on an invisible 4/4 grid.
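A minimal sketch of that kind of pulse detector is below. The threshold and refractory length here are illustrative placeholders, not the tuned values, and the real version also had to cope with the PPG signal's drifting baseline.

```javascript
// Sketch of a pulse detector in the spirit described: fire on a rising
// crossing of a threshold, then ignore the signal for a short refractory
// window so noise and motion swings can't re-trigger within one beat.
// threshold and refractorySamples are assumed values, not the tuned ones.
function makePulseDetector({ threshold = 0.6, refractorySamples = 20 } = {}) {
  let prev = 0;      // previous sample, for edge detection
  let cooldown = 0;  // samples left in the refractory window
  return function step(x) {
    let beat = false;
    if (cooldown > 0) {
      cooldown--;                                   // still refractory: ignore
    } else if (prev < threshold && x >= threshold) { // rising crossing
      beat = true;
      cooldown = refractorySamples;
    }
    prev = x;
    return beat; // true -> trigger the kick/snare for this beat
  };
}

// Usage: call step() once per incoming PPG sample.
const step = makePulseDetector({ threshold: 0.5, refractorySamples: 3 });
```

Each `true` from the detector would be forwarded to ChucK to advance the drum pattern by one beat on the 4/4 grid.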
Other combinations of data channels drive the frequency, pitch, gain, filter cutoff, rhythmic density, and harmony of about ten different synth oscillators.
Unfortunately, since the project relies on the Emotibit device itself, this is not a synthesizer anyone can simply access and use digitally. That also makes it hard to "turn in" here, so my submission is a video of my live performance on the instrument during our final class presentation.
Click here to watch
I feel that I accomplished what I set out to do, and I'm excited to share my work with my collaborators at Emotibit. It would have been great if the OSC handling had worked out in WebChucK, but perhaps that is a future direction for the WebChucK development team :)
I also want to thank Chris Chafe, Marise van Zyl, and Nick Shaheed for their help, advice, and support throughout the quarter! I had a great time in this class.