This week was spent (1) developing a project idea, (2) assessing the tools I would need to make it happen, and (3) starting to set up the infrastructure. My goal for this project is to sonify listener EEG in a way that, at least theoretically, could be useful to a musician. The hope would be for audience engagement to feed back into the creative loop in some way.
In terms of planning, I'd like to approach the project in stages: first, using canned EEG data, which is available online from Blair's research and other music cognition projects; second, using consumer-grade EEG headsets in a live context; and third, if time permits, using the high-quality setup in the NeuroMusic lab.
I was able to configure a Max patch to take in the MATLAB EEG files and read multiple streams of sensor data. The consumer headset (Muse) is on its way, and I am looking into the availability/accessibility of the lab's machines. For next week, I'd like to take the canned data and develop a few potential sonifications, as well as set up intake from the Muse headset once it arrives.
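Outside of Max, the canned files can be sanity-checked in Python. This is only a sketch, assuming SciPy is available and that each .mat file holds one channels-by-samples matrix; the variable name "eeg" and the file shape are placeholders for whatever the actual recordings use (here a random stand-in file is generated first so the snippet is self-contained).

```python
# Sketch: inspecting a canned EEG .mat file in Python (assumes SciPy).
# The variable name "eeg" and the (channels, samples) layout are assumptions.
import numpy as np
from scipy.io import loadmat, savemat

# Stand-in for a real recording: 4 channels, 256 samples.
savemat("demo_eeg.mat", {"eeg": np.random.randn(4, 256)})

data = loadmat("demo_eeg.mat")
eeg = data["eeg"]                     # shape (channels, samples)
for ch, stream in enumerate(eeg):     # one stream per sensor
    print(ch, stream.shape)
```

Reading the matrix row by row mirrors how the Max patch pulls one stream per sensor.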
The Muse headset finally arrived, and I discovered that, contrary to advertising, the headset cannot (by itself) stream OSC to your laptop. The only way to make that transfer is through a $15 app called Muse Monitor, which connects to the headset via Bluetooth and then streams out over OSC. The technology is a bit finicky, but it clearly registers electrical activity, as demonstrated by eye movements and alpha waves.
I've been working on two modes of operation through Max. (1) A canned analysis mode, which lets you send data to Octave over OSC to compute the inter-subject correlations and produce a plot. (2) A live analysis mode, which lets you compare a live Muse recording with a historical one. At the moment, I am attempting to sync them up so that we are comparing comparable moments in the stimulus presentation.
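One candidate approach to that alignment problem, sketched here with NumPy rather than the actual Max/Octave pipeline, is to cross-correlate a shared reference channel and shift by the lag with the highest correlation. This assumes both recordings share a sample rate; the signal lengths below are made up.

```python
# Sketch: aligning a live recording to a historical one by cross-correlation.
# Assumes both streams are at the same sample rate.
import numpy as np

def best_lag(live, historical):
    """Return the sample offset that best aligns `live` to `historical`."""
    live = live - live.mean()
    historical = historical - historical.mean()
    xcorr = np.correlate(live, historical, mode="full")
    return int(np.argmax(xcorr)) - (len(historical) - 1)

# Toy check: a copy delayed by 25 samples should report that offset.
rng = np.random.default_rng(0)
ref = rng.standard_normal(500)
delayed = np.concatenate([np.zeros(25), ref])[:500]
print(best_lag(delayed, ref))  # 25
```

A positive lag means the live stream started late relative to the historical stimulus timeline.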
I was finally able to get into the meat of implementing the Max modes I sketched out the week before. I have also been working on implementing the ISC computations found in the Dmochowski paper (Correlated Components of Ongoing EEG Point to Emotionally Laden Attention – A Possible Marker of Engagement?).
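The paper's full correlated-components method solves a generalized eigenvalue problem over pooled between-subject and within-subject covariances. As a much-simplified stand-in (not the paper's method), the core idea of inter-subject correlation can be sketched as a per-channel Pearson correlation between two subjects, averaged across channels:

```python
# Sketch: a naive per-channel ISC between two subjects' EEG.
# This is a simplification of the correlated-components analysis,
# not the Dmochowski method itself.
import numpy as np

def naive_isc(subj_a, subj_b):
    """subj_a, subj_b: arrays of shape (channels, samples)."""
    corrs = [np.corrcoef(a, b)[0, 1] for a, b in zip(subj_a, subj_b)]
    return float(np.mean(corrs))

# Toy data: a shared stimulus-driven signal plus subject-specific noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal((4, 1000))
a = shared + 0.5 * rng.standard_normal((4, 1000))
b = shared + 0.5 * rng.standard_normal((4, 1000))
print(round(naive_isc(a, b), 2))  # high ISC, roughly 0.8
```

The toy data makes the intuition concrete: the stronger the shared stimulus-driven component relative to subject-specific noise, the higher the ISC.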
A good deal of this week was spent figuring out whether it really made sense to continue working in Max/MSP. The utilities for receiving OSC messages in Octave are bare-bones, and I do not think it is worth my time to keep figuring out how to send large matrices reliably over OSC. At Matt's suggestion, I spent time this week porting my tools to Python using python-osc and Oct2Py. They appear to work well after a bit of software troubleshooting. My main open question is whether the measure of engagement I've developed is identical to the one presented in the paper, and whether I should explore alternate measures like spectral coherence.