Date:
Fri, 02/02/2018 - 10:30am - 12:00pm
Location:
CCRMA Seminar Room
Event Type:
Hearing Seminar
There are many ways to interpret EEG signals, but the newest approaches eschew averaging and perform single-shot decoding. This provides real-time information about how a subject is perceiving a sound. How does one connect arbitrary audio to the resulting EEG signals? Or how does one predict the audio that produced a particular EEG signal? Can I tell which sound you are attending to? I'll be talking about recent work using linear and correlation methods to build better models that connect audition and EEG signals. This is a form of system identification, and it has been shown to work for decoding which of two audio signals a listener is attending to. Cool stuff.
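To give a flavor of the linear-and-correlation approach described above, here is a minimal sketch (not the speaker's actual code; all data and variable names are synthetic illustrations). It assumes a "backward model": ridge regression learns a decoder from multi-channel EEG back to an audio envelope, and attention is decoded by correlating the reconstruction with each candidate stream.

```python
# Hypothetical sketch of linear backward-model attention decoding:
# reconstruct the audio envelope from EEG via ridge regression, then
# pick the candidate stream whose envelope correlates best.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 16

# Two candidate audio envelopes; the "attended" one (env_a) drives the EEG.
env_a = rng.standard_normal(n_samples)
env_b = rng.standard_normal(n_samples)

# Synthetic EEG: each channel is a noisy linear mixture of the attended envelope.
mixing = rng.standard_normal(n_channels)
eeg = np.outer(env_a, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

# System identification: ridge regression learns decoder weights w
# mapping EEG channels -> attended envelope.
lam = 1.0
w = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_channels), eeg.T @ env_a)

# Single-shot decoding: reconstruct the envelope, correlate with each candidate.
recon = eeg @ w
corr_a = np.corrcoef(recon, env_a)[0, 1]
corr_b = np.corrcoef(recon, env_b)[0, 1]
attended = "A" if corr_a > corr_b else "B"
print(f"attended stream: {attended} (r_A={corr_a:.2f}, r_B={corr_b:.2f})")
```

In practice the decoder also spans a window of time lags and is trained on held-out data; this sketch collapses that to a single instantaneous weight per channel to show the core idea.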
Who: Malcolm Slaney (Google and CCRMA)
What: EEG Decoding Algorithms and Results
When: Friday February 2nd at 10:30AM
Where: CCRMA Seminar Room at Stanford
This is work in conjunction with Alain de Cheveigné, Daniel Wong, Søren Fuglsang, Jens Hjortkjær, Enea Ceolini, Giovanni Di Liberto, and Edmund Lalor, based on work we started at the Telluride Neuromorphic Cognition Engineering Workshop. I also want to take the opportunity to thank BrainVision for their support of the Telluride workshop.