Decoding Imagined Sounds with EEG

Date: 
Fri, 10/04/2013 - 11:00am - 12:30pm
Location: 
Seminar Room
Event Type: 
Hearing Seminar
I’d like to talk about some EEG experiments we did at the Telluride Neuromorphic Engineering Workshop this last summer. We were interested in seeing whether we could measure and characterize the response to imagined sounds. This is a form of top-down signal, important for how we understand the complicated world around us. I’ll talk about the motivation for our work, the experiment, the preliminary results, and where we go next.

This is very preliminary work, with no final results yet, so I expect this talk will evolve into a general discussion of EEG and auditory perception.

Who: Malcolm Slaney (Microsoft Research)
What: Decoding (Single Trial) Imagined Sounds with EEG
When: Friday October 4 at 11AM <<<< Note new regular time!!!
Where: CCRMA Seminar Room (Top floor of the Knoll, behind the elevator)

We had a lot of fun this summer in Telluride, and got some interesting data. Let’s talk about what it means, and how to understand the bottom-up and top-down feedback paths that help us understand all the sounds around us.

---- Malcolm

Decoding (Single Trial) Imagined Sounds with EEG
Malcolm Slaney (Microsoft Research) and a cast of dozens

We are interested in both bottom-up and top-down processing of auditory signals. Much work has used electroencephalography (EEG) recordings to measure and even decode responses to perceived auditory stimuli, the bottom-up signal path. In this (pilot) study we applied the same decoding techniques to the top-down signal: imagined sounds. We therefore set out to decode a listener’s response to imagined sounds from EEG signals in real time, knowledge that could be incorporated into next-generation assistive listening devices designed to help listeners perceive an attended signal.

Offline, we acquired EEG data while subjects imagined one of a small set of sounds. Because auditory signals are necessarily time-varying, synchronization is important: subjects were cued with a visual display before hearing the prototype sound, and again before imagining the same sound. In this abstract we report results for two speech sounds, both nursery rhymes, but we also looked at simple musical passages.
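For concreteness, here is a minimal sketch (Python/NumPy) of the cue-aligned bookkeeping such a protocol implies; the names (extract_epochs, cue_samples) and the epoch length are hypothetical, not the actual Telluride pipeline:

```python
import numpy as np

def extract_epochs(eeg, cue_samples, sfreq, tmin=0.0, tmax=4.0):
    """Slice cue-aligned trials out of a continuous EEG recording.

    eeg:         channels x samples array of the continuous recording
    cue_samples: sample indices of the visual "imagine now" cues
    sfreq:       sampling rate in Hz
    Returns a trials x channels x samples array, one epoch per cue.
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    return np.stack([eeg[:, c + start:c + stop] for c in cue_samples])
```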

We used EEG recordings of the imagined sounds to build two different kinds of models of auditory imagination. The first model judged whether the imagined response was closer to one of two prototype imagined sounds. These judgments were based on either autocorrelation measures of the signal or representations of the signals enhanced using Denoising Source Separation (DSS). We looked at both support-vector machine (SVM) and nearest-neighbor (NN) classifiers. The second type of model decoded the imagined speech, mapping from EEG signals into an auditory envelope (below 15 Hz). These decoders were built using linear regression from the EEG signals back to the auditory envelope. From these imagined envelopes we could then judge how well the reconstructions matched the original sounds. We also measured the EEG responses to Steady-State Auditory Stimulation (SSAS) at 4, 6, and 40 Hz, with the hope that these sinusoidal modulations would not invoke motor areas of the brain.
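A rough sketch of what the first kind of model might look like, using scikit-learn; the autocorrelation feature choices (max_lag, zero-lag normalization) are illustrative assumptions, and the DSS enhancement step is omitted:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def autocorr_features(epochs, max_lag=64):
    """Per-channel autocorrelation out to max_lag, concatenated per trial."""
    feats = []
    for trial in epochs:                       # trial: channels x samples
        rows = []
        for ch in trial:
            ch = ch - ch.mean()
            n = len(ch)
            ac = np.correlate(ch, ch, mode="full")[n - 1:n - 1 + max_lag]
            rows.append(ac / (ac[0] + 1e-12))  # normalize by zero-lag power
        feats.append(np.concatenate(rows))
    return np.array(feats)

def compare_classifiers(epochs, labels):
    """Cross-validated accuracy for SVM and 1-NN on autocorrelation features.

    epochs: trials x channels x samples (see the epoching sketch above)
    labels: one 0/1 entry per trial saying which sound was imagined
    """
    X = autocorr_features(epochs)
    return {type(clf).__name__: cross_val_score(clf, X, labels, cv=5).mean()
            for clf in (SVC(kernel="linear"),
                        KNeighborsClassifier(n_neighbors=1))}
```

And a sketch of the second kind of model, stimulus reconstruction by linear regression from lagged EEG samples to the low-frequency envelope; the lag count and ridge regularization strength here are illustrative assumptions, not the values we used:

```python
import numpy as np

def lagged_design(eeg, n_lags):
    """Each row holds the most recent n_lags samples of every channel."""
    n_ch, n_s = eeg.shape
    X = np.empty((n_s - n_lags, n_ch * n_lags))
    for t in range(n_lags, n_s):
        X[t - n_lags] = eeg[:, t - n_lags:t].ravel()
    return X

def train_envelope_decoder(eeg, envelope, n_lags=32, reg=1e3):
    """Ridge-regularized linear map from lagged EEG to the audio envelope.

    eeg:      channels x samples, band-limited and resampled to the
              envelope's rate
    envelope: the target audio envelope (below 15 Hz), same rate as eeg
    """
    X, y = lagged_design(eeg, n_lags), envelope[n_lags:]
    # Regularized least squares: w = (X'X + reg*I)^-1 X'y
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)

def decode_envelope(eeg, w, n_lags=32):
    """Apply a trained decoder to new EEG.

    The output can be compared against each candidate sound's true
    envelope (e.g. by Pearson correlation) to judge which was imagined.
    """
    return lagged_design(eeg, n_lags) @ w
```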
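This forward-model-free reconstruction approach mirrors the envelope-decoding methods used for perceived speech; the only change here is that the regression targets come from imagined rather than heard trials.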

EEG signals can be decoded in real time to determine, with relatively high accuracy, which of a small set of sounds a listener is imagining. More work remains to be done to separate out the motor and auditory contributions. The SSAS results are a bit surprising, since the perceived SSAS response is strongest at 40 Hz; perhaps our subjects were not able to imagine such a high-frequency stimulus.
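For reference, checking the SSAS responses amounts to comparing spectral power at the stimulation frequencies; a minimal sketch, with ssas_power a hypothetical helper:

```python
import numpy as np

def ssas_power(eeg, sfreq, freqs=(4.0, 6.0, 40.0)):
    """Mean spectral power across channels at each SSAS stimulation frequency.

    eeg:   channels x samples array for one recording
    sfreq: sampling rate in Hz
    """
    n = eeg.shape[1]
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2 / n
    bins = np.fft.rfftfreq(n, d=1.0 / sfreq)
    return {f: spectrum[:, np.argmin(np.abs(bins - f))].mean() for f in freqs}
```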

Biography
Dr. Malcolm Slaney is a principal scientist at Microsoft Research (Silicon Valley). He is a (consulting) Professor at Stanford CCRMA, where he has led the Hearing Seminar for more than 20 years, and an Affiliate Faculty in the Electrical Engineering Department at the University of Washington. He is a Fellow of the IEEE and a (former) Associate Editor of IEEE Transactions on Audio, Speech and Signal Processing and IEEE Multimedia Magazine. He has given successful tutorials at ICASSP 1996 and 2009 on “Applications of Psychoacoustics to Signal Processing,” on “Multimedia Information Retrieval” at SIGIR and ICASSP, and on “Web-Scale Multimedia Data” at ACM Multimedia 2010. He is a coauthor, with A. C. Kak, of the IEEE book “Principles of Computerized Tomographic Imaging,” which was recently republished by SIAM in their “Classics in Applied Mathematics” series. He is coeditor, with Steven Greenberg, of the book “Computational Models of Auditory Function.” Before Microsoft Research, Dr. Slaney worked at Bell Laboratory, Schlumberger Palo Alto Research, Apple Computer, Interval Research, IBM’s Almaden Research Center, and Yahoo! Research. For many years, he has led the auditory group at the Telluride Neuromorphic (Cognition) Workshop. Dr. Slaney’s recent work is on understanding conversational speech, in addition to general audio perception.

FREE
Open to the Public