Nima Mesgarani (UCSF) - Reconstructions of Sounds from Cortical Recordings

Fri, 10/28/2011 - 1:15pm - 2:30pm
CCRMA Seminar Room
Event Type: 
Hearing Seminar
It's a pretty audacious experiment: neural recording of auditory cortex from awake humans. Just what are they listening to?

Even more interesting: does a subject's attention change what the primary auditory cortex receives? At some point the brain must isolate one message or the other. But would you expect this in A1? That's not very high up in the brain.

Nima Mesgarani will be at CCRMA on Friday afternoon to talk about his experiments. His group records from electrode arrays implanted in humans (for another good reason Nima can tell you about) while the subjects perform an attention task. There are two talkers: listen for a key word, then report what that person says. Nima then turns these spike recordings into the most likely audio to have generated those spikes. Do the spikes change based on the attended audio?

    Who:    Nima Mesgarani (UCSF)
    Why:    What does auditory scene analysis look like in spikes?
    What:    Robust representation of attended speech in human auditory cortex
    When:    Friday October 28th at 1:15PM
    Where:    CCRMA Seminar Room

Note: the seminar room fills up fast, so if you want a seat please come early. Parking is easiest (for non-Stanford folks) at the Tresidder pay parking lot. I'm told it was full last week. (But that was probably because of Homecoming, not our fabulous speaker.)

See you at CCRMA. We won't let Nima poke *your* cortex, but you will certainly enjoy hearing his results.

- Malcolm

Robust representation of attended speech in human auditory cortex
Nima Mesgarani (UCSF)

Humans possess a remarkable ability to attend to a single speaker’s
voice in a multi-talker background. How the auditory system manages to
extract intelligible speech under such acoustically complex and
adverse listening conditions is not known, and indeed, it is not clear
how attended speech is internally represented. Here, using
multi-electrode recordings from the cortex of patients engaged in a
listening task with two simultaneous speakers, we demonstrate that
population responses in the temporal lobe faithfully encode critical
features of attended speech: speech spectrograms reconstructed based
on cortical responses to the mixture of speakers reveal salient
spectral and temporal features of the attended speaker, as if
listening to that speaker alone. Therefore, a simple classifier
trained solely on examples of single speakers can decode both attended
words and speaker identity. We find that task performance is well
predicted by a rapid increase in attention-modulated neural
selectivity across both local single-electrode and population-level
cortical responses. These findings demonstrate that the temporal lobe
cortical representation of speech does not merely reflect the external
acoustic environment, but instead corresponds to the perceptual aspects
relevant to the listener's intended goal.
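The reconstruction idea in the abstract can be sketched as a linear decoding problem: learn a mapping from multi-electrode responses back to the stimulus spectrogram, then apply it to held-out responses. Below is a minimal illustration on simulated data using ridge regression; the variable names, dimensions, noise model, and the choice of a ridge decoder are assumptions for this sketch, not the talk's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_time, n_freq, n_electrodes = 500, 16, 32

# "True" stimulus spectrogram (time bins x frequency bins) -- stand-in data.
spectrogram = rng.standard_normal((n_time, n_freq))

# Simulated cortical responses: each electrode is a noisy linear
# mixture of the spectrogram features (a deliberately simple model).
mixing = rng.standard_normal((n_freq, n_electrodes))
responses = spectrogram @ mixing + 0.5 * rng.standard_normal((n_time, n_electrodes))

def ridge_decoder(X, Y, lam=1.0):
    """Closed-form ridge solution W minimizing ||X W - Y||^2 + lam ||W||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Fit the decoder on the first half of the data, then reconstruct
# the spectrogram of the held-out second half from responses alone.
W = ridge_decoder(responses[:250], spectrogram[:250])
reconstruction = responses[250:] @ W

# How well does the reconstruction match the actual spectrogram?
r = np.corrcoef(reconstruction.ravel(), spectrogram[250:].ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

With enough electrodes and a favorable signal-to-noise ratio, the held-out reconstruction correlates strongly with the true spectrogram; the interesting question in the talk is what such a reconstruction looks like when two talkers are mixed and only one is attended.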

Nima Mesgarani is a postdoctoral scholar at the Keck Center for
Integrative Neuroscience at the University of California, San
Francisco. He received his Ph.D. in electrical engineering from the
University of Maryland, College Park. Prior to joining UCSF, he was a
postdoctoral fellow at the Center for Language and Speech Processing
at Johns Hopkins University. His research interests include the
representation of speech in the brain and its implications for speech
processing technologies.

Open to the Public