Narayan Sankaran on Decoding Music & Speech from M/EEG activity
Date:
Fri, 01/25/2019 - 10:30am - 1:55pm
Location:
CCRMA Seminar Room
Event Type:
Hearing Seminar
I’m happy to introduce Narayan Sankaran, a newcomer to the Bay Area who has been doing a lot of work measuring how our brains respond to speech and music. Most importantly, he is looking at how the brain distinguishes different tones and phonemes. This has been done with single-unit recordings and perhaps ECoG, but doing it with EEG is novel. And then, how do these signals get turned into meaning? Very cool work.
Who: Narayan Sankaran (UCSF)
What: Decoding Music & Speech from M/EEG activity
When: 10:30am on Friday, January 25, 2019
Where: CCRMA Seminar Room
Why: We’d really like to know how speech and music are encoded by the brain
Come to CCRMA and we’ll talk about how your brain actually recognizes sounds.
Decoding Music & Speech from M/EEG activity
Narayan Sankaran (UCSF)
In tonal music and speech perception, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged internal representations of pitch and phonemes, respectively. To examine the neural dynamics underlying these transformations, we recorded time-resolved cortical activity during both music and speech listening. Decoders then attempted to classify the identity of stimuli (tones in study 1; phonemes in study 2) from the corresponding neural activity at each peristimulus time-sample. The accuracy with which classifiers discriminate stimuli provides a dynamic measure of their dissimilarity in cortex, and this dissimilarity structure can be compared directly with acoustic and perceptual models. Reflecting the transformation from acoustics to meaning, we observe a temporal evolution in the representational structure of both music and speech. In music, for example, while dissimilarities between tones initially mirror their separation in fundamental frequency, distinctions beyond 200 ms reflect the tones' perceptual status within the tonal hierarchy. Thus, consistent with our listening experience, the current results track the rapid dynamics with which the complex perceptual structure of tonal music and speech emerges in cortex.
Bio:
Narayan’s PhD research at the University of Sydney examined the representational dynamics of speech and music in human cortex using M/EEG. He is currently a postdoctoral researcher in the Chang Lab, within the Department of Neurological Surgery at the University of California, San Francisco (UCSF). His research uses intracranial (ECoG) recordings from the human temporal lobe during auditory perception to examine mechanisms of temporal binding in speech and music.
FREE
Open to the Public