Rui Wang - Recognizing Sounds

Date: 
Fri, 04/20/2012 - 1:15pm - 2:30pm
Location: 
CCRMA Seminar Room
Event Type: 
Hearing Seminar
Just how do we recognize the sounds around us? For many decades the easy answer has been "a neural network."  But seriously, how?  What is it about a sound that allows us to recognize it? What does sound recognition tell us about how the brain is organized?  I think vowel recognition (and more generally speech and sound recognition) is a really interesting and hard problem. I'd like to know how the brain does it.  Perhaps that will allow us to conclusively describe which sounds are similar.

This coming Friday at the CCRMA Hearing Seminar, Rui Wang, a postdoc here at Stanford, will be talking about her work.  She is using EEGs to tease apart the workings of the system. While Rui is studying vowels, I hope her work (or methods) will apply to all sounds, musical or spoken.

    Who:    Rui Wang (Stanford)
    Why:     Sound recognition is fundamental
    What:    Recognizing Vowels (and Timbre)
    When:    Friday April 20 at 1:15PM
    Where:    CCRMA Seminar Room (Top Floor of the Knoll)

Bring your favorite phoneme recognizer to CCRMA.  We'll have a grand time discussing speech (and timbre)!

- Malcolm


Abstract:

How the human brain processes phonemes has long been a subject of interest for linguists and neuroscientists. Electroencephalography (EEG) offers a promising tool for observing the neural activity of phoneme perception in the brain, thanks to its high temporal resolution, low cost, and non-invasiveness. Studies of Mismatch Negativity (MMN) effects in EEG activity in the 1990s suggested the existence of a language-specific central phoneme representation in the brain. Recent magnetoencephalography (MEG) findings on phonemes also implied that the brain encodes the complex acoustic-phonetic information of speech into representations of phonological features before lexical information is retrieved. However, very little success has been reported in classifying the brain activity associated with phoneme processing, a capability that is fundamental to developing a Brain-Computer Interface (BCI) based on natural human language.

In this talk, I introduce a model for recognizing phonemes from averaged EEG recordings. I discuss results for recognizing eight consonants or four vowels using only the phase pattern of the EEG recordings. Furthermore, a qualitative analysis of the similarities between the EEG representations, derived from the confusion matrices, illustrates the invariance between the brain's representation of phonemes and their perceptual representation. Inspired by the strong connection between the similarities of the EEG representations of phonemes and their distinctive features, I propose a recognition model based on distinctive features which can easily be extended to recognize more phonemes.  The success in recognizing brain phoneme representations using distinctive features also opens up new opportunities to study the neural mechanisms of distinctive features in phoneme perception.
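To make the two ingredients of the abstract concrete, here is a minimal Python sketch (NumPy only) of a phase-only EEG feature and a distinctive-feature classifier. Everything specific in it, the feature table, the 16-bin phase pattern, and the circular distance, is my illustrative assumption, not Wang's actual model:

    import numpy as np

    # A hypothetical binary distinctive-feature table (voiced, nasal, labial).
    # The features and values are illustrative only, not Wang's actual set.
    FEATURES = {
        "b": (1, 0, 1), "p": (0, 0, 1), "m": (1, 1, 1),
        "d": (1, 0, 0), "t": (0, 0, 0), "n": (1, 1, 0),
    }

    def phase_pattern(avg_eeg, n_bins=16):
        """Phase-only feature: the FFT phase of an averaged epoch
        (channels x samples), keeping the lowest n_bins frequency bins
        and discarding magnitude."""
        return np.angle(np.fft.rfft(avg_eeg, axis=-1)[..., :n_bins]).ravel()

    def phase_distance(a, b):
        """A circular distance between two phase patterns."""
        return np.mean(1.0 - np.cos(a - b))

    def classify_by_features(avg_eeg, templates):
        """Decide each distinctive feature independently from the EEG phase
        pattern, then return the phoneme whose feature vector best matches.
        templates: phoneme -> averaged training epoch (channels x samples),
        covering every phoneme in FEATURES."""
        probe = phase_pattern(avg_eeg)
        ref = {p: phase_pattern(e) for p, e in templates.items()}
        decoded = []
        for i in range(len(FEATURES["b"])):
            # Mean distance from the probe to phonemes with feature i = 0 vs 1.
            dist = {v: np.mean([phase_distance(probe, ref[p])
                                for p, f in FEATURES.items() if f[i] == v])
                    for v in (0, 1)}
            decoded.append(min(dist, key=dist.get))
        # Pick the nearest feature vector in Hamming distance.
        return min(FEATURES, key=lambda p: sum(fi != di for fi, di
                                               in zip(FEATURES[p], decoded)))

The appeal of a feature-based decoder is the extensibility the abstract mentions: each binary feature pools evidence across several phonemes, so covering a new phoneme only requires adding a row to the feature table rather than training a new whole-phoneme classifier.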

Rui Wang received her B.S. and M.S. degrees from Tsinghua University, Beijing, China.  She received her Ph.D. in Electrical Engineering from Stanford University in 2011. She is working at the Suppes Brain Lab as a Research Associate on statistical signal processing methods for recognizing the brainwaves evoked by speech stimuli. She is also interested in speech signal processing and automatic speech recognition.
FREE
Open to the Public