Gopala Anumanchipalli (UCB) - Neural Computations in Humans for Speech

Date: Fri, 11/18/2022, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar

Perhaps the holy grail of auditory neuroscience is understanding how our brains process speech. I’m happy that Prof. Gopala Anumanchipalli (UC Berkeley) is coming to the Hearing Seminar this week to talk about his latest decoding work. Gopala and his colleagues at UCSF have done some of the most amazing brain decoding work to date, using electrocorticography (ECoG) to measure brain activity with electrode arrays on the surface of human brains. His earlier work decoded speech directly from these cortical-surface recordings, with language models helping to fill in the missing gaps.
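
A toy illustration of that last point (this sketch is mine, not Gopala’s actual pipeline): if a neural decoder emits an n-best list of candidate transcripts with scores, a language-model prior can rerank them so that a linguistically plausible reading beats one that merely scored best on the neural evidence. Every probability and candidate below is invented.

import math

# Toy bigram language model, P(word | previous word).
# All probabilities here are made up for illustration.
BIGRAM = {
    ("<s>", "how"): 0.20, ("how", "are"): 0.30, ("are", "you"): 0.40,
    ("<s>", "cow"): 0.01, ("cow", "are"): 0.001,
    ("how", "art"): 0.01, ("art", "you"): 0.05,
}
FLOOR = 1e-6  # fallback probability for unseen bigrams

def lm_logprob(words):
    """Log-probability of a word sequence under the toy bigram model."""
    score, prev = 0.0, "<s>"
    for w in words:
        score += math.log(BIGRAM.get((prev, w), FLOOR))
        prev = w
    return score

# Hypothetical n-best list from a neural decoder: (transcript, neural log-score).
candidates = [
    ("cow are you", -4.1),  # best neural score, implausible language
    ("how art you", -4.2),
    ("how are you", -4.3),
]

LM_WEIGHT = 1.0  # how much to trust the prior vs. the neural evidence
best = max(candidates, key=lambda c: c[1] + LM_WEIGHT * lm_logprob(c[0].split()))
print("after LM rescoring:", best[0])  # -> "how are you"

In a real decoder the LM weight is tuned on held-out data; the point is just that the prior rescues words the neural signal left ambiguous.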

His latest work looks at the correspondence between brain signals and the different layers of an unsupervised model of speech. What can DNNs and real cortical networks tell us about how we process speech?
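
To make “correspondence” concrete, the standard recipe is a linear encoding model: for each DNN layer, fit a cross-validated linear map from layer activations to each electrode’s response, then score layers by how well they predict held-out neural data. Below is a minimal sketch with synthetic stand-in data (my illustration, not the paper’s code).

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: activations from three DNN layers (time x features)
# and responses at four electrodes (time x electrodes).
n_time, n_feat, n_elec = 2000, 64, 4
layers = [rng.standard_normal((n_time, n_feat)) for _ in range(3)]

# Make the "neural" data depend on layer 1, so one layer should win.
true_w = rng.standard_normal((n_feat, n_elec))
neural = layers[1] @ true_w + 5.0 * rng.standard_normal((n_time, n_elec))

for i, X in enumerate(layers):
    # A real analysis would use contiguous time splits to avoid leakage
    # between neighboring frames; a shuffled split keeps the sketch short.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, neural, test_size=0.25, random_state=0)
    fit = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)
    pred = fit.predict(X_te)
    # Per-electrode Pearson correlation on held-out data.
    r = [np.corrcoef(pred[:, e], y_te[:, e])[0, 1] for e in range(n_elec)]
    print(f"layer {i}: mean held-out r = {np.mean(r):.2f}")

Only layer 1 should show a high correlation here; ranking real DNN layers by this score is what lets you ask which stage of the network looks like which stage of the auditory pathway.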

Who: Gopala Anumanchipalli (UCB and UCSF)
What: Dissecting neural computations of the human auditory pathway using deep neural networks for speech
When: Friday November 18th at 10:30AM
Where: CCRMA Seminar Room (top floor, behind the elevator)
Why: We want to understand how speech is processed.

See this paper for more information: https://www.biorxiv.org/content/10.1101/2022.03.14.484195v1
 
This is also a good opportunity to learn what you can tell from the brain if you can get electrodes inside the skull. (Not always a good idea, but there are patients with seizures who need these electrode arrays, and they can be asked whether they want to do auditory experiments while waiting for a diagnosis.)

Come to CCRMA and we’ll talk about how you and DNNs process speech.

- Malcolm



Dissecting neural computations of the human auditory pathway using deep neural networks for speech
Prof. Gopala Anumanchipalli (UCB and UCSF)

Abstract: 

The human auditory system extracts rich linguistic abstractions from the speech signal. Traditional approaches to understanding this complex process have used classical linear feature-encoding models, with limited success. Artificial neural networks have recently achieved remarkable speech recognition performance and offer potential alternative computational models of speech processing. We used the speech representations learned by state-of-the-art deep neural network (DNN) models to investigate neural coding across the ascending auditory pathway, from the peripheral auditory nerve to auditory speech cortex. We found that representations in hierarchical layers of the DNN correlated well with neural activity throughout the ascending auditory system. Unsupervised speech models achieved the best neural correlations among all models evaluated. Deeper DNN layers, whose computations are context-dependent, were essential for encoding in populations of higher-order auditory cortex, and these computations aligned with phonemic and syllabic context structures in speech. Accordingly, DNN models trained on a specific language (English or Mandarin) predicted cortical responses in native speakers of that language. These results reveal convergence between the representations learned by DNN models and those in the biological auditory pathway, and they provide new approaches to modeling neural coding in the auditory cortex.
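
If you want to poke at representations like these yourself, here is a minimal sketch of pulling per-layer activations from one publicly available self-supervised speech model (wav2vec 2.0, via the HuggingFace transformers library). The paper evaluates several models; this checkpoint is just a convenient stand-in, and the input below is noise rather than a real stimulus.

import torch
from transformers import Wav2Vec2Model

# Pretrained self-supervised speech model (no ASR fine-tuning).
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

# One second of fake 16 kHz "audio" standing in for a speech stimulus;
# real audio should be normalized with the model's feature extractor first.
waveform = torch.randn(1, 16000)

with torch.no_grad():
    out = model(waveform, output_hidden_states=True)

# out.hidden_states holds one (batch, frames, dims) tensor per layer
# (the feature projection plus each transformer block), ~20 ms per frame.
for i, h in enumerate(out.hidden_states):
    print(f"layer {i:2d}: frames = {h.shape[1]}, dims = {h.shape[2]}")

Each of these per-layer, per-frame activation matrices is exactly the kind of regressor that goes into the encoding models sketched above.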

Bio: Gopala Anumanchipalli is an Assistant Professor in Electrical Engineering and Computer Sciences at UC Berkeley and in the Dept. of Neurosurgery at UC San Francisco. He directs the Berkeley Speech Group, whose research lies at the intersection of spoken language processing, speech neuroscience, and machine learning. His group develops bio-inspired algorithms for spoken-language AI and brain-computer interfaces to restore communication in paralyzed populations.

FREE
Open to the Public