Decoding Speech and Language Representations from the Brain

Date: Fri, 02/21/2020, 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Gopala Anumanchipalli will be talking about his work to measure and reconstruct speech signals from human neural activity. Gopala is uniquely situated to do this work, having graduated from Prof. Alan Black's speech synthesis lab at CMU, and he is now part of Prof. Eddie Chang's excellent ECoG lab at UCSF. At UCSF, patients undergoing surgery for epilepsy sometimes allow recordings to be made as they listen and respond to sounds. One important question is how speech is represented in the brain. Gopala's work uses deep neural networks (DNNs) to reconstruct speech sounds from human ECoG data. Speech from spikes.

Abstract
Spoken communication is basic to who we are. Neurological conditions that result in loss of speech can be devastating for affected patients. This talk will summarize recent efforts in decoding neural activity recorded directly from the surface of the speech cortex during fluent speech production, monitored using intracranial Electrocorticography (ECoG). Decoding speech from neural activity is challenging because speaking requires very precise and rapid multi-dimensional control of vocal tract articulators. I will first describe the articulatory encoding characteristics of the speech motor cortex and compare them with other representations such as phonemes. I will then describe deep learning approaches that convert neural activity into these articulatory physiological signals, which can then be transformed into audible speech acoustics or decoded to text. We show that such biomimetic strategies make optimal use of available data, generalize well across subjects, and also enable silent speech decoding. These results set a new benchmark in the development of Brain-Computer Interfaces for assistive communication in paralyzed individuals with intact cortical function.
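
The abstract describes a two-stage, articulation-mediated ("biomimetic") decoding pipeline: ECoG activity is first decoded into articulatory kinematic trajectories, which are then converted into acoustic features. The following is a minimal sketch of that general architecture, not the lab's actual model; the electrode count, articulatory and acoustic feature dimensions, and the use of bidirectional LSTMs are illustrative assumptions.

```python
# Minimal sketch of a two-stage ECoG-to-speech decoder (illustrative only).
# Stage 1 maps neural features to articulatory kinematics; stage 2 maps
# articulatory kinematics to acoustic features. All dimensions are assumed.
import torch
import torch.nn as nn


class ArticulatoryDecoder(nn.Module):
    """Stage 1: ECoG features -> articulatory kinematic trajectories."""

    def __init__(self, n_electrodes=256, n_articulatory=33, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulatory)

    def forward(self, ecog):                  # ecog: (batch, time, electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                    # (batch, time, articulatory dims)


class AcousticDecoder(nn.Module):
    """Stage 2: articulatory kinematics -> acoustic features (e.g. mel spectra)."""

    def __init__(self, n_articulatory=33, n_acoustic=80, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulatory, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, artic):                 # artic: (batch, time, articulatory dims)
        h, _ = self.rnn(artic)
        return self.out(h)                    # (batch, time, acoustic dims)


if __name__ == "__main__":
    # Toy forward pass with random data standing in for real ECoG recordings.
    ecog = torch.randn(1, 200, 256)           # 200 time frames, 256 electrodes
    stage1, stage2 = ArticulatoryDecoder(), AcousticDecoder()
    acoustics = stage2(stage1(ecog))
    print(acoustics.shape)                    # torch.Size([1, 200, 80])
```

The appeal of the intermediate articulatory stage, as the abstract argues, is that it mirrors the physiology of speech production, which helps the decoder use limited patient data efficiently and transfer across subjects.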

Bio:
Gopala Anumanchipalli, PhD, is an associate researcher in the Department of Neurological Surgery at the University of California, San Francisco. His research is in understanding the neural mechanisms of human speech production toward developing next-generation Brain-Computer Interfaces. Gopala was a postdoctoral fellow at UCSF working with Edward F. Chang, MD, and previously received his PhD in Language and Information Technologies from Carnegie Mellon University, working with Prof. Alan Black on speech synthesis.
FREE
Open to the Public