Josh McDermott (MIT) on New Models of Human Hearing via Machine Learning
Date:
Thu, 10/12/2023 - 10:30am - 12:00pm
Location:
CCRMA Seminar Room
Event Type:
Hearing Seminar But perhaps we can do better by ignoring the details and modeling the auditory system as a black box, via a deep neural network (DNN). We can train the model using data from psychoacoustic tests. Ignoring details like the basilar membrane transmission line, and inner and outer hair cells, and all sorts of brain structures, can a DNN provide a good enough model? Can we use these models to design auditory prosthetics?
Note! This seminar is on **Thursday** at 10:30 to accommodate Josh’s travel.
Who: Prof. Josh McDermott (MIT)
What: New Models of Human Hearing via Machine Learning
When: **Thursday**, October 12th at 10:30AM
Where: CCRMA Seminar Room, Top Floor of the Knoll at Stanford
Why: Do all models provide insight???
Prof. Josh McDermott has a large research group at MIT that has looked at a lot of interesting problems, including pitch and reverb perception. What can we learn from these new deep models?
See you at CCRMA on Thursday at 10:30AM!!!
- Malcolm
New Models of Human Hearing via Machine Learning
Prof. Josh McDermott, MIT
Humans derive an enormous amount of information about the world from sound. This talk will describe our recent efforts to leverage contemporary machine learning to build neural network models of our auditory abilities and their instantiation in the brain. Such models have enabled a qualitative step forward in our ability to account for real-world auditory behavior and illuminate function within the auditory cortex. They also open the door to new approaches for designing auditory prosthetics and understanding their effect on behavioral abilities.
Josh McDermott is a perceptual scientist studying sound and hearing in the Department of Brain and Cognitive Sciences at MIT, where he heads the Laboratory for Computational Audition. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. His long-term goal is to understand human hearing in computational terms and to use this understanding to help people hear better.
FREE
Open to the Public