Hearing Seminars
CCRMA hosts a weekly Hearing seminar (aka Music 319). All areas related to perception are discussed, but the group emphasizes topics that will help us understand how the auditory system works. Speakers are drawn from the group and from visitors to the Stanford area. Most attendees are graduate students, faculty, or local researchers interested in psychology, music, engineering, neurophysiology, and linguistics. Stanford students can optionally receive credit for attending by enrolling in Music 319, "Research Seminar on Computational Models of Sound Perception." Meetings are usually from 10:30AM to 12:20PM (or so, depending on questions) on Friday mornings in the CCRMA Seminar Room.
The current schedule is announced via a mailing list; to subscribe, please visit https://cm-mail.stanford.edu/mailman/listinfo/hearing-seminar. If you have any questions, please contact Malcolm Slaney at hearing-seminar-admin@ccrma.stanford.edu.
Upcoming Hearing Seminars
Laura Gwilliams on Decoding the Semantics of Audio in the Brain
Date: Fri, 10/06/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
FREE. Open to the Public.

Josh McDermott (MIT) on Auditory Brain Models
Date: Thu, 10/12/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Details to follow.
FREE. Open to the Public.

Karlheinz Brandenburg - Spatial Sound - HRTFs vs. Room Reverb
Date: Fri, 10/20/2023 - 1:30pm - 3:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Note special time.
FREE. Open to the Public.

Robotic Hearing Systems for Autonomous Vehicles
Date: Fri, 10/27/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Details to follow.
FREE. Open to the Public.
Recent Hearing Seminars
Alicia Zuckerman on emotion without audio
Date: Fri, 06/02/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Not everybody hears audio the same way. We at CCRMA have a wealth of experience in how to convey emotion through audio. What can you do without the audio? What are you trying to convey, and what would you like to convey to people who are hard of hearing? How might you do that?
FREE. Open to the Public.

Prof. Dáibhid Ó Maoiléidigh on Making Sense of the Sensory Hearing Cells
Date: Fri, 05/26/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Who: Dáibhid Ó Maoiléidigh, Stanford Otolaryngology
What: Making Sense of the Sensory Hearing Cells
When: Friday May 26th at 10:30AM
Where: CCRMA Seminar Room (Top Floor at The Knoll)
Why: It all starts at the cochlea and hair cells.
FREE. Open to the Public.

Samuel J. Yang (Google) - ML meets hearing - Clarity Enhancement Challenge
Date: Fri, 05/19/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
The Clarity Enhancement Challenge is a (successful) attempt to harness machine-learning technology to make our hearing better. The Clarity team provides data and benchmarks, and all of us get to apply our best technology to solve the problem. In past years they have offered competitions to improve hearing and to measure speech intelligibility.
FREE. Open to the Public.

Antje Ihlefeld - Predicting spatial audio quality for AR/VR
Date: Fri, 05/12/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
FREE. Open to the Public.

Aaron Master (Dolby) - DeepSpace: Dynamic Spatial and Source Cue Based Source Separation for Dialog Enhancement
Date: Fri, 04/28/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Who: Aaron Master (Dolby)
What: DeepSpace: Dynamic Spatial and Source Cue Based Source Separation for Dialog Enhancement
When: Friday April 28th, 2023 at 10:30AM
Where: CCRMA Seminar Room (Top Floor of the Knoll at Stanford)
Why: How can we improve our listening environment?
FREE. Open to the Public.

Prateek Verma - Fourier Transforms and Filter-Banks in the Era of Transformers and GPT
Date: Fri, 04/07/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Prateek Verma has run a large number of interesting audio ML experiments, from speech to music and many other problem areas. He'll be talking about learning a basis for the front end.
Who: Prateek Verma
FREE. Open to the Public.

AI for Sound - Mark Plumbley (Surrey)
Date: Fri, 03/17/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Who: Prof. Mark Plumbley (Surrey)
What: AI for Sound
When: Fri, 03/17/2023 - 10:30am - 12:00pm
Where: CCRMA Seminar Room
Why: AI is good for sound!
FREE. Open to the Public.

Hannes Muesch - Speech Intelligibility
Date: Fri, 03/10/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
FREE. Open to the Public.

Les Atlas (UW) - Better clipping for audio spectrogram DNNs
Date: Fri, 03/03/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Audio has always been troublesome with these networks. What do you do with the phase? Sometimes you can just throw it away, but if you keep it, the phase doesn't work the way that normal numbers (like image intensity) do. And complex numbers aren't any easier. Networks like TasNet avoid the phase problem by learning multiple overlapping "wavelets".
FREE. Open to the Public.

Ludovic Bellier - Decoding a Pink Floyd song from the human brain
Date: Fri, 02/24/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
There has been a lot of work to decode speech signals from brain signals using intracranial EEG (ECoG), MEG, and EEG. But what about music? Does the brain respond the same way? Arguably speech is easier, since it is one-dimensional, and in many studies there is a single source. In addition, speech is likely to engage the motor system, providing another set of neurons from which to decode the basic speech signal. Music is more challenging: it contains multiple acoustic objects and drives the emotional centers of the brain. What does it mean to decode music? Which parts of the brain respond with a signal we can decode in real time?
Who: Ludovic Bellier
FREE. Open to the Public.