Christine Evers on Embodied Audio
Date:
Tue, 04/09/2024 - 10:30am - 12:00pm
Location:
CCRMA Seminar Room
Event Type:
Hearing Seminar

At next **Tuesday's** Hearing Seminar, Prof. Christine Evers from Southampton will be talking about embodied audio, or how to teach (noisy) robots how to hear. The last time I saw Dr. Evers, she won a best presentation award, and I'm looking forward to hearing her perspective on how to help our machine overlords :-) hear better.
Who: Prof. Christine Evers (Southampton)
What: Auditory Perception for Robots
When: Tuesday April 9 at 10:30AM <<< Note Tuesday not Friday
Where: CCRMA Library, top floor of the Knoll at Stanford
Why: Robots are everywhere, and we’d like them to understand the auditory world.
We're not quite to the point where you can take a robot car to CCRMA, but soon. Come to CCRMA to hear how to help robots hear better.
- Malcolm
Auditory Perception for Robots
Robots are embodied, autonomous systems that co-exist, interact with, and assist humans. Audition – the ability to hear – enables intuitive human-robot interaction as well as situational awareness. Acoustic signals encapsulate a wealth of semantic information, including cues about sound events outside of the field-of-view of a robot’s cameras. However, in practice, multiple, intermittently active, and competing sound events are compressed into a single waveform per microphone. In addition, auditory perception is further destabilised by a robot’s bodily motions (‘ego-motion’). As a consequence, audio data is subject to severe uncertainty, e.g., in the temporal onsets / endpoints and spatial locations of overlapping events.
To leverage audio data for robot perception, new machine learning models are required that can quantify, leverage, and mitigate uncertainty. In this talk, Dr Christine Evers will discuss her work on machine listening and Bayesian learning. The talk will provide an overview of the challenges affecting auditory perception of robots in dynamic, acoustic scenes, followed by a discussion of recent advances at the intersection of acoustic signal processing and machine learning that equip robots with the ability to make sense of life in sound. The seminar will conclude with an outline of open challenges and future opportunities.
Biography:
Christine Evers is an Associate Professor in the School of Electronics and Computer Science (ECS) at the University of Southampton. She specialises in Machine Listening, with a focus on Bayesian methods for uncertainty quantification in decision-making. Her research sits at the intersection of robotics, machine learning, and acoustic signal processing.
She is the Director of the recently created ECS Centre for Robotics, Principal Investigator (PI) of the EPSRC-funded research programme "Active AudiTiOn for Robots (ActivATOR)", and a Co-Investigator (Co-I) on the EPSRC project "Challenges in Immersive Audio Technology (CIAT)", the UKRI Trustworthy Autonomous Systems Hub, and the UKRI Centre for Doctoral Training in Machine Intelligence for Nano-Electronic Devices and Systems (MINDS).
FREE
Open to the Public