Capturing, Visualizing and Recreating Spatial Sound (Ramani Duraiswami, Univ. of Maryland)
And the following Wednesday (August 3rd at 4PM), DeLiang Wang from Ohio State will be talking about his work on auditory scene analysis. More details on this talk to follow next week. But put it on your calendar now.
Who: Ramani Duraiswami (Univ. of Maryland)
What: Capturing, Visualizing and Recreating Spatial Sound
When: Monday July 25th at 4PM
Where: CCRMA Seminar Room (Top Floor of the Knoll)
Bring your 3D sound perception system to CCRMA and we'll talk about the best ways to tickle it!
Title: Capturing, Visualizing and Recreating Spatial Sound
Speaker: Ramani Duraiswami
Department of Computer Science, University of Maryland;
and VisiSonics Corporation
The sound field at a point contains information about the spatial origin of the sound, and humans use this information to make sense of their environment. The sound we hear is filtered by interaction with the environment and with our bodies. This process endows the sound with cues that the neural system then decodes to perceive the world auditorily in three dimensions. To capture and reproduce this directional information we need a spatial representation of the sound, and a means to capture and manipulate sound in that representation. We have explored two classical representations of directional sound from mathematical physics: expansions in spherical wave functions and in plane waves. We have developed spherical microphone arrays that allow the captured sound to be represented directly in these bases.
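To make the idea of representing array recordings in a spherical basis concrete, here is a minimal sketch (not the speaker's actual method) of projecting pressures sampled on a sphere onto low-order real spherical harmonics by least squares; the 16-microphone array geometry and the coefficient values are illustrative assumptions.

```python
# Sketch: project sphere-sampled pressures onto real spherical harmonics
# of orders 0 and 1 via least squares. Array geometry is hypothetical.
import numpy as np

def real_sh_order1(theta, phi):
    """Real spherical harmonics up to order 1.
    theta: polar angle from +z, phi: azimuth. Returns shape (..., 4)."""
    y00 = np.full_like(theta, 0.5 * np.sqrt(1.0 / np.pi))
    y1m1 = np.sqrt(3.0 / (4.0 * np.pi)) * np.sin(theta) * np.sin(phi)
    y10 = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta)
    y1p1 = np.sqrt(3.0 / (4.0 * np.pi)) * np.sin(theta) * np.cos(phi)
    return np.stack([y00, y1m1, y10, y1p1], axis=-1)

# Hypothetical 16-mic array: quasi-random directions on the unit sphere.
rng = np.random.default_rng(0)
theta = np.arccos(rng.uniform(-1, 1, 16))   # polar angles
phi = rng.uniform(0, 2 * np.pi, 16)         # azimuths

Y = real_sh_order1(theta, phi)              # (16, 4) sampling matrix

# Synthesize a field that is exactly order-1, with known coefficients.
true_coeffs = np.array([1.0, 0.2, -0.5, 0.3])
pressures = Y @ true_coeffs

# Least-squares projection recovers the spherical-harmonic coefficients.
coeffs, *_ = np.linalg.lstsq(Y, pressures, rcond=None)
```

Real arrays sample a finite number of points, so the expansion must be truncated at an order the geometry can support; practical designs also compensate for the scattering of the rigid sphere, which this toy example omits.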
Plane-wave beamforming allows the sound field at a point to be visualized as an image, much as a video camera images the light field at a given point. Registering these audio images with visual images enables a new way to perform audio-visual scene analysis. Several examples are presented at http://goo.gl/igflH
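The "audio image" idea can be sketched with a simple narrowband delay-and-sum beamformer: steer the array toward a grid of plane-wave directions and record the output power at each one. The circular array geometry, frequency, and source direction below are assumptions for illustration only.

```python
# Sketch: narrowband delay-and-sum beamforming over an azimuth grid,
# producing a 1-D slice of an "audio image". Geometry is hypothetical.
import numpy as np

c, f = 343.0, 2000.0                 # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c                # wavenumber

# Hypothetical array: 8 mics on a 5 cm-radius circle in the x-y plane.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
mics = 0.05 * np.stack([np.cos(angles), np.sin(angles), np.zeros(8)], axis=1)

def steering(az, el):
    """Phase of a unit-amplitude plane wave from (az, el) at each mic."""
    d = np.array([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)])
    return np.exp(1j * k * (mics @ d))

# Simulate a plane wave arriving from azimuth 60 degrees, elevation 0.
src_az = np.deg2rad(60.0)
x = steering(src_az, 0.0)

# Scan azimuths; the beamformer power peaks at the source direction.
grid = np.deg2rad(np.arange(0, 360, 5))
power = np.array([np.abs(np.conj(steering(a, 0.0)) @ x) ** 2 for a in grid])
```

Scanning a full azimuth/elevation grid turns `power` into a 2-D map that can be overlaid on a camera image, which is the registration step the abstract describes.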
The captured sound can be used to recreate spatial sound scenes over headphones, allowing perception of the original scene. For reproduction, our approach incorporates individualized HRTFs (measured via a novel reciprocal technique), room modeling, and tracking.
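At its core, headphone reproduction filters the source signal through a pair of head-related impulse responses (HRIRs), one per ear. The sketch below uses toy placeholder HRIRs (a pure delay plus attenuation) rather than measured data, just to show the convolution step.

```python
# Sketch: binaural rendering by convolving a mono source with left/right
# HRIRs. The HRIRs here are toy placeholders, not measured responses.
import numpy as np

fs = 44100
t = np.arange(fs // 10) / fs
mono = np.sin(2 * np.pi * 440.0 * t)          # 100 ms, 440 Hz source

# Toy HRIRs for a source on the listener's left: the right ear hears a
# slightly delayed, attenuated copy (interaural time/level differences).
hrir_left = np.zeros(64)
hrir_left[0] = 1.0
hrir_right = np.zeros(64)
hrir_right[30] = 0.6                          # ~0.68 ms delay at 44.1 kHz

left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right], axis=1)    # (samples, 2) stereo stream
```

Individualized reproduction replaces the toy responses with HRIRs measured for a specific listener, and head tracking updates the filter pair as the head moves so the scene stays stable in world coordinates.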
(joint work with Adam O'Donovan, Dmitry Zotkin and Nail A. Gumerov)
Ramani Duraiswami is a member of the faculty of the Department of Computer Science at the University of Maryland, College Park. He has broad research interests in a number of areas, including scientific computing, spatial audio, machine learning, and computer vision. He has a Ph.D. from Johns Hopkins and a B.Tech. from IIT Bombay. See www.umiacs.umd.edu/~ramani for more.