Capturing, Visualizing and Recreating Spatial Sound (Ramani Duraiswami, Univ. of Maryland)

Date: Mon, 07/25/2011, 4:00pm - 5:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
Next Monday (July 25th at 4PM), Ramani Duraiswami from the University of Maryland will be summarizing the work he and his colleagues have been doing on 3D sound. Ramani's colleague Adam was the star of the 2010 Neuromorphic workshop with his 3D camera and microphone array. It was pretty cool to hear and see a real-time audio and visual spotlight. As far as I can tell, this is the ultimate 3D capture device. Ramani will be talking about how to capture 3D sound fields, visualize them, and recreate them. All good things.

And the following Wednesday (August 3rd at 4PM), DeLiang Wang from Ohio State will be talking about his work on auditory scene analysis. More details on this talk to follow next week. But put it on your calendar now.

But first...
    Who:    Ramani Duraiswami (Univ. of Maryland)
    What:    Capturing, Visualizing and Recreating Spatial Sound
    When:    Monday July 25th at 4PM
    Where:    CCRMA Seminar Room (Top Floor of the Knoll)

Bring your 3D sound perception system to CCRMA and we'll talk about the best ways to tickle it!

- Malcolm



Title: Capturing, Visualizing and Recreating Spatial Sound
Speaker: Ramani Duraiswami
Department of Computer Science, University of Maryland;
and VisiSonics Corporation

The sound field at a point contains information about the spatial origin of the sound, and humans use this information in making sense of the environment. The sound we hear is filtered by interaction with the environment and with our bodies. This process endows the sound with cues that the neural system decodes to perceive the world auditorily in three dimensions. To capture and reproduce this directional information, we need a spatial representation of the sound and a means to capture and manipulate sound in that representation. We have explored two classical representations of directional sound from mathematical physics: in terms of spherical wave functions and in terms of plane-wave expansions. We have developed spherical microphone arrays that allow the captured sound to be represented directly in these bases.
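
As a concrete (and much simplified) illustration of the spherical-wave-function representation, here is a minimal Python sketch of my own, not the speakers' code: it encodes spherical microphone array signals into spherical-harmonic coefficients by least squares. The function names are invented, and the mode-strength (radial) equalization a real rigid-sphere array requires is omitted.

    import numpy as np
    from scipy.special import sph_harm

    def sh_matrix(order, azimuth, colatitude):
        # Spherical-harmonic basis evaluated at the microphone directions.
        # Rows: microphones; columns: harmonics (n, m) up to the given order.
        cols = [sph_harm(m, n, azimuth, colatitude)   # scipy argument order: (m, n, az, col)
                for n in range(order + 1) for m in range(-n, n + 1)]
        return np.column_stack(cols)

    def encode(mic_signals, azimuth, colatitude, order=3):
        # Least-squares estimate of spherical-harmonic coefficients from the
        # array signals (mic_signals: num_mics x num_samples). Mode-strength
        # equalization for a rigid sphere is deliberately left out here.
        Y = sh_matrix(order, azimuth, colatitude)      # num_mics x (order+1)^2
        return np.linalg.pinv(Y) @ mic_signals         # (order+1)^2 x num_samples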

Plane-wave beamforming allows the sound field at a point to be visualized as an image, much as a video camera images the light field at a given point. Registering the audio images with visual images enables a new way to perform audio-visual scene analysis. Several examples are presented at http://goo.gl/igflH.
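
To show how such "audio imaging" can work in principle, here is a toy plane-wave beamformer, again my own sketch built on the encoder above rather than the actual VisiSonics pipeline: it steers a spherical-harmonic-domain beam over a grid of look directions and records the output power, which can then be displayed as an image and registered with a video frame.

    import numpy as np
    from scipy.special import sph_harm

    def acoustic_image(sh_coeffs, order, az_grid, col_grid):
        # Steer a simple (unweighted) plane-wave beamformer over a grid of
        # look directions and return the average output power per direction.
        # sh_coeffs: (order+1)^2 x num_samples, e.g. from encode() above.
        img = np.zeros((len(col_grid), len(az_grid)))
        for i, col in enumerate(col_grid):
            for j, az in enumerate(az_grid):
                w = np.array([np.conj(sph_harm(m, n, az, col))
                              for n in range(order + 1)
                              for m in range(-n, n + 1)])
                beam = w @ sh_coeffs                    # beam steered to (az, col)
                img[i, j] = np.mean(np.abs(beam) ** 2)  # power -> image pixel
        return img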

The captured sound can be used to recreate spatial sound scenes over headphones that allow the listener to perceive the original scene. For the reproduction, our approach incorporates individualized HRTFs (measured via a novel reciprocal technique), room modeling, and tracking.
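
On the reproduction side, the core operation is convolution of each directional source with the listener's head-related impulse responses; the hypothetical sketch below does only that, leaving out the individualized HRTF measurement, room modeling, and tracking that the actual system adds.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(sources, hrirs):
        # Render sources to headphones by convolving each one with the HRIR
        # pair for its (nearest measured) direction.
        # sources: list of (signal, direction_index); hrirs: num_dirs x 2 x ir_len.
        length = max(len(sig) for sig, _ in sources) + hrirs.shape[-1] - 1
        out = np.zeros((2, length))                     # rows: left ear, right ear
        for sig, d in sources:
            for ear in (0, 1):
                y = fftconvolve(sig, hrirs[d, ear])
                out[ear, :len(y)] += y
        return out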

(joint work with Adam O'Donovan, Dmitry Zotkin and Nail A. Gumerov)

Bio:
Ramani Duraiswami is a member of the faculty of the Department of Computer Science at the University of Maryland, College Park. He has broad research interests including scientific computing, spatial audio, machine learning, and computer vision. He holds a Ph.D. from Johns Hopkins University and a B.Tech. from IIT Bombay. See www.umiacs.umd.edu/~ramani for more.
FREE
Open to the Public