Ramani Duriaswami on Creating Scientifically Valid Spatial Audio for VR and AR (Special Thursday Seminar)

Date: 
Thu, 03/23/2017 - 4:00pm - 5:30pm
Location: 
CCRMA Seminar Room
Event Type: 
Hearing Seminar
The exploding popularity of augmented and virtual reality (AR and VR) has driven new interest in getting 3D sound right. This means rendering an accurate 3D sound field that fully immerses *anybody* who uses the system. Perhaps 3D audio is ready for the big time now.

I'm very happy that Ramani Duraiswami, a professor at the University of Maryland and founder of a company called VisiSonics, will be at CCRMA for a special Thursday afternoon Hearing Seminar to talk about the science of spatial sound and its applications to real-world AR and VR systems. He and his colleagues created the Audio Camera, which allows one to view the source and reflections of a real sound. It is described in this poster: http://www.nvidia.com/content/gtc/posters/22_odonovan_audio_camera.pdf.

Who: Ramani Duraiswami
What: Creating Scientifically Valid Spatial Audio for VR and AR: Theory, Tools and Workflows.
When: Thursday March 23 at 4PM <<< Note special time and date.
Where: CCRMA Seminar Room (Top floor of the Knoll at Stanford)
Why: Because 3D sound is about to be everywhere

Come hear what it takes to make 3D sound real (and profitable).

- Malcolm
P.S. A recent blog posting, with an excellent video, talks about what was done to make the 3D animation in "Who Framed Roger Rabbit" so realistic. I wonder what the audio equivalent might be. I'm not convinced that we are quite so sensitive to audio cues, but I'm mindful of the fact that many people thought the original phonograph recordings were perfect :-)

http://www.theverge.com/2017/2/24/14731236/who-framed-roger-rabbit-live-...





Title: Creating Scientifically Valid Spatial Audio for VR and AR: Theory, Tools and Workflows.
Speaker: Ramani Duraiswami
Affiliation: University of Maryland and Founder, VisiSonics Corporation, College Park, MD

Abstract:
The goal of VR and AR is to immerse the user in a created world by fooling the human perceptual system into perceiving rendered objects as real. This must be done without the brain experiencing fatigue, and accurate audio representation plays a crucial role in achieving this. Unlike vision, with its narrow foveated field of view, human hearing covers all directions in full 3D. Spatial audio systems must therefore provide realistic rendering of sound objects in full 3D to complement stereo visual rendering. We will describe several areas of our research, conducted initially at the University of Maryland over a decade and since then at VisiSonics, that led to the development of a robust 3D audio pipeline encompassing capture, measurement, mathematical modeling, rendering, and personalization. The talk will also demonstrate workflow solutions designed to enrich audio immersion for gaming, video post-production, and capture in VR/AR.
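For readers unfamiliar with the rendering stage mentioned in the abstract, the standard way to place a mono source at a 3D direction over headphones is binaural rendering: convolve the signal with the pair of head-related impulse responses (HRIRs) measured or modeled for that direction. The sketch below illustrates only this general idea, not the speaker's actual pipeline, and the 3-tap HRIRs are made-up placeholders (real HRIRs are hundreds of taps long and come from measurement or personalization).

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo by convolving it with
    the left/right HRIRs for the desired source direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape: (2, n_samples)

# Toy example: a short noise burst and fabricated 3-tap "HRIRs"
# that crudely mimic a source to the listener's left (louder left ear).
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
hrir_l = np.array([0.9, 0.3, 0.1])
hrir_r = np.array([0.4, 0.2, 0.05])
stereo = binaural_render(mono, hrir_l, hrir_r)
```

In practice the HRIRs change as the source or the listener's head moves, so real-time systems interpolate between measured directions and update the convolution continuously; personalization (fitting HRIRs to an individual's ears) is one of the topics the abstract mentions.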


Prof. Ramani Duraiswami is a professor in the Department of Computer Science and the Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He directs research at the Perceptual Interfaces and Reality Laboratory (PIRL) and has broad research interests in computer audition, computer vision, machine learning, and scientific computing. He is the founder of VisiSonics Corporation.

FREE
Open to the Public
