Antje Ihlefeld - Predicting spatial audio quality for AR/VR

Date: Fri, 05/12/2023 - 10:30am - 12:00pm
Location: CCRMA Seminar Room
Event Type: Hearing Seminar
What determines the quality of the spatial sound field in an augmented or virtual reality system? These questions grow more important as AR/VR devices become more common. Part of the answer is modeling physical reality via head-related transfer functions (HRTFs) and room acoustics, and getting that right. But given a complicated spatial field, what shortcuts can we afford, knowing how our brains parse and understand it? What can we perceive, and how might we predict the user’s reaction to a (hyper)realistic audio environment? What do our brains do with all these spatial sounds?
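
For the curious, here is a minimal sketch of that HRTF idea in Python (assuming NumPy and SciPy are available; the HRIRs below are random placeholders standing in for measured responses, and render_binaural is an illustrative name, not a library function):

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(source, hrir_left, hrir_right):
        # Convolve the mono source with each ear's head-related impulse
        # response (HRIR); the result carries the direction-dependent
        # filtering of the listener's head, torso, and pinnae.
        left = fftconvolve(source, hrir_left)
        right = fftconvolve(source, hrir_right)
        return np.stack([left, right])

    # Toy usage: one second of noise and placeholder 256-tap HRIRs.
    # A real renderer would substitute measured or modeled HRIRs.
    fs = 48000
    source = np.random.randn(fs)
    hrir_left = np.random.randn(256) * 0.01
    hrir_right = np.random.randn(256) * 0.01
    binaural = render_binaural(source, hrir_left, hrir_right)

Room acoustics can be folded into the same operation by convolving with binaural room impulse responses rather than anechoic HRIRs.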

Antje Ihlefeld, who now leads auditory perception research at Meta Reality Labs in Redmond, will be talking about research she has done to better understand how central nervous system function affects the perception of spatial sounds. Her abstract is below.

Who: Antje Ihlefeld (Meta Reality Labs)
What: Predicting spatial audio quality for AR/VR
When: Fri, 05/12/2023 - 10:30am - 12:00pm
Where: CCRMA Seminar Room
Why: How do we make augmented/virtual reality better?

Come to CCRMA, where we’ll use a real environment to describe how we interact with a complicated 3D sound field.

— Malcolm


Predicting Audio Quality for VR and AR
Achieving a sense of "presence" is a key goal for virtual and augmented reality (VR/AR). To truly immerse people in a virtual environment, audio experiences need to be indistinguishable from reality. Achieving that level of audio quality is difficult, however, particularly when balancing computational cost against the need for in-person perceptual studies. One solution is to develop computational metrics that rely on objective measures to predict spatial audio quality. Current approaches tend to rely on single-task sensory processing thresholds to gauge the amount of sensory information received at the auditory periphery. While such metrics can be useful, they provide only a limited picture of audio quality.
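
As a crude illustration of what such a threshold-based metric might look like in code (a sketch only, not the speaker's method or any published standard; the 1 dB band-level just-noticeable difference is an assumed ballpark figure):

    import numpy as np

    def band_level_diff(reference, rendered, fs, n_bands=32, jnd_db=1.0):
        # Fraction of log-spaced frequency bands whose level deviates
        # from the reference by more than an assumed just-noticeable
        # difference. Zero would mean "peripherally indistinguishable"
        # under this (very simplified) model.
        n = min(len(reference), len(rendered))
        ref_power = np.abs(np.fft.rfft(reference[:n])) ** 2
        ren_power = np.abs(np.fft.rfft(rendered[:n])) ** 2
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        edges = np.geomspace(50.0, fs / 2, n_bands + 1)
        exceed = 0
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = (freqs >= lo) & (freqs < hi)
            if not band.any():
                continue
            ref_db = 10 * np.log10(ref_power[band].sum() + 1e-12)
            ren_db = 10 * np.log10(ren_power[band].sum() + 1e-12)
            if abs(ref_db - ren_db) > jnd_db:
                exceed += 1
        return exceed / n_bands

A metric like this captures only what reaches the auditory periphery; it says nothing about how central processing, expectations, or interaction shape what listeners actually notice, which is precisely the gap this talk addresses.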

To develop more comprehensive audio quality metrics, it is necessary to consider central auditory processing. The thesis of this talk is that new paradigms are needed to better understand how listeners build up expectations and interact with virtual environments, in order to achieve a sense of presence. I will present behavioral evidence on how the brain processes and interprets audio information, as well as how expectations and interactions with the environment can shape auditory thresholds. Together, these findings suggest that by considering central auditory processing when developing audio quality metrics, we can gain a more comprehensive understanding of how to arrive at audio quality that is indistinguishable from reality.

Bio: Antje Ihlefeld applies principles of auditory neuroscience to immersive AR/VR technology. Prior to joining Reality Labs at Meta as Tech Lead for Auditory Perception, she was the principal investigator of a federally funded lab working on restoring hearing in individuals with profound hearing loss, and a professor of biomedical engineering. Antje is passionate about driving technological advances through science and maintains close ties with higher education. She is a visiting professor at the Neuroscience Institute at Carnegie Mellon University.

Suggested reading: https://www.nature.com/articles/s41598-021-00328-0

FREE
Open to the Public