Ambisonic Theater

== Project Summary ==
 
  written by Jason Sadural (jsadural@ccrma.stanford.edu)
  comments and suggestions are always welcome
 
In creating Ambisonic compositions, the audio exists as a data set represented in a 3D space.  The advantage of this architecture is that a piece can be transported to arbitrary venues and rendered relatively quickly for various speaker configurations while maintaining its intended spatial properties.  Ambisonic soundfield microphones make it possible to record and recreate a "soundfield" in XYZW form (W being the pressure component).  The wavefronts produced by multi-channel Ambisonic playback retain the recorded aural cues, including room size and reverberant properties.  This experiment requires a platform in which a 3D graphical representation and Ambisonic composition tools are rendered simultaneously, so that the user can choose to spatialize visually or algorithmically.  The goal is to convincingly create virtual sources that interact with actual recorded soundfields, and simultaneously have the virtual image interact with the actual HD/IMAX recorded image.
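
As a rough sketch of the encode/decode path described above, the snippet below places a mono virtual source into the same first-order XYZW (B-format) representation used by the soundfield recordings, then decodes it for an arbitrary horizontal speaker layout.  It is only illustrative: the function names, the NumPy dependency, the traditional 1/sqrt(2) scaling of W, and the simple virtual-cardioid decode are assumptions for the sake of the example, not part of the project itself.

 import numpy as np
 
 def encode_bformat(mono, azimuth, elevation):
     """Encode a mono signal into first-order B-format (W, X, Y, Z).
 
     Traditional Furse-Malham convention: W carries the pressure signal
     scaled by 1/sqrt(2); angles are in radians, azimuth measured
     counter-clockwise from straight ahead.
     """
     w = mono / np.sqrt(2.0)
     x = mono * np.cos(azimuth) * np.cos(elevation)
     y = mono * np.sin(azimuth) * np.cos(elevation)
     z = mono * np.sin(elevation)
     return np.stack([w, x, y, z])
 
 def decode_bformat(bformat, spk_azimuths, spk_elevations=None):
     """Decode W/X/Y/Z to speaker feeds using virtual cardioid
     microphones aimed at each speaker -- the simplest possible decode,
     not a psychoacoustically optimised one."""
     w, x, y, z = bformat
     if spk_elevations is None:
         spk_elevations = np.zeros_like(spk_azimuths)
     feeds = []
     for az, el in zip(spk_azimuths, spk_elevations):
         feeds.append(0.5 * (np.sqrt(2.0) * w
                             + x * np.cos(az) * np.cos(el)
                             + y * np.sin(az) * np.cos(el)
                             + z * np.sin(el)))
     return np.array(feeds)
 
 # Example: a 1 kHz tone placed 45 degrees to the left, rendered over a
 # square of four speakers at +/-45 and +/-135 degrees.
 fs = 48000
 t = np.arange(fs) / fs
 tone = np.sin(2 * np.pi * 1000.0 * t)
 b = encode_bformat(tone, azimuth=np.radians(45), elevation=0.0)
 square = np.radians([45, 135, -135, -45])
 speaker_feeds = decode_bformat(b, square)   # shape: (4, fs)

A decoder for the theater itself would use a psychoacoustically tuned (e.g. dual-band) decode matched to the actual speaker positions, but a simple projection like the one above is enough to audition virtual sources against recorded B-format material.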
 