Our purpose is to develop a methodology for determining the maximum number of individual musical tracks that a listener can observe moving rhythmically in space. This research will eventually lead to the creation of an auditory canvas that we can distort and perturb by stretching, bending, or even ripping. The tools developed in this experiment are intended for the development of Astro-Sonification. Data will be collected through a GUI, and subjects will learn the tools needed to accomplish the tasks.
Room Configuration and Hardware
- The experiments will be conducted in the semi-anechoic listening room at CCRMA, Stanford University.
- 8 speakers in a horizontal ring, equally spaced at 45-degree intervals (0, 45, 90, etc.)
- 4 speakers above and 4 below the horizontal plane, placed spherically at (azimuth, elevation) pairs of (22.5, 38), (-22.5, 38), etc.
- Tascam 3200 16-channel mixer
- Linux box
- The programming environments for this project will be Pd and Max/MSP (http://crca.ucsd.edu/~msp/software.html)
- Spatialization will be accomplished through Vector Base Amplitude Panning (VBAP) (http://www.acoustics.hut.fi/~ville/)
- GUI created through GEM (http://gem.iem.at/)
- Physical Modeling for PD (http://drpichon.free.fr/pmpd/)
We will create a simple example in which a sound moves around the audio sphere under control of the graphical user interface, in order to check consistency. For our sound sources we will mainly use short bursts, claps, snaps, etc.
- We will use a modification of pmpd/04_3D_exemple.pd, which models a mass able to roll freely around the surface of a sphere
- We will map the location of the mass to the spherical position controls of VBAP
- We will test for the accuracy of observed position along different regions of the sphere
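The mapping step above can be sketched as a coordinate conversion: the pmpd patch reports the mass position as Cartesian (x, y, z), while VBAP takes an azimuth/elevation pair in degrees. The sketch below shows one plausible conversion; the axis conventions (azimuth 0 along +x, elevation positive toward +z) are an assumption, and in the actual patch these values would be sent as messages to the `vbap` object rather than computed in Python.

```python
import math

def xyz_to_azel(x, y, z):
    """Convert a Cartesian mass position (e.g. from the pmpd 3-D patch)
    to the (azimuth, elevation) pair in degrees expected by VBAP.

    Assumed conventions: azimuth 0 along +x, increasing counter-clockwise;
    elevation 0 in the horizontal plane, +90 straight up (+z).
    """
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    return azimuth, elevation
```

For example, a mass directly ahead on the horizontal plane maps to (0, 0), one to the listener's left maps to (90, 0), and one directly overhead maps to elevation 90 regardless of azimuth.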