Compose a short 'musical radio play' (1-2 min) consisting of sounds spatialized into four channels, which can be listened to binaurally via your web page (with headphones). Place an HTML file hw2.html that links to your code and sound files in your /Library/Web/220a/hw2/ subdirectory. Work in quad, then convert the final version to ambisonic format and submit it as a web page that plays it through an ambisonic-to-binaural decoder. We'll use OmniTone for that. Make sure that your submission is timestamped on the Homework Factory.
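For the playback side, a minimal sketch of the ambisonic-to-binaural step using OmniTone's first-order renderer might look like the following (the element id and file wiring are placeholder assumptions, not a required structure for your page):

```javascript
// Sketch: decode a 4-channel first-order ambisonic file to binaural in the browser.
const context = new AudioContext();
const renderer = Omnitone.createFOARenderer(context);

// Hypothetical <audio> element holding your 4-channel ambisonic file.
const audioElement = document.getElementById('hw2-audio');
const source = context.createMediaElementSource(audioElement);

renderer.initialize().then(() => {
  source.connect(renderer.input);                // 4-channel ambisonic in
  renderer.output.connect(context.destination);  // binaural stereo out
  audioElement.play();
});
```

The renderer convolves the ambisonic channels with head-related transfer functions, so the result only sounds correct over headphones.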
Binaural recording uses stereo mics pointed outwards from inside your ears to capture as closely as possible the exact sound pressure waves entering your ear canals. A binaural radio play produced by the BBC, The Revenge, demonstrates the possibilities. The binaural technique captures filter (transfer function) differences caused by body parts shadowing and reflecting sounds arriving from various directions: the ear flaps (pinnae), head, shoulders, etc. Played back over headphones or earbuds, binaural preserves the interaural level difference (ILD) and interaural time difference (ITD) cues that are basic to sound localization.
Early work in binaural recording was accompanied by predictions that its superior imaging would create a world where everyone would eventually listen through headphones. Playing binaurally-encoded sounds over stereo loudspeakers results in neither good binaural nor good stereo, and that's one thing holding back wider use. For a position paper, see Jens Blauert's AES Heyser Lecture. He makes a provocative case for binaural as part of an increasingly realistic synthetic world.
One artist whose work leverages the medium is Janet Cardiff. She composes site-specific 3D audio narratives with a spine-tingling, binaurally-produced interplay of real and phantom presences. Her Telephone Call (SFMOMA, 2001) is a benchmark piece that opens up the possibilities of what you might expect to compose with mobile devices. The composition led participants through the gallery, each holding a camcorder on which they watched a pre-recorded self-guided tour that directed them. You'd turn a corner and someone would be there singing in the space (convincingly, so you could point to them), only they weren't there then but at some other point in time: past, alternative present, or future.
Our approach starts with composing spatialized sound for the quad speaker arrangement. Pick a short text, which might be a monologue, a group dialog, or whatever you want, but it should constitute some sort of a script (feel free to write it from scratch). You'll first record your own voice (with your new mic), then possibly combine it with other voices, depending on the text to be read.
Here are two tutorial examples of spatialization: one simulates rain falling throughout the stereo field, and the other gives you a template for 'moving' a sound from one point in stereo space to another. Walk through this tutorial. Its second example requires either this sound file (.aiff format) or the same thing (.wav format), depending on what you write in the .read("choose_whichever_name_here") call of the SndBuf object that you instantiate to read in your input soundfile. Replace "choose_whichever_name_here" with the file's proper path and name.
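As a rough ChucK sketch of that second idea (the file path and sweep duration below are placeholders, not the tutorial's actual values), a SndBuf source can be moved across the stereo field with a Pan2:

```chuck
// load a soundfile into a buffer and sweep it across the stereo field
SndBuf buf => Pan2 pan => dac;

// replace with the actual path and name of your .wav or .aiff file
"/Library/Web/220a/hw2/mysound.wav" => buf.read;
0 => buf.pos;   // start playback at the beginning of the buffer

// sweep the pan position from hard left (-1.0) to hard right (+1.0)
2::second => dur sweep;
now => time start;
while (now - start < sweep)
{
    ((now - start) / sweep) * 2.0 - 1.0 => pan.pan;
    10::ms => now;   // advance time in small steps
}
```

The same time-loop pattern generalizes to quad: instead of a single pan value, you'd compute a gain for each of the four dac channels as a function of the source's position.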