Homework 5: Psychoacoustics and Panning

due on 11/10 (Tue) 11:30am in the Homework Factory and in your /Library/220a/hw5/ directory

Overview

In this lab, you will complete some exercises about different kinds of panning. Your deliverables are the .wav files of your sounds from the exercises as well as the code that created them. Place these files in your /Library/220a/hw5/ directory.

Compose a short 'musical radio play' (1-2 min) consisting of sounds spatialized into at least two channels. "Submission" entails placing an HTML file hw5.html that links to your code and .wav file in your /Library/Web/220a/ subdirectory. Make sure that your submission is timestamped on the Homework Factory.

Lab (40 points)

In this lab, you will implement two examples of spatialization - one will simulate rain falling throughout the stereo field, and the other will give you a template by which you can 'move' a sound from one point in stereo space to another. The second example will require you to use the sound file car-mono.aiff.

Walk through the following PDF: hw5-panning.pdf, and, as in previous homeworks, submit the .wav outputs and ChucK scripts you wrote.
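
If it helps to see the idea in miniature, below is one possible shape of the moving-sound example. This is a minimal sketch written from scratch rather than the template code; the filename, step count, and timing values are placeholders.

    // sketch: sweep a mono sound from left to right using Pan2
    // (assumes car-mono.aiff is in the working directory; values are placeholders)
    SndBuf buf => Pan2 pan => dac;
    "car-mono.aiff" => buf.read;
    0 => buf.pos;

    buf.samples()::samp => dur length;   // total playback time
    100 => int steps;                    // number of pan updates
    length / steps => dur step;          // time between pan updates

    for (0 => int i; i < steps; i++)
    {
        // linear sweep from -1 (hard left) to +1 (hard right)
        -1.0 + 2.0 * i / (steps - 1) => pan.pan;
        step => now;
    }

The rain example can follow a similar pattern, with each drop given a random pan position (e.g. Math.random2f(-1.0, 1.0) => pan.pan) instead of a swept one.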

Composition - Spatial Sound (✓-,✓,✓+)

Create a short, potentially musical radio play (i.e. a play that only uses audio) with recorded text and various sound effects. Shoot for a duration of about 2 minutes. You can record using the mic you built (or a different mic - though please not your computer mic!) and Audacity or a field recorder. Also, freesound.org is a great place to find sounds. The file you turn in will be in stereo, intended for headphones, though you will actually be working with four channels (see the final point under 'requirements').

Requirements

  • There must be at least three sound sources that have some aspect of localization to them. These can be more atmospheric (e.g. wind that moves throughout the entirety of the radio play), or more directly related to smaller objects of interest (e.g. voices that speak from different points in space, an instrument that has a particular 'home' or direction in space).
  • One of the sound sources must move. This can have direct meaning in the play (e.g. the lab example of the car moving from the left to the right channels, or perhaps a bee or an insect that moves), or the meaning can be more abstract (e.g. some melodic string that oscillates between the left and the right channels).
  • While you can use the lab template code, you cannot use the car and the rain directly, unless you extend them somehow and 'make them your own'.
  • You should conceptualize your radio play/creation in four channels - as if your listener had two speakers in front and two behind them, set up in a square (i.e. a quad setup). You can thus spatialize sound anywhere in the horizontal plane around the listener. We will simulate binaural audio to mix these spatialized channels down to stereo, so that you can listen to your spatialized radio play over headphones. For technical instructions on how to use these four channels and then mix them down to stereo, follow these instructions: how-binaural-mixdown.html. (A brief four-channel routing sketch follows this list.)
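
As a starting point for thinking in four channels, the fragment below shows one way to route a mono source to four outputs in ChucK. It is only a sketch: it assumes ChucK is running with four output channels (e.g. chuck --channels:4), the filename and gain weights are placeholders, and the channel ordering shown is just one common convention - follow how-binaural-mixdown.html for the actual course setup and mixdown.

    // sketch: placing a mono source in a quad (four-channel) field
    // assumes ChucK was started with four output channels, e.g.: chuck --channels:4 score.ck
    // channel order (0 = front-left, 1 = front-right, 2 = rear-left, 3 = rear-right)
    // is an assumption; see how-binaural-mixdown.html for the course setup
    SndBuf buf;
    "my-sound.wav" => buf.read;          // placeholder filename

    Gain fl => dac.chan(0);
    Gain fr => dac.chan(1);
    Gain rl => dac.chan(2);
    Gain rr => dac.chan(3);
    buf => fl; buf => fr; buf => rl; buf => rr;

    // weight the four sends to place the sound, e.g. toward the front-left
    0.8 => fl.gain;  0.3 => fr.gain;
    0.2 => rl.gain;  0.1 => rr.gain;

    0 => buf.pos;
    buf.samples()::samp => now;          // let the file play through

Changing the four gain weights over time moves the source around the listener.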

Considerations

  • If you like, you can make this a purely literal radio play - with the tracks all recorded using a microphone (the one you can build in the MaxLab would be a great choice), with a focus on dialogue and the more literal elements of a play (e.g. closing doors, etc.). However, you could also 'musicify' it (e.g. a conversation between Peter Pan and Tinkerbell, where Tinkerbell is a sound/melody you synthesize - perhaps she flies around in the sound space! Or perhaps you're simulating a rehearsal, where a director/conductor takes the center of the sound stage and rehearses a couple of instruments on different sides of the stage). The creativity is up to you.
  • If you create a function that plays a sound file and spork it at different times during your radio play, you can easily add many sound effects and elements of sonic interest to your creation.
  • Techniques you can use include localization with interaural intensity difference (IID) and interaural time difference (ITD), Schroeder-style reverb (e.g., NRev), and processing for time and pitch transposition (e.g., SndBuf rate and PitShift). A small ChucK sketch illustrating a few of these follows this list.
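
To make the last two points concrete, here is a minimal, hypothetical helper you could spork: it plays a file at a given pan position and playback rate, with a touch of NRev. The filenames, start times, and parameter values are placeholders.

    // sketch: a sporkable sound-effect player with a pan position and playback rate
    // (filenames, start times, and parameter values are placeholders)
    fun void playFX( string filename, float pan, float rate, dur start )
    {
        start => now;                         // wait until this effect should begin
        SndBuf buf => NRev rev => Pan2 p => dac;
        0.05 => rev.mix;                      // a touch of Schroeder-style reverb
        filename => buf.read;
        pan => p.pan;                         // -1 = left, +1 = right (intensity panning)
        rate => buf.rate;                     // a rate != 1 also transposes the pitch
        0 => buf.pos;
        (buf.samples() / rate)::samp => now;  // hold the shred open until the file ends
    }

    // schedule a few effects at different times and places
    spork ~ playFX( "door.wav",  -0.7, 1.0, 0::second );
    spork ~ playFX( "bee.wav",    0.5, 1.2, 3::second );
    spork ~ playFX( "voice.wav",  0.0, 1.0, 6::second );

    // keep the parent shred alive long enough for the children to finish
    10::second => now;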

Extra Credit (10 points)

Recreate either the panning examples or your composition using the Web Audio API.

Want to use more channels? Extend your composition so that it includes multiple moving sound sources, each moving among more than two channels. At submission, please let us know that you attempted this option.