FFT-based Real-time Tracking
Music220A, November 19, 2013
I built this piece from six sections, described below.
For the free sections I synthesized sound based on an audio recording
I made at the coast of waves rushing through rocks at low tide. Per
FFT frame, I compared the RMS amplitude against a threshold to decide
when to trigger notes, and mapped the spectral centroid onto the note
pitch. The function that does this takes several parameters
controlling the threshold levels and the frequency shift of the
notes. By sporking that same function five times (rather than just
the two sections required) with different parameters, I obtained a
set of sections with some interesting synchrony properties. On fast
attacks of the sound (such as when a wave initially hits), all of the
sections trigger simultaneously. The frequency scaling is such that
each one plays notes a fifth higher than the previous, giving them a
chordal quality. As the wave progresses, however, the differing
thresholds cause the sections to trigger at slightly different times,
creating an arpeggio effect. And since the trigger times differ, the
base pitches differ too, so the intervals between the pitches deviate
slightly from exact fifths.
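The per-frame analysis described above can be sketched as follows. This is illustrative Python rather than the actual ChucK code; the function name, FFT size, threshold value, and `scale` parameter (standing in for the per-section frequency shift) are all assumptions:

```python
import cmath
import math

def track_frames(signal, sr, fft_size=256, threshold=0.05, scale=1.0):
    # Per FFT frame: an RMS gate decides whether to trigger a note,
    # and the spectral centroid of the frame sets the note's pitch.
    # Parameter names and default values are illustrative only.
    notes = []
    for start in range(0, len(signal) - fft_size + 1, fft_size):
        frame = signal[start:start + fft_size]
        rms = math.sqrt(sum(x * x for x in frame) / fft_size)
        if rms < threshold:                       # quiet frame: no trigger
            continue
        # naive DFT magnitudes for bins 0 .. fft_size//2
        mags = []
        for k in range(fft_size // 2 + 1):
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / fft_size)
                    for n in range(fft_size))
            mags.append(abs(s))
        freqs = [k * sr / fft_size for k in range(len(mags))]
        centroid = sum(f * m for f, m in zip(freqs, mags)) / (sum(mags) or 1.0)
        notes.append((start / sr, centroid * scale))  # (time s, pitch Hz)
    return notes
```

Running several copies with different `threshold` values and `scale` factors (e.g. successive fifths, 1.5x apart) reproduces the synchrony behavior: loud attacks clear every threshold at once, while quieter passages trip them at different frames.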
The sections are also placed at different locations in the binaural
space, and the original recording is mixed in as well.
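True binaural placement, as in the Binaural4.ck module, filters each source through head-related transfer functions. As a much simpler stand-in, the sketch below shows an equal-power pan, which only distributes energy between two channels but conveys the idea of assigning each section its own location:

```python
import math

def equal_power_pan(sample, azimuth):
    # azimuth in [-1, 1]: -1 is hard left, +1 is hard right.
    # Simplified stand-in for binaural placement: no HRTFs, just an
    # equal-power gain law so total energy is constant across positions.
    theta = (azimuth + 1) * math.pi / 4       # map [-1, 1] -> [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)
```

Each section would get a fixed azimuth, and the panned outputs are summed into the stereo mix.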
For the rhythm sections, I took the first 25 s of Pink Floyd's "Money"
and time-stretched it to 50 s in Audacity. Since the original material
has distinct left and right channels, I used the two channels of the
stretched file as separate inputs to the two rhythm-section ChucK
modules. These use the Shakers instrument, also driven by an FFT-based
tracker, and the two sections are spatially separated as well.
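The channel split itself takes only a few lines. The sketch below (Python, assuming a 16-bit stereo WAV; output file names match the module list later in this document, though the actual split may just as well have been done in Audacity) produces the two mono inputs:

```python
import wave

def split_stereo(path, left_path, right_path):
    # Split a 16-bit stereo WAV into two mono WAVs, one per channel.
    with wave.open(path, "rb") as src:
        assert src.getnchannels() == 2 and src.getsampwidth() == 2
        framerate = src.getframerate()
        frames = src.readframes(src.getnframes())
    left, right = bytearray(), bytearray()
    for i in range(0, len(frames), 4):        # 4 bytes per stereo frame
        left += frames[i:i + 2]               # first 2 bytes: left sample
        right += frames[i + 2:i + 4]          # next 2 bytes: right sample
    for out_path, data in ((left_path, left), (right_path, right)):
        with wave.open(out_path, "wb") as dst:
            dst.setnchannels(1)
            dst.setsampwidth(2)
            dst.setframerate(framerate)
            dst.writeframes(bytes(data))

# e.g. split_stereo("money-stretched.wav", "moneyL.wav", "moneyR.wav")
```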
For the scored sections I used the recording nature-12.wav, which I
manually labelled.
I divided this labelling into two sets: wind noises and bird sounds
(other than the continuous chirping in the background). I used each
set as a vocal script, compressed to one minute, in which I simply
read the labels at the corresponding times. These readings were
processed by two additional ChucK modules based on the wave tracker
described above. Some of the parameters were changed (such as the
number of voices, thresholds, and pitch shift), a band-pass filter
centered around 1 kHz was added to the input, and the instrument was
changed to a PercFlut. The notes were mapped so that the wind
readings produced lower-pitched notes, whereas the bird readings
produced higher-pitched ones.
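A band-pass biquad of the kind added to the input can be sketched as follows. This is a Python illustration, not the ChucK code; the coefficients follow the standard RBJ audio-EQ-cookbook formulas, and the Q value is an assumption since the write-up only specifies the roughly 1 kHz center:

```python
import math

def bandpass_coeffs(f0, fs, q=1.0):
    # RBJ audio-EQ-cookbook band-pass biquad (0 dB peak gain at f0).
    # f0 is the center frequency; the write-up uses roughly 1 kHz.
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)
    a = (1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0)
    return b, a

def biquad(signal, b, a):
    # Direct Form I filtering.
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in signal:
        out = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(out)
        x1, x2, y1, y2 = x, x1, out, y1
    return y
```

Centering the filter near 1 kHz emphasizes the speech band of the read labels before they reach the tracker.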
The modules in this program are:
- The free section generator, which creates a sound output programmatically using waves.wav as input. Its output (with the input file on the left channel) is free.wav.
- A scored-section generator that takes the wind transcript as input. Its output (with the input file on the left channel) is scored1.wav.
- A scored-section generator that takes the birds transcript as input. Its output is scored2.wav.
- A rhythm-section generator that takes moneyL.wav as input. Its output is rhythm1.wav.
- A rhythm-section generator that takes moneyR.wav as input. Its output is rhythm2.wav.
- The parameter smoother, as provided, but broken out into a separate file.
- A binaural mapper from four channels to binaural stereo.
- A stereo recorder.
The entire score can be constructed by running:
chuck Smooth.ck Binaural4.ck wavesTracker.ck:1 Scored1.ck Scored2.ck Rhythm1.ck Rhythm2.ck Record.ck:hw5-final.wav