Michael J. Wilson - Music 220a Homework 3 submission




The data sources were taken from the Time Series Data Library:


The datasets were all the same length and similar in general character, so they mapped naturally to three different parts. I mapped each of them to an STK flute instrument. The frequencies and note-on velocities were shaped with the same enveloping technique as in the example envelope program.
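A minimal sketch of the data-to-flute mapping (leaving the enveloping out; the variable names, data value, and pitch range here are hypothetical stand-ins, not the exact code from the piece):

  // minimal sketch: map one normalized data point to an STK flute
  Flute f => dac;

  0.62 => float x;  // hypothetical stand-in for one normalized data point

  // quantize the value to a MIDI keynum, then convert to Hz
  36 + Math.round(x * 48) => float keynum;  // hypothetical pitch range
  Std.mtof(keynum) => f.freq;

  f.noteOn(x);      // note-on velocity from the same data value
  500::ms => now;
  f.noteOff(1.0);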

While playing with the program I noticed an interesting phenomenon when triggering noteOff and noteOn events at the sample rate: two of the datasets take on a percussive sort of beat, while the third remains more tonal. The result has an almost tribal feeling. I was quite pleased with the effect, although it does require that the program be run at a sample rate of 48kHz.
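In isolation, the retriggering looks something like this sketch, assuming a 48kHz session (started with, e.g., chuck --srate48000 hw3.ck, where the filename is hypothetical):

  // retrigger a flute note every sample: the attack transients fuse
  // into a percussive pulse rather than discrete notes
  Flute f => dac;
  Std.mtof(60) => f.freq;

  while (true)
  {
      f.noteOff(1.0);
      f.noteOn(0.8);
      1::samp => now;
  }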

The datasets are run through a total of two times over the course of the piece. On the first pass they play only partially, then each finishes in sequence; on the second pass they are run through again at an accelerated rate.
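In skeleton form, a pass at a given rate looks like the sketch below (one dataset shown; the data values and update intervals are placeholder assumptions, not the piece's actual values):

  // placeholder data; the piece actually reads three datasets
  [60.0, 62.0, 64.0, 65.0, 67.0] @=> float data[];
  Flute f => dac;

  // first pass at the base rate, second pass accelerated
  [100::ms, 25::ms] @=> dur rates[];
  for (0 => int pass; pass < rates.size(); pass++)
  {
      for (0 => int i; i < data.size(); i++)
      {
          Std.mtof(data[i]) => f.freq;
          f.noteOn(0.8);
          rates[pass] => now;
          f.noteOff(1.0);
      }
  }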

One other note - with all this processing it may be difficult to render the piece in realtime with the binaural processing enabled. I was able to obtain a recording by running chuck in silent mode, as sketched below.
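The recording idiom I mean is the standard WvOut tap, run offline with something like chuck --silent --srate48000 hw3.ck (the filenames here are hypothetical):

  // tap the final mix to a WAV file so a silent (non-realtime)
  // run still produces a recording
  dac => WvOut w => blackhole;
  "hw3-render.wav" => w.wavFilename;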


Answers to questions:

  1. The gain values are squared to exaggerate volume differences. Without squaring, the differences are hard to hear, since perceived loudness is not linear in amplitude (see the sketch after this list).
  2. MIDI keynums are used for the frequency values to quantize notes to a familiar Western scale. Because pitch perception is roughly logarithmic in frequency, mapping the data linearly to frequency makes the range of pitches sound compressed.
  3. If the update interval is decreased significantly, the piece plays for a proportionally shorter time; if it is increased significantly, it plays proportionally longer, since the total duration is the number of data points times the update interval.
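A small sketch of the first two answers, assuming data normalized to [0, 1] (the value and pitch range are hypothetical):

  0.5 => float x;  // hypothetical stand-in data value

  // question 1: squaring exaggerates amplitude differences
  x * x => float gain;

  // question 2: quantize to an integer MIDI keynum, then convert to Hz,
  // so pitches land on the equal-tempered Western scale
  Std.mtof(36 + Math.round(x * 48)) => float freq;

  <<< gain, freq >>>;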