-- Setup: do this only once; the path setting is sticky --
Navigate to your pd-lab directory,
start JACK and Pd in the usual way; then:
1) Go to the Pd File menu, choose Path and type in:
2) then click the "Apply" button
3) and click "Save all settings"
4) click "OK"
Test out streaming.pd from the usual Pd directory. It starts computing audio when opened, but you need to bring up the dB level in the output object manually.
The example is an endlessly looping series of four ascending pitches with cycling timbres. The speed of the repeating notes begins at one note every 400 msec. Listen to the pattern for a while, then slowly increase the speed by moving the rhythm slider to a smaller (faster) value. At what value does the texture sound like it splits into 3 descending voices (i.e., musical lines or parts)?
A visual analogy to what you're hearing in the Pd example (in the diagram: shapes = timbres, colors = pitches):
Isorhythm is an old Western musical technique in which complex phasing plays out between cycling patterns in different musical dimensions. In the medieval version (approx. 600 years ago), melodic cycles of one length were set against rhythmic cycles of another length (measured in number of notes per cycle). The combination generates a compound pattern with an even longer repeated cycle, one whose period is the least common multiple of the two cycle lengths, as in the diagram above, where the 12 objects constitute one cycle of the whole.
The effect in the Pd streaming example is to split a simple linear pattern into what sound like simultaneous layers when played fast enough, just as you see the color pattern group into the cones, then the donuts, then the balls when the items above are closer together. In the Pd example, FM synthesis generates 3 timbres phased against 4 pitches. Changing the tempo increases or decreases the ability to group by timbral similarity: temporal proximity pulls the tones into different streams, or musical voices. Pseudo-polyphony is another way to describe it.
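The cycle arithmetic behind this can be sketched outside Pd. This is a minimal Python analog (the timbre labels and keynums are assumptions, not values from streaming.pd): pairing a 3-cycle of timbres against a 4-cycle of pitches produces a compound pattern that repeats only every lcm(3, 4) = 12 notes.

```python
from itertools import cycle, islice
from math import lcm

# Hypothetical stand-ins for the patch's values: 3 FM timbres
# phased against 4 ascending scale-step pitches (MIDI keynums).
timbres = ["fm-A", "fm-B", "fm-C"]
pitches = [60, 62, 64, 65]

# Pair the two cycles note by note, as streaming.pd does.
period = lcm(len(timbres), len(pitches))  # full isorhythmic cycle length
notes = list(islice(zip(cycle(timbres), cycle(pitches)), period + 2))

print(period)          # -> 12
print(notes[0])        # -> ('fm-A', 60)
print(notes[period])   # -> ('fm-A', 60): the pairing repeats exactly
```

Only after 12 notes does the same timbre land on the same pitch again, which is why the ear has time to regroup the tones by timbre before the literal pattern ever repeats.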
The timbres of the FM tones alternate between:
inharmonic (multiple partials, ambiguous pitch)
What pulls the tones apart into groups is the salience of the timbral differences. Other differences can also produce groupings that dominate over the ascending pitch progression, including register, other timbral qualities (attack, vibrato, etc.), and even loudness or spatial location.
The ear is doing its best to identify sound sources and construct plausible streams of events from possible sources, but that construction can be illusory. Another Western musical use of this streaming illusion and grouping by similarity occurs in the Baroque (250-300 years ago), in Bach's unaccompanied solo string music (the violin Partitas and cello Suites). These pieces often sound like 3 or 4 performers at once by virtue of fast arpeggiation (broken chords) up and down in register, often across the strings. The illusion comes from registral association: the ear constructs one stream from the high pitches, which alternate with notes played in the middle and bass registers. Played fast, it is like having interleaved soprano, alto, and bass players.
The assignment is to create two examples of the illusion. Turn in a soundfile, hw3.wav, in which you've captured the Pd output using Audacity (or Ardour; see the recording tutorial if you want to try a new audio editor/tracking program). It will have two sections of about 15 seconds each. During each 15-second section, sweep the tempo from slow (no streaming illusion) to fast (streaming illusion).
To do this synthesis for the first part, you will need to have tried out StifKarpMetro.pd, and you'll want to review FM and additive synthesis and Pd patches introduced in Pd Lab#1: Pd-sines.pdf & Pd-complexes.pdf.
1) Make the first section an illusion based on register by modifying the streaming.pd example in two ways. Register is gross pitch range, like bass, alto, or soprano. First, replace the FM synthesis objects with the StifKarp object to play a plucked-string sound controlled by MIDI pitch (semitone keynums). Then, instead of cycling the FM parameters in a 3-cycle, create a register change in a 3-cycle (high, medium, low pitch ranges). Do this by adding multiples of 12 to the keynum for each octave shift up (or subtracting 12 to go the other way). You will see that the unaltered keynums are stored as the four elements of a table (a zero-based array). The MIDI pitches are converted to frequency with the mtof object; the octave-shift calculation must occur before the conversion to frequency. To create the register-based illusion, create a new array containing 3 octave displacements (e.g., 12, 24, -12) that will cycle against the 4 pitches. As in streaming.pd, keep the four pitch classes themselves close together, e.g., adjacent scale steps. To reiterate: an octave = 12 semitones, and middle C is keynum 60.
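The arithmetic the patch has to do can be checked on paper first. This is a sketch in Python, not Pd: the keynums and displacement array are example values, and `mtof` below implements the same standard MIDI-to-frequency formula as Pd's [mtof] object. Note that the octave shift is added to the keynum before conversion, as the step above requires.

```python
from itertools import cycle, islice

def mtof(keynum):
    """MIDI keynum -> frequency in Hz; the formula behind Pd's [mtof]."""
    return 440.0 * 2 ** ((keynum - 69) / 12)

# Four close pitch classes (adjacent scale steps), as in streaming.pd.
keynums = [60, 62, 64, 65]        # example values; middle C is keynum 60
octave_shifts = [12, 24, -12]     # 3-cycle of register displacements

# Apply the shift BEFORE converting to frequency.
pairs = islice(zip(cycle(keynums), cycle(octave_shifts)), 12)
shifted = [k + s for k, s in pairs]
freqs = [round(mtof(k), 2) for k in shifted]

print(shifted[:4])   # -> [72, 86, 52, 77]: the 3-vs-4 phasing in action
print(freqs[0])      # -> 523.25 (keynum 72, an octave above middle C)
```

The printed keynums show why the illusion works: the raw pitches stay within a sixth of each other, but the cycling displacements scatter successive notes across three registers.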
To do this synthesis for the second part, you will need to have tried out and understood the elements of envArrayBowed.pd and envArrayClarinet.pd.
2) The second section changes back to grouping by timbre instead of register. Where streaming.pd used FM timbres, use this group of 3 instruments from the Stk physical models: StifKarp, Bowed, and Clarinet. Transient instruments like StifKarp are easy to use because of their natural envelopes. Non-transient, sustainable instruments (winds, strings, brass, and so on) require that a time-varying envelope be applied to the breath pressure or bow velocity. See envArrayBowed.pd and envArrayClarinet.pd for working examples. The envelopes stored in array objects are saved with the file, so note that if you change the array contents and save, you will lose the previous envelope. For this assignment, you can keep them as is. Also, Pd arrays must have unique names; since each of the 2 envelope sets contains 2 arrays, you need 4 unique array names.
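To make the envelope idea concrete, here is a rough Python analog of what a Pd envelope array holds: a list of amplitude samples interpolated from a few breakpoints, which the patch would apply to breath pressure (Clarinet) or bow velocity (Bowed). The breakpoint shape below is an assumed attack/sustain/release contour, not the contents of envArrayBowed.pd or envArrayClarinet.pd.

```python
def breakpoint_env(points, n):
    """Linearly interpolate (time, level) breakpoints into n samples,
    roughly what a Pd envelope array stores. Times run 0..1."""
    out = []
    for i in range(n):
        t = i / (n - 1)
        # find the segment containing t and interpolate within it
        for (t0, v0), (t1, v1) in zip(points, points[1:]):
            if t0 <= t <= t1:
                frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                out.append(v0 + frac * (v1 - v0))
                break
    return out

# Assumed shape: quick attack, gently decaying sustain, release to zero.
env = breakpoint_env([(0.0, 0.0), (0.1, 1.0), (0.7, 0.8), (1.0, 0.0)], 101)
print(env[0], env[10], env[100])  # -> 0.0 1.0 0.0
```

A sustaining voice reads through an array like this over the duration of each note; without it, Bowed and Clarinet either never speak or never stop.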
Before concocting the program, think about the major areas of spaghetti that you need to wire up: a metronome/rhythm/cycle/keynum-and-frequency control, a StifKarp voice, a Bowed voice with envelopes, and a Clarinet voice with envelopes. You will bang each voice in alternation.
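The control flow of that wiring plan can be mocked up in a few lines. This is a text-mode sketch only (the keynums are example values, and the strings stand in for bangs to the three Stk voices): a metronome tick advances the 4-note pitch cycle while the voices are banged in alternation, giving the same 3-against-4 phasing as before, now carried by instrument identity.

```python
from itertools import cycle

# Assumed 4-note pitch cycle and the three Stk voices from the assignment.
keynums = cycle([60, 62, 64, 65])
voices = cycle(["StifKarp", "Bowed", "Clarinet"])

# Each metronome tick: take the next pitch and bang the next voice.
events = [(next(voices), next(keynums)) for _ in range(6)]
for voice, key in events:
    print(f"bang {voice} with keynum {key}")
```

In the actual patch, the equivalent of this loop is a [metro] driving counters that index the pitch table and route a bang to one of the three voices.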