This week I did some reading, organizing, getting oriented, and figuring out how to articulate what it is I want to do.
I will attempt to do the following:
- Build a performance system that can play algorithmically generated variations based on “hard-coded” music.
- Develop synthesized sound, sampled sound, and musical content to perform with the system.
More details about this proposal are below.
Performance System Overview
I would like to build a performance system that gives me tight control over the material being played when I want it, while also letting me trigger material generated in real time from hard-coded variations.
Based on my background and preliminary research, I can identify a spectrum of applicable systems. On one end there is what Pachet calls the “cybernetic jam fantasy” 1 in which there is a generalized system that can listen to any input content and generate variations or improvisations from it in real-time. On the other end of the spectrum one can simply write all of the variations manually and the system will merely choose between them at random.
I don’t want my system to sit at either of these extremes, but rather somewhere in the middle. I do not want to write an excessive number of manual variations, but I also do not want to build a fully generalized system.
Performance System Approach
Markov Analysis Overview
My most recent algorithmic music endeavor was an exploration of applying first and second order Markov chains to melody and rhythm generation. The most impressive result was the melody generated by the second-order Markov chain, but the work involved in creating the 16 x 8 probability table was tedious and mundane.
I would like my system to analyze manually written musical variations in order to construct a probability table (as described above) that the system will then use to generate rhythms and melodies.
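To make the analysis step concrete, here is a minimal Python sketch of building a second-order probability table from hand-written variations. The real system would live in SuperCollider and Max for Live; the melody representation (plain MIDI pitch lists) and all names here are my own placeholders.

```python
from collections import defaultdict, Counter

def build_second_order_table(sequences):
    """Map each two-note context (a, b) to a count of the notes that followed it."""
    table = defaultdict(Counter)
    for seq in sequences:
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            table[(a, b)][c] += 1
    return table

def normalize(counter):
    """Turn raw counts into a probability distribution."""
    total = sum(counter.values())
    return {state: n / total for state, n in counter.items()}

# Example: two hand-written melodic variations as MIDI pitch lists.
variations = [
    [60, 62, 64, 62, 60, 62, 64, 65],
    [60, 62, 64, 65, 64, 62, 60, 62],
]
table = build_second_order_table(variations)
probs = normalize(table[(60, 62)])  # distribution of notes following C4, D4
```

The point is that the tedious table-building I did by hand falls out of a few lines of counting once the variations exist as data.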
Integration with Ableton Live
Starting from a top-down perspective…
Ideally, the Markov analysis would occur over clips in Ableton Live, and optionally over more material in a systematically named directory. I do not know if this is feasible in practice, but it seems possible in theory. The “Max for Live” API includes a call to grab the MIDI contents of a clip, which could then be sent to the analysis module. There would also be a special “clip” in Ableton that, when triggered, simply tells the generative playback module to begin playing a generated variation.
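As a rough sketch of the hand-off, suppose the clip contents arrive as text messages of the form `note pitch time duration velocity muted`. That format is my assumption about what the Live API's note call emits and would need to be verified against the actual API; the parser below is just a placeholder for the bridge into the analysis module.

```python
def parse_live_notes(raw):
    """Parse 'note pitch time duration velocity muted' messages into dicts.
    The message format is an assumption about the Live API, not verified."""
    notes = []
    for line in raw.strip().splitlines():
        parts = line.split()
        if parts and parts[0] == "note":
            pitch, time, dur, vel, muted = parts[1:6]
            notes.append({
                "pitch": int(pitch),
                "time": float(time),
                "duration": float(dur),
                "velocity": int(vel),
                "muted": bool(int(muted)),
            })
    return notes

# Hypothetical dump of a two-note clip.
raw = """\
notes 2
note 60 0.0 0.5 100 0
note 64 0.5 0.5 90 0
done"""
parsed = parse_live_notes(raw)
```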
Generative Playback Module
To generate material based on the results of the Markov analysis, I will be using SuperCollider. There are already classes for Nth order Markov models, and since I will be doing most of the synthesis in SuperCollider as well, this seems most reasonable to me.
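Generation is then a weighted walk over the table. Here is a Python sketch of the idea (the actual implementation would use SuperCollider's existing Markov classes); the dead-end fallback of jumping to a random known context is my own simplification.

```python
import random

def generate(table, seed, length, rng=None):
    """Walk a second-order Markov table, starting from a two-note seed.
    Falls back to a random known context when the current one was never observed."""
    rng = rng or random.Random(0)
    out = list(seed)
    contexts = list(table.keys())
    while len(out) < length:
        ctx = (out[-2], out[-1])
        if ctx not in table:
            ctx = rng.choice(contexts)  # naive handling of unseen contexts
        choices = table[ctx]
        out.append(rng.choices(list(choices), weights=choices.values())[0])
    return out

# Hypothetical table: e.g. after (60, 62), play 64 three times out of four.
table = {(60, 62): {64: 3, 65: 1}, (62, 64): {62: 1, 65: 2}, (62, 65): {60: 1}}
melody = generate(table, seed=[60, 62], length=8)
```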
No matter the technology, I imagine the algorithm will gather the following heuristics:
- Note duration (2nd order Markov model)
- Note frequency (2nd order Markov model)
- Note density (constant)
  - This will involve defining a granularity for this set of variations; if the granularity is 1/32, the system will determine how many notes start on 1/32 grid points, and this will be the density.
  - This will be used in the playback module to determine how often to trigger a note, because a decision will be made at each 1/32nd note.
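The density heuristic above could be computed like this (Python for illustration; in 4/4 a 1/32 note is 1/8 of a beat, and onset times are assumed to be in beats):

```python
def note_density(onsets, bar_length=4.0, granularity=1 / 8):
    """Fraction of grid points at which a note starts.
    onsets and bar_length are in beats; granularity=1/8 beat = a 1/32 note in 4/4."""
    grid = [i * granularity for i in range(int(bar_length / granularity))]
    eps = 1e-6  # tolerance for floating-point onset times
    hits = sum(any(abs(o - g) < eps for o in onsets) for g in grid)
    return hits / len(grid)

# Four on-the-beat onsets in one 4/4 bar of 32 grid points -> density 4/32.
density = note_density([0.0, 1.0, 2.0, 3.0])
```

The playback module would then use this value as the probability of triggering a note at each 1/32 grid point.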
Thoughts on Phrase Length
This does not take phrase length into account, so the note density would be evenly distributed across a measure, which may be less than ideal. Should the system determine phrase length, and if so, how? Perhaps phrase length is not required, since the entire process happens at a granularity chosen by the composer (the granularity of a single variation “phrase”).
Ability to Capture
An additional feature of this system, but an important one, will be the ability to save a variation once it is generated by the playback module. This will involve saving a history of generated components, and having some sort of button which will allow the playback module to “flag” items in this history for later review.
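A minimal sketch of that capture mechanism (all names hypothetical): a bounded history of generated variations plus a flag operation that the “save” button would invoke.

```python
from collections import deque

class VariationHistory:
    """Ring buffer of generated variations, with flagging for later review."""

    def __init__(self, maxlen=64):
        self.history = deque(maxlen=maxlen)  # old variations fall off the end
        self.flagged = []                    # variations kept for later review

    def record(self, variation):
        self.history.append(variation)

    def flag_last(self):
        """Called by the 'save' button: keep the most recent variation."""
        if self.history:
            self.flagged.append(self.history[-1])

h = VariationHistory()
h.record([60, 62, 64])
h.record([60, 64, 65])
h.flag_last()
```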
- How many clips will be needed to generate a desirable level of variation?
- How long will the clips need to be for the variations to sound unique enough?
  - Given the heuristic nature of the Markov analysis, maybe this doesn’t matter?
Music and Sounds
This week I learned a bit more about chord progression structures. My lack of formal training leaves me very inexperienced in this area, but I found this page that simply lists the diatonic chords for a major / minor key. This is exactly the information I needed to start building a chord progression behind a melody I found in my bin of recorded riffs.
I started working in SuperCollider on a drum sampler that automatically maps velocity to the number of samples in a directory. I also started writing a “rhodes” type patch in an attempt to emulate the tone found on this Telefon Tel Aviv track. It is close enough that I feel comfortable working with it further and refining the timbre later.
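The velocity mapping in the drum sampler amounts to bucketing MIDI velocity into however many sample layers the directory contains. A Python sketch of the logic (the patch itself is in SuperCollider; the even-bucket scheme is my own assumption):

```python
def velocity_to_sample_index(velocity, n_samples):
    """Map MIDI velocity (1-127) to a sample layer index, assuming the
    samples in the directory are sorted from softest to loudest."""
    velocity = max(1, min(127, velocity))  # clamp to the valid MIDI range
    return min((velocity - 1) * n_samples // 127, n_samples - 1)

# With 4 layers: velocities 1-32 -> 0, 33-64 -> 1, 65-96 -> 2, 97-127 -> 3.
```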
Pachet, “Beyond the Cybernetic Jam Fantasy: The Continuator”, IEEE Computer Graphics and Applications, Vol. 24, No. 1 (2004), pp. 31–35. ↩