This week I did some reading, organizing, getting oriented, and figuring out how to articulate what it is I want to do.

I will attempt to do the following:

More details about this proposal are below.

Performance System Overview

I would like to build a performance system that gives me tight control over the material being played when I want it, and that can alternatively generate material in real time from hard-coded variations when triggered.

Based on my background and preliminary research, I can identify a spectrum of applicable systems. On one end there is what Pachet calls the “cybernetic jam fantasy” [1], in which a generalized system can listen to any input content and generate variations or improvisations from it in real time. On the other end of the spectrum, one can simply write all of the variations manually, and the system merely chooses between them at random.

I don’t want my system to sit at either of these extremes, but rather somewhere in the middle: I do not want to write an excessive number of manual variations, but neither do I want to build a fully generalized system.

Performance System Approach

Markov Analysis Overview

My most recent algorithmic music endeavor was an exploration of applying first- and second-order Markov chains to melody and rhythm generation. The most impressive result was the melody generated by the second-order Markov chain, but creating its 16 × 8 probability table by hand was tedious.

I would like my system to analyze manually written musical variations in order to construct a probability table (as described above) that the system will then use to generate rhythms and melodies.
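The analysis step can be sketched in Python: count, for each pair of consecutive notes, how often each possible next note follows, then normalize the counts into probabilities. The note sequences below are made-up placeholders standing in for the manually written variations.

```python
from collections import defaultdict

def build_markov_table(sequences, order=2):
    """Count transitions from each length-`order` context to the next note."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(len(seq) - order):
            context = tuple(seq[i:i + order])
            counts[context][seq[i + order]] += 1
    # Normalize raw counts into per-context probability distributions.
    table = {}
    for context, nexts in counts.items():
        total = sum(nexts.values())
        table[context] = {note: n / total for note, n in nexts.items()}
    return table

# Two hand-written variations of a short melody (MIDI note numbers).
variations = [
    [60, 62, 64, 62, 60, 62, 64, 65],
    [60, 62, 64, 65, 64, 62, 60, 62],
]
table = build_markov_table(variations)
```

Every observed continuation of the context (60, 62) is 64, so that entry gets probability 1.0, while (62, 64) splits its probability mass between 62 and 65.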

Integration with Ableton Live

Starting from a top-down perspective…

Ideally, the Markov analysis would occur over clips in Ableton Live, and optionally over more material in a systematically named directory. I do not know if this is feasible in practice, but it seems possible in theory. The “Max for Live” API includes a call to grab the MIDI contents of a clip, which could then be sent to the analysis module. There would also be a special “clip” in Ableton that, when triggered, simply tells the generative playback module to begin playing a generated variation.

Generative Playback Module

To generate material based on the results of the Markov analysis, I will be using SuperCollider. Classes for nth-order Markov models already exist, and since I will be doing most of the synthesis in SuperCollider as well, this seems the most reasonable choice.
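Although the playback module itself will live in SuperCollider, the generation logic is easy to sketch in Python: keep a sliding two-note context and draw the next note from the table's weighted choices. The toy table here is a hypothetical analysis result, not output from the real system.

```python
import random

def generate(table, seed, length):
    """Walk a second-order Markov table, starting from a seed context."""
    out = list(seed)
    context = tuple(seed)
    while len(out) < length:
        if context not in table:
            break  # dead end: this context was never observed
        choices = table[context]
        nxt = random.choices(list(choices), weights=list(choices.values()))[0]
        out.append(nxt)
        # Slide the context window forward by one note.
        context = context[1:] + (nxt,)
    return out

# A toy second-order table: (pair of notes) -> {next note: probability}.
table = {
    (60, 62): {64: 1.0},
    (62, 64): {62: 0.5, 65: 0.5},
    (64, 62): {60: 1.0},
    (64, 65): {64: 1.0},
    (65, 64): {62: 1.0},
    (62, 60): {62: 1.0},
}
melody = generate(table, seed=(60, 62), length=16)
```

Because every context in this toy table has an observed continuation, the walk never dead-ends; with real analyzed data, the `break` handles contexts that only ever appeared at the end of a variation.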

Analysis Module

I am not sure yet where this analysis will take place. I will be looking into the feasibility of the Ableton Live integration (as discussed above) and seeing what the interface would need to be between Max and the analysis module. It seems that I could do the analysis in SuperCollider as well, which would avoid adding another component and potential point of failure. My initial feeling, though, is that it may be easier to write the analysis in Python or JavaScript, perhaps as a script embedded within Max, and send the results over to SuperCollider when available.
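As a sketch of the Python route, the analysis script could hand its results to SuperCollider over OSC, which sclang listens for on UDP port 57120 by default. The address `/markov/table` and the JSON-string payload are my own assumptions, not an agreed interface; the minimal OSC encoding below avoids any third-party library.

```python
import json
import socket

def osc_string(s):
    """Encode a string as OSC requires: null-terminated, padded to 4 bytes."""
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, payload):
    """Build a minimal OSC message carrying a single string argument."""
    return osc_string(address) + osc_string(",s") + osc_string(payload)

# Hypothetical transition table produced by the analysis step.
table = {"60,62": {"64": 1.0}, "62,64": {"62": 0.5, "65": 0.5}}
msg = osc_message("/markov/table", json.dumps(table))

# sclang listens for OSC on UDP port 57120 by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57120))
```

On the SuperCollider side, an OSC responder registered for `/markov/table` would parse the JSON string back into a dictionary before feeding the Markov classes.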

No matter the technology, I imagine the algorithm will gather the following heuristics:

Thoughts on Phrase Length

This does not take phrase length into account, so note density would be evenly distributed across a measure, which may be less than ideal. Should the system try to determine phrase length, and if so, how? Then again, phrase length may not be required, since the entire process operates at the granularity the composer chooses: that of a single variation “phrase.”

Ability to Capture

An additional, but important, feature of this system will be the ability to save a variation once the playback module has generated it. This will involve keeping a history of generated material and providing some sort of button that lets the performer “flag” items in that history for later review.
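A minimal sketch of that history, assuming the playback module records each variation as it is generated and a MIDI-mapped button triggers `flag_last` (both names are placeholders of mine):

```python
from collections import deque

class VariationHistory:
    """Keep the last N generated variations and let the performer flag keepers."""

    def __init__(self, maxlen=32):
        self.recent = deque(maxlen=maxlen)  # oldest entries fall off automatically
        self.flagged = []

    def record(self, variation):
        """Called by the playback module each time a variation is generated."""
        self.recent.append(variation)

    def flag_last(self):
        """Called from a 'capture' button: save the most recent variation."""
        if self.recent:
            self.flagged.append(self.recent[-1])

history = VariationHistory()
history.record([60, 62, 64, 65])
history.record([60, 64, 62, 60])
history.flag_last()  # keeps the second variation for later review
```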


Music and Sounds

This week I learned a bit more about chord progression structures. My lack of formal training leaves me very inexperienced in this area, but I found this page that simply lists the diatonic chords for a major / minor key. This is exactly the information I needed to work on building a chord progression behind a melody I found in my bin of recorded riffs.
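That table of diatonic chords can also be derived programmatically: build the major scale from its whole/half-step pattern and stack thirds on each scale degree. A quick Python sketch:

```python
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half-step pattern of the major scale

def major_scale(root):
    """MIDI pitches of one octave of a major scale, starting at `root`."""
    notes = [root]
    for step in MAJOR_STEPS[:-1]:
        notes.append(notes[-1] + step)
    return notes

def diatonic_triads(root):
    """Stack scale thirds on each degree to get the seven diatonic triads."""
    scale = major_scale(root)
    extended = scale + [n + 12 for n in scale]  # extend into the next octave
    return [[extended[d], extended[d + 2], extended[d + 4]] for d in range(7)]

triads = diatonic_triads(60)  # C major: C, Dm, Em, F, G, Am, Bdim
```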

I started working in SuperCollider to create a drum sampler that automatically maps velocity across however many samples are in a directory. I also started writing a “rhodes”-type patch in an attempt to emulate the tone found on this Telefon Tel Aviv track. It is close enough that I feel comfortable continuing to work with it and refining the timbre later.
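The velocity mapping itself is simple arithmetic; this Python sketch shows the idea behind the SuperCollider sampler, assuming the 0–127 velocity range is split evenly across the samples found in the directory:

```python
def sample_index(velocity, num_samples):
    """Map a MIDI velocity (0-127) onto one of `num_samples` sample files."""
    # Integer division splits 0-127 into num_samples equal bands;
    # min() guards the top of the range.
    return min(velocity * num_samples // 128, num_samples - 1)

# With 4 samples in the directory, each band covers 32 velocity values.
indices = [sample_index(v, 4) for v in (0, 40, 64, 127)]
```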

  1. Pachet, F. “Beyond the Cybernetic Jam Fantasy: The Continuator”, IEEE Computer Graphics and Applications, Vol. 24, No. 1 (2004), pp. 31–35.