Music 220C Project Page: Algorithmic Counterpoint Using Markov Models
My project for this quarter is a real-time system for algorithmic counterpoint. The system takes an audio input (in this case, cello) and generates upper voices in good counterpoint with it. For the final project I am hoping to generate two upper voices above the cello line in real time: one in first species and another in second species. In order to do this, it may be necessary to make some assumptions about the input line (e.g., that it is tonal, and of a known length), since some of the rules govern how a line must end to remain in good counterpoint. While the real-time generation of the two lines is the core of the system, I would ideally like to turn it into a performance system that is more interactive and allows the cellist to play something other than the bass line. One possible implementation would be to assume that the bass line repeats, and then allow the performer to add their own line or embellish the bass line as desired without interfering with the counterpoint engine.
06/13/2012 - Final work for this quarter is finished. A website with more formal documentation will be up soon, but I have uploaded the code and other files in the meantime.
Score containing some sample outputs
06/01/2012 - Week 8
Less work got done this week as a result of preparing for and helping with the spring concert. I am still working on getting the pitch and note-change detection stabilized and on implementing a pitch shift to generate the output sound. I will also probably make some final tweaks to the Markov tables.
05/23/2012 - Week 7
Counterpoint Algorithm is finished and implemented. There are minor improvements that could still be made, but it works well. For the remainder of the quarter I plan to focus my efforts on the real time audio processing aspects of the system.
This week I spent a fair amount of time testing the system with a variety of cantus firmi, running each many times. In total there were 64 runs of the system, and only once did it reach a situation with no acceptable continuation. That particular scenario (leaping to the note two notes before the end, eliminating any way to reach an appropriate penultimate note while also resolving the leap) is something that could be permitted only in these dire circumstances. Although continuing the melodic motion in the direction of the leap is not allowed in this musical style, the current default is for the system to remain on the same note when faced with no good options, potentially resulting in a clashing sonority. The preferable action would probably be to move to a consonant pitch in anticipation of the cadence.
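That preferred fallback could be sketched roughly as below. This is my own hypothetical C++ sketch, not the system's actual code (the function names are mine): when no transition has a usable probability, pick the pitch nearest the current one that forms a consonance with the bass, rather than repeating the current note.

```cpp
#include <initializer_list>

// Two-voice consonances against the bass: unison/octave (0), thirds (3, 4),
// perfect fifth (7), and sixths (8, 9); the perfect fourth (5) is dissonant.
bool isConsonant(int interval) {          // interval in semitones, >= 0
    switch (interval % 12) {
        case 0: case 3: case 4: case 7: case 8: case 9: return true;
        default: return false;
    }
}

// Search outward from the current upper-voice pitch for the nearest pitch
// above the bass that is consonant with it. MIDI note numbers assumed.
int nearestConsonance(int bass, int current) {
    for (int step = 0; step <= 12; ++step) {
        for (int cand : {current - step, current + step}) {
            if (cand > bass && isConsonant(cand - bass)) return cand;
        }
    }
    return current;  // fallback if nothing found (e.g. current far below bass)
}
```

A real version would also bias the choice toward the pitch that sets up the coming cadence, as described above.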
I have also started re-incorporating the pitch tracking functions that were originally in DuetYourself. They are not fully debugged yet for use with a cello, but it should be doable. Once the pitch tracking works, I plan to use a pitch shifted version of the input to generate the output sound.
05/14/2012 - Week 6.5
I finally have most of the basic system up and running now. The same bugs in the synthesis still exist, but I have imported the Markov tables and can generate reasonable counterpoint with a few exceptions.
Counterpoint issues to be implemented/adjusted/fixed:
- Add resolution of tendency tones. This is not included in the current model and definitely needs to be.
- Add the cadencing structure. This table has been imported but is not being used at the moment while other issues are being debugged.
- Start-up: there is something odd about how I am generating the state for the first pitch. Even the first note has an assigned "previous pitch," which is used to calculate pitch #2. It would be nice to not require this, and to not use the preceding-melodic-interval table for the first interval (this shouldn't be too hard).
- The solutions at the moment are very deterministic for some reason, although I can't figure out why. There are also some cases where I get an error for no possible solutions although there should be some.
Synthesis/audio-related issues:
- Now that the Markov system is running, I hope to return my attention to issues of audio output and, eventually, input.
- I am hoping to have the cello input working in time for the final presentation, but am still not sure whether this will happen. The issues of pitch tracking, etc., have so far been on the back burner behind the calculation issues. Now that things are starting to work more smoothly, this is something I want to try again.
Goals for next week: I would like to have some form of output synthesis working so that I can hear the results. I also hope to incorporate the cadence table and address some of the other issues.
05/04/2012 - Week 5
This week was mostly spent configuring the C++ code to take input from the command line as MIDI pitches and calculate the counterpoint line from them. I also spent time working on getting a good output sound for the system, but encountered some bugs along the way. Additionally, I finished populating the Markov tables that I am using. For next week, I hope to have these bugs fixed and more of the code written to import the Markov tables and use them to choose the next pitch.
04/25/2012 - Week 4
First, this week I created a bibliography of the literature that I have been using as a basis for this research. The bibliography will be updated as the project continues; its current version can be found here (insert link). Also, due to ongoing issues with the real-time interface and pitch tracking, I have created a ChucK program that lets me enter numbers on the laptop keyboard corresponding to MIDI pitches, and that outputs both the input and output MIDI numbers. This structure is the same as the function that will eventually be used with real-time audio input, but it allows for easier debugging.
This week, my work was mainly focused on developing a general layout for the Markov tables that will be used. The proposed system will now use a set of 4 different tables. The first one (table 1) implements the rules regarding acceptable melodic intervals and the conventions for their sequencing. As such, it takes the most recent melodic interval in the generated line and provides a probability for each of the possible following intervals.
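As a sketch of how table 1 might be used at runtime (the names and layout here are my own illustration, not the system's actual data structures), each row of the table holds weights for the candidate next intervals, and the next interval is drawn from that distribution:

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// One row of table 1: for a given previous melodic interval, the candidate
// next intervals (in semitones) with their probabilities. Names hypothetical.
using Row = std::vector<std::pair<int, double>>;

int sampleNextInterval(const Row& row, std::mt19937& rng) {
    std::vector<double> weights;
    weights.reserve(row.size());
    for (const auto& entry : row) weights.push_back(entry.second);
    // discrete_distribution normalizes the weights, so a row need not sum to 1
    std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
    return row[pick(rng)].first;
}
```

Keeping intervals (rather than absolute pitches) as the states keeps the table small and key-independent; the actual next pitch is the current pitch plus the sampled interval.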
A second table (table 2) implements limits on the melodic range at any given point, disallowing intervals that would result in a note outside the melody's range. This range is generally limited to a major 9th; however, it narrows toward the end of the phrase, in order to steer the melody into the region it needs to occupy for the cadence.
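A minimal version of that range check might look like the following (a sketch under my own assumptions: pitches as MIDI note numbers, the major 9th taken as 14 semitones):

```cpp
#include <algorithm>

// Reject a candidate pitch if it would stretch the melody's overall span
// beyond maxSpan semitones (14 = major 9th). MIDI note numbers assumed.
bool withinRange(int candidate, int lowestSoFar, int highestSoFar,
                 int maxSpan = 14) {
    int lo = std::min(lowestSoFar, candidate);
    int hi = std::max(highestSoFar, candidate);
    return (hi - lo) <= maxSpan;
}
```

The narrowing near the cadence could then be expressed simply by shrinking maxSpan (or tightening lo/hi around the target region) as the end of the phrase approaches.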
Finally, the remaining rules will be implemented in a three-dimensional table that relates both the current harmonic interval and the upcoming melodic interval in the bass to the possible upcoming intervals in the melody. This structure allows for control of voice-leading rules such as the prohibition of parallel fifths and octaves, as well as preferences such as contrary motion and the avoidance of certain other cases. The main challenge with this implementation is that the table will be quite large (approximately 14 x 17 x 7 = 1,666 elements). I propose to generate this table somewhat by hand, using the penalty values in Schottstaedt and converting them to probabilities. For the prohibitions and major errors (penalties of infinity and 200, respectively) a probability of 0 will be used. For errors with a penalty of 100, a probability of 0.001 will be used; this should prevent these cases from being chosen unless they are the only option. For the remaining errors, the penalty is converted to a probability by Prob = 1 - Penalty/100. I don't know yet whether this will work well, but it seems like a reasonable starting point for generating this table.
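The penalty-to-probability mapping just described can be written down directly (a sketch; the function name is mine, and I have interpreted "penalty 100" as covering everything from 100 up to the major-error threshold):

```cpp
#include <cmath>

// Convert a Schottstaedt-style penalty into a transition probability,
// following the scheme above: prohibitions and major errors (infinite / 200)
// get 0, penalty-100 errors get 0.001 (usable only as a last resort), and
// milder penalties scale linearly down from 1.
double penaltyToProbability(double penalty) {
    if (std::isinf(penalty) || penalty >= 200.0) return 0.0;
    if (penalty >= 100.0) return 0.001;
    return 1.0 - penalty / 100.0;
}
```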
04/18/2012 - Week 2 & 3
Work this week was divided between two areas. First, I spent some time reworking the code from DuetYourself to keep only the functionality that I would like to expand upon for this project (the additional generated tonal line), removing the rest from this version. I also tested the pitch tracking with a cello input (it didn't work) and started working on ways to fix this. I am hoping that by using a longer input buffer (needed for low pitches) and moving the calculation (currently an autocorrelation) into the frequency domain, I can improve the accuracy. If that doesn't work, some additional filtering may be needed to distinguish the fundamental from the harmonics, but hopefully the longer buffer alone will still allow the computation to run in real time. To facilitate testing of the system, I am also adding the option of using a wav file as input, per Chris' suggestion. This is not fully implemented yet, but is in the works. I also recorded some sample melody lines on the cello that can be used for testing purposes.
The second portion of work this week was devoted to trying to bring myself up to date on the existing literature regarding algorithmic counterpoint. Somewhat surprisingly, I did not find much at all on methods for doing this in real time. Almost all of the literature approached the problem as harmonizing with a given bass line, where the entire bass line is known in advance. Most of this research involved either a genetic algorithm or a forward working method of generating melody options, combined with some sort of scoring system to rank the possible outputs. In contrast to this area of literature, there has also been some work done on using Markov models to complete the task, which seems more applicable to the real time application that I envision.
Finally, the overall project design was fleshed out with more specifics regarding what I would like to be able to implement this quarter.
For next week, I hope to finish the baseline code with working pitch tracking, wav file input, and include the option to have more than one generated counterpoint voice. I would also like to use the time to decide on how I want to implement the counterpoint problem in my system.
04/07/2012 - Week 1
This week I decided on the general project area (algorithmic counterpoint) and laid out a general vision for what I would like the final project to look like. I think it should be a real-time performance tool that takes audio input (from my cello, for current purposes) to be used as the bass line for the counterpoint. The system will then attempt to generate upper lines (hopefully 2 of them) in good counterpoint with the input.
In considering the scope of the project, it is likely that I will need to place some constraints on the input bass line (such as restricting it to cadence on the downbeat of every 8 bars and to remain within a single key), though I don't yet know exactly what these will be.
The system will be based partially on work I did in the fall quarter for Music 256a, developing a system that could generate a consonant counterpoint line for a given input, although it was not strict counterpoint and did not generally produce a particularly musical output. For more information, see DuetYourself.
For next week, I plan to work on revising the relevant parts of DuetYourself to create a baseline set of code for this project. In particular, I want to add the ability to read in audio from a file instead of the live microphone input, for use in development. Related to this, I also want to record a few selections of cello playing that can be used as test inputs to the system. Finally, I want to look more into the existing work in this area (of which there seems to be a lot) to figure out how to go about the counterpoint generation.