Welcome to my page for 220c! I wrote a piece of music using free open-source software.
You can find a recording of the piece on my music page, and more files and notes will be placed on this page as I get time.
Program notes:
This piece was composed specifically for the 2011 Music 220c course at Stanford University's Center for Computer Research in Music and Acoustics, from March to May 2011.
It is a sequenced piece which uses various freely-available software synthesizers and a MIDI-controlled acoustic piano. Each synthesizer is assigned to one or two speakers, resulting in the same kind of spatial separation you would get from individual instruments playing through their own amplified speakers.
The piece is loosely based on falling asleep and dreaming. The synthesizers represent the unconscious or imagined, and the piano represents the physical world or sense of self within the dream. The piece has fast and slow parts and jumps between various themes. Overall it is high-energy and happy; a good dream.
Michael J. Wilson is a Masters student studying Music, Science and Technology at CCRMA. He has been dabbling in MIDI sequence-based composition since the mid-1990s. A stereo mix as well as links to files and software used will be made available a few days after the concert at https://ccrma.stanford.edu/~mwilson/music/
I've finished the first "presentable" version of the piece. This is an important milestone. What remains is tweaking things and making sure I have the best possible setup for the concert.
The piece is coming along nicely. There are still significant parts of the piece that need to be filled out, and a few more things I would like to do that probably aren't practical given current time constraints, but I think it will be in a presentable shape in time for the concert.
Finished sketching out the various sections of the piece. Still pretty rough but overall parts are done (or done enough). Just need to flesh things out and polish now.
Playing in control parameters on top of recorded tracks seems to work just fine, so I can get some fine control. Also noted that the Disklavier needs careful attention to velocity values in order to sound smooth and natural.
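As a note to myself, the kind of velocity conditioning I have in mind could be sketched like this. The floor/ceiling/curve numbers here are made-up starting points, not measured values for the Disklavier:

```python
def smooth_velocity(v, floor=20, ceiling=110, gamma=0.8):
    """Remap a raw MIDI velocity (1-127) into a narrower, gently
    compressed range, so very soft notes still trigger the hammers
    and loud notes don't bang. The parameters are guesses to tune by ear."""
    x = max(1, min(127, v)) / 127.0       # normalize to (0, 1]
    return round(floor + (ceiling - floor) * (x ** gamma))
```

The idea is just to keep every note inside the range where the mechanism responds evenly; the real curve would have to be found by listening.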
I found that when using the Disklavier, I have to delay-compensate the software synthesizers by about 50 ms to get things to line up properly on the stage.
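For reference, converting that latency into sequencer ticks is simple arithmetic (960 PPQ below is just an assumed resolution, not necessarily what Rosegarden uses internally):

```python
def ms_to_ticks(ms, bpm, ppq=960):
    """Convert a latency in milliseconds to MIDI ticks at a given tempo.
    One beat lasts 60000/bpm milliseconds and spans ppq ticks."""
    return round(ms * ppq * bpm / 60000.0)
```

So at my working tempo of 160 BPM with 960 PPQ, a 50 ms offset comes out to 128 ticks.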
In Rosegarden delay compensation appears to be done per segment, not per track, so you have to redo it for every segment you add.
Composition is proceeding but I'm still not feeling very confident about the piece.
Finally getting somewhere. I set up my patchbay to work in studios D and E:
I will need something slightly different for the stage; I plan to set that up later today or sometime next week. I also found out how to send program changes to Hexter as a DSSI plugin in Rosegarden:
All three highlighted lines are necessary (basically, sending bank select as well as program change). This will let me vary the timbres I use during different parts of the piece. These references were helpful: http://www.indiana.edu/~emusic/cntrlnumb.html and http://comments.gmane.org/gmane.comp.audio.rosegarden.user/10292
I found that you should send the program change and bank select messages a bit before playing any notes; otherwise they won't have time to propagate through. I send them one beat early and that seems to work pretty well (at 160 BPM).
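The ordering involved can be sketched at the raw-MIDI-byte level like this. This is a hypothetical helper for illustration, not what Rosegarden actually emits; the constants, though, are standard MIDI (CC#0 is bank select MSB, CC#32 is the LSB, and both must precede the program change):

```python
def patch_change_events(channel, bank, program, beat_ticks):
    """Return (tick_offset, midi_bytes) events that select a bank and
    program one beat before the notes start. beat_ticks is the length
    of one beat in the sequencer's tick resolution."""
    status_cc = 0xB0 | (channel & 0x0F)   # control change on this channel
    status_pc = 0xC0 | (channel & 0x0F)   # program change on this channel
    t = -beat_ticks                        # schedule everything one beat early
    return [
        (t, bytes([status_cc, 0, (bank >> 7) & 0x7F])),  # bank select MSB
        (t, bytes([status_cc, 32, bank & 0x7F])),         # bank select LSB
        (t, bytes([status_pc, program & 0x7F])),          # program change
    ]
```

At 160 BPM one beat is 375 ms, which is plenty of headroom for the synth to switch patches.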
Not much progress so far. But I think I have a strong idea to guide the piece: dreaming. The piano represents the person, or the physical world. The other instruments represent dreams. The piece starts on the brink of sleep, then accelerates into dreams, then ends with waking up in the morning. This probably won't directly come across, but I feel like the image is strong enough for me to use as I'm composing.
I've been working out more technical details. My laptop can only run about one synth plugin at a time, but the CCRMA workstations seem to be able to handle eight. The Disklavier is not only a great way to produce sound, it's also an excellent MIDI controller that captures a lot of nuance. Balancing the other sounds against a grand piano is going to be a challenge; in my initial testing they just didn't stack up very well (some of this may have been volume). Another challenge is getting the control data the way I want it: I want to maintain some organic feel but make things very precise, and I don't want to play everything in live. I'm using Rosegarden to do everything right now, and I don't see a better way to edit control data than the MIDI event list, which is a bit cumbersome. But at least there is a way.
I also need to work more on what I'm actually trying to say with the piece. I need some sort of overarching story or goal to guide me compositionally. The end is something like "coming home" but the rest isn't quite clear to me yet. It will help me to think this through.
Inspired by the recent Jean-Claude Risset and Seth Horvitz concerts here at CCRMA, I am planning a piece for Disklavier and eight speakers, with each speaker representing a different instrument. Possible instruments: FM drums (refined from my 220b homework), another type of drums, DX7 emulation, analog synth emulation, simple oscillators with reverb, synthesized singing voice, synthesized electric guitar, and bass analog synth emulation. You can get some pretty crazy timbres out of the Disklavier with careful programming. I am envisioning something that doesn't explicitly feature one instrument all the time, however, and it will have more synthetic and more organic sections.
I'm thinking that maybe score following will not be appropriate for the piece. This is for two reasons: first, I am not a very precise pianist. Second, the piece as I'm currently conceiving it doesn't really have anything that makes score following necessary (besides as a sort of auto-accompaniment, I suppose). It would be a bit of an unnecessary gimmick in this context, and I feel that I would grow to resent it.
Maybe I can do a piece for Disklavier and synthesis instead? It's basically just writing a MIDI file at that point. I don't know if that is researchy enough for the course, but perhaps it will suffice. Plus I think it would be neat having an acoustic sound source as well as possibly multiple speakers handling different voices. Making the web-worthy mix could involve recording the Disklavier in the recording studio. I could use a lot of the skills I've gained the past two terms.
I'm also going to try to develop some technology in 424 and 420 that can be applied to the piece. So the piece itself will be for 220, but most of the technology will not be applied towards this course. I think they will help each other though; using technology gives ideas for developing it, and developing can give hints into best usage. There's always the risk of developing a system that only I, the creator, can understand or appreciate, but I don't necessarily view that as a bad thing :-)
I've been trying to work out the overall structure of the piece. There are a few themes I want to incorporate, and specific timbral changes that I think will make them poignant. But it's always hard to know how things will turn out when they're still in my head. I will try to record a demo soon using a soundfont or something similar just to get an initial feel for how the piece will go, and then perhaps it will be time to start constructing timbres.
Rough ideas: I want to make a piece using techniques and technology I've learned about / developed over my time here at CCRMA
Possible leads for score following / etc: Roger Dannenberg (CMU) http://www.cs.cmu.edu/~rbd