ChipStation


A Fall 2011 Music256a/CS476a project proposal page by Timothy Wong (aka DDRKirby(ISQ)) ddrkirby@stanford.edu

Idea / Premise

To extend the concept presented by the AXE homebrew program for the Nintendo DS.

Watch a video of AXE here: [1]

AXE is downloadable at [2].


Motivation

AXE is a really cool concept, but it's pretty darn basic and not very exciting after a few minutes of noodling around.

The basic idea of a music-making machine that "sounds cool" even if you're just messing around is very appealing. I feel like this is great because non-musicians can still make things that sound cool (though musicians can make things that sound even cooler =P).


The Thing (What is it)

Unlike AXE, this will be a Windows/Linux/OSX cross-platform product so that it's easier for your everyday layman to use.

As for the interface, imagine AXE, only multiplied by 4 or so. Right now I'm envisioning multiple different "pads" presented on the screen in a grid layout. You can mouseclick any of these "pads" to get the same effect as in AXE.
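
Just to make the grid idea concrete, here's a minimal sketch of how a mouse click could be mapped to a pad, assuming SDL for event handling; the grid dimensions, screen size, and the pad lookup are placeholders, not final design:

 // Sketch: map an SDL mouse click onto a pad in an evenly spaced grid layout.
 #include <SDL.h>
 
 const int GRID_COLS = 4;
 const int GRID_ROWS = 4;
 const int SCREEN_W  = 640;
 const int SCREEN_H  = 480;
 
 // Returns the row-major index of the pad under (mouseX, mouseY).
 int padIndexForClick(int mouseX, int mouseY)
 {
     int col = mouseX / (SCREEN_W / GRID_COLS);
     int row = mouseY / (SCREEN_H / GRID_ROWS);
     return row * GRID_COLS + col;
 }
 
 void handleEvents()
 {
     SDL_Event event;
     while (SDL_PollEvent(&event))
     {
         if (event.type == SDL_MOUSEBUTTONDOWN)
         {
             int padIndex = padIndexForClick(event.button.x, event.button.y);
             // pads[padIndex]->trigger();  // hypothetical pad interface (see Design)
             (void)padIndex;               // placeholder until pads exist
         }
     }
 }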

A major difference here, though, is that you'll be able to record what you play and have it continually play back to you. This is how you build up multiple layers, kind of like overdubbing: you can record an 8-bar drum beat, then put a bassline on top of that, then a lead line, and so on. Getting the right UI for recording is going to be tricky (it might just be automatic).
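
To make the recording/overdubbing idea a bit more concrete, here's a minimal sketch of one way it could work, with pad hits stored as timestamped events against a fixed-length loop. The NoteEvent/Looper names are placeholders, and a real version would need thread-safe handoff between the UI and audio threads:

 // Sketch: record pad hits with a timestamp inside a fixed-length loop,
 // then replay them every time the loop wraps around.
 #include <vector>
 
 struct NoteEvent
 {
     unsigned long frame;  // position within the loop, in audio frames
     int padIndex;         // which pad was hit
 };
 
 class Looper
 {
 public:
     Looper(unsigned long loopFrames) : loopFrames_(loopFrames), playhead_(0) {}
 
     // Called when a pad is hit while recording.
     void record(int padIndex)
     {
         events_.push_back(NoteEvent{ playhead_, padIndex });
     }
 
     // Called once per audio frame; fires any recorded events that land on this frame.
     template <typename TriggerFn>
     void advance(TriggerFn trigger)
     {
         for (const NoteEvent& e : events_)
             if (e.frame == playhead_)
                 trigger(e.padIndex);
         playhead_ = (playhead_ + 1) % loopFrames_;
     }
 
 private:
     unsigned long loopFrames_;
     unsigned long playhead_;
     std::vector<NoteEvent> events_;
 };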

I'm going to be pretty minimalistic with my UI design since I want to keep things simple--at least for now. Making things pretty can come later.


Design

No networking will be involved--that's just needlessly complicating things.

In terms of architecture, I'm probably going to be using SDL, as it's cross-platform, well-established, and I've used it before. The GUI will probably be mostly static 2D.

There are a few major design considerations here:

1) How to synthesize the sound? On the fly? Or sampled?

I think the answer to this one is "both." For drums we should just use samples, but for the leads and basses we might want to synthesize them ourselves so that we have finer-grained control over things like timbre and volume. I'll probably end up using STK for this, which means it'll probably be easiest if all of the audio in my app goes through RtAudio/STK, so that I don't have to intermingle SDL audio code with it.
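
As a rough sketch of that RtAudio/STK route (using STK's Plucked instrument purely as a stand-in voice; the real lead/bass synths and the drum sample mixing would come later), the setup could look something like this:

 // Sketch: a single STK instrument rendered through an RtAudio callback.
 // Plucked is just a placeholder voice; real lead/bass synths TBD.
 #include "Plucked.h"
 #include "RtAudio.h"
 
 int audioCallback(void* outputBuffer, void* /*inputBuffer*/,
                   unsigned int nBufferFrames, double /*streamTime*/,
                   RtAudioStreamStatus /*status*/, void* userData)
 {
     stk::Plucked* lead = static_cast<stk::Plucked*>(userData);
     stk::StkFloat* out = static_cast<stk::StkFloat*>(outputBuffer);
     for (unsigned int i = 0; i < nBufferFrames; i++)
         out[i] = lead->tick();   // mono output; drum samples would get mixed in here too
     return 0;
 }
 
 int main()
 {
     stk::Stk::setSampleRate(44100.0);
     stk::Plucked lead(50.0);     // lowest playable frequency
     lead.noteOn(440.0, 0.8);     // A4, just to hear something
 
     RtAudio dac;
     RtAudio::StreamParameters params;
     params.deviceId = dac.getDefaultOutputDevice();
     params.nChannels = 1;
     unsigned int bufferFrames = 256;
 
     dac.openStream(&params, NULL, RTAUDIO_FLOAT64, 44100,
                    &bufferFrames, &audioCallback, &lead);
     dac.startStream();
     // ... run the UI loop, then stop/close the stream on exit ...
     return 0;
 }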

2) How to restrict the sound so that it sounds good even if you're messing around?

AXE does this by using a pentatonic scale. I'm thinking of doing similar things: pentatonic is one option, but a blues scale might be another. Obviously, quantization will also be key here, and I'm probably going to do it similarly to AXE, with a slight "buffer window" to allow some room for late hits. Ideally, this will all be customizable via some sort of options dialog.
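
Here's a rough sketch of what I mean by the scale restriction and the late-hit buffer window; the scale table, root note, grid size, and window length are all placeholder numbers:

 // Sketch: snap pad rows to a scale, and snap hit times to a step grid with
 // a small "buffer window" that forgives slightly late hits.
 #include <cmath>
 
 // Minor pentatonic intervals in semitones; a blues scale would add the b5.
 const int PENTATONIC[] = { 0, 3, 5, 7, 10 };
 const int SCALE_LEN    = 5;
 
 // Map a pad row to a frequency in Hz (row 0 = root, higher rows go up the scale).
 double rowToFrequency(int row, double rootHz = 220.0)
 {
     int octave   = row / SCALE_LEN;
     int degree   = row % SCALE_LEN;
     int semitone = octave * 12 + PENTATONIC[degree];
     return rootHz * std::pow(2.0, semitone / 12.0);
 }
 
 // Quantize a hit time (in frames) to the grid. Hits landing within
 // windowFrames *after* a step still count as that step (late hits);
 // anything later gets pushed forward to the next step.
 unsigned long quantizeHit(unsigned long hitFrame,
                           unsigned long stepFrames,
                           unsigned long windowFrames)
 {
     unsigned long offset = hitFrame % stepFrames;
     if (offset <= windowFrames)
         return hitFrame - offset;              // forgive a slightly late hit
     return hitFrame + (stepFrames - offset);   // otherwise snap forward
 }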

I think it's important to abstract out the "pad" interface, since a drum pad might behave (or even look) slightly different from a lead pad, for example.
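
A hypothetical sketch of what that pad abstraction might look like (the actual interface will depend on how drawing and audio end up wired together):

 // Sketch: a base Pad interface that drum pads and lead pads both implement.
 class Pad
 {
 public:
     virtual ~Pad() {}
 
     // React to a click/hit; a DrumPad might fire a one-shot sample,
     // while a LeadPad might start a note that sustains until release.
     virtual void trigger() = 0;
     virtual void release() {}
 
     // Each pad type can draw itself differently within its grid cell.
     virtual void draw(int x, int y, int w, int h) = 0;
 
     // Render this pad's contribution for one audio frame.
     virtual double tick() = 0;
 };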


Testing

The primary tester will, of course, be me, since I think I'll be the primary user as well ;)

However, it'd be really good if I could show this to my friends and have them figure it out too. Hard to say, though; making user interfaces that aren't confusing to everyday users is a lot harder than one would think...


Team

Just me. ;)


Milestones

  • 11/14 - Have "something" working. This probably means having a basic pad abstraction along with a very basic example, that plays SOME sound.
  • 11/21 - (Thanksgiving break)
  • 12/5 - All the framework should be done by now, which means the only part left is to actually make the different pads.
  • 12/12 - Presentation week: everything should be done by now!