Network Musical Performance

Abstract
Network Musical Performance is a way for several clients to join a session over the internet in a sort of MIDI chatroom. Each client sends its local MIDI data to every other member of the session. The term and an sfront implementation are the creation of John Lazzaro and John Wawrzynek. In their program, SIP handles connection setup, RTP MIDI carries the MIDI messages, and SAOL defines the instruments; all of this is built into sfront, along with the sound generation itself. My goal is to separate the sound generation from the MIDI and network routing so that a user can treat the other clients in a session as additional MIDI controllers. My plan is to create an NMP client which handles network and MIDI communication, leaving sound generation to whatever other software or hardware the user prefers. For the purposes of this project, my implementation will borrow heavily from sfront and will only operate on GNU/Linux using ALSA.
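To make the transport layer a little more concrete: RTP MIDI payloads begin with a small command-section header. Here is a sketch of my own [not taken from sfront] of parsing that header as described in the RTP MIDI drafts; the field names and bit positions follow my reading of the spec and should be double-checked against it.

```python
def parse_rtpmidi_header(payload: bytes):
    """Parse the MIDI command-section header at the start of an
    RTP MIDI payload. Returns (flags, offset of the MIDI list)."""
    first = payload[0]
    flags = {
        "B": bool(first & 0x80),  # B: header is two octets, LEN is 12 bits
        "J": bool(first & 0x40),  # J: a recovery journal follows the MIDI list
        "Z": bool(first & 0x20),  # Z: the first command carries a delta time
        "P": bool(first & 0x10),  # P: "phantom" status octet for first command
    }
    if flags["B"]:
        # Long form: low 4 bits of the first octet are the high bits of LEN.
        flags["len"] = ((first & 0x0F) << 8) | payload[1]
        offset = 2
    else:
        flags["len"] = first & 0x0F
        offset = 1
    return flags, offset
```

For example, a payload beginning 0x03 announces a 3-byte MIDI list [say, a single note-on] starting at the next octet.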

05.27.05
Changed title; added abstract; preparing for Open House. I realize now that Jack does not handle any MIDI routing; instead, qjackctl is merely one of several ways to control ALSA, which does the real work.

05.17.05
On May 10, i did a quick demo showing a two-client NMP session with John Lazzaro at Berkeley. Here is some brief documentation describing how to set up such a session on a GNU/Linux computer. My current goal for the project is to create a simple pipe through which MIDI messages can be passed, with Jack on one side and the network connection on the other. Such a MIDI pipe would allow any program that can send MIDI to Jack to send MIDI to any remote instance of Jack.
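The pipe idea itself is simple enough to sketch. The toy version below [my own illustration, not code from sfront or the planned client] just moves raw MIDI bytes in UDP datagrams between two peers; the real client would sit between the local sequencer API and a proper RTP MIDI socket, which adds timestamps and journaling for lost packets.

```python
import socket

def make_midi_pipe(local_port, peer_addr):
    """One end of a naive MIDI pipe: raw MIDI bytes in UDP datagrams.
    peer_addr is the (host, port) of the other end of the pipe."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))

    def send(midi_bytes):
        # Push local MIDI events toward the remote peer.
        sock.sendto(midi_bytes, peer_addr)

    def recv():
        # Block until the remote peer's MIDI events arrive.
        data, _ = sock.recvfrom(1024)
        return data

    return send, recv
```

Two instances on opposite hosts [or opposite ports of one host, for testing] give a bidirectional MIDI link; anything fed into one end comes out the other, ready to hand to a local synthesizer.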

05.03.05
One very interesting feature of sfront is so-called Network Musical Performance. It combines MIDI input on one or several machines with a networking model that allows several clients to talk to each other. Instrument definitions for such a performance should be distributed before starting the session [though it is certainly possible to have each client running a different set of instruments]. The networked clients send MIDI data to all other clients connected to the session, and each client synthesizes sound from locally and remotely generated MIDI events on the fly. Regarding the project, i still don't know exactly what i can do to supplement what has already been done, so i do not have an abstract yet.
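The networking model is a full mesh: each client fans its local events out to every other session member, with no central server in the audio path. A minimal sketch of that fan-out [my own, with a hypothetical `peers` list standing in for the membership sfront learns through SIP]:

```python
import socket

def broadcast_midi(sock, event, peers):
    """Send one local MIDI event to every other member of the session.
    `peers` is a list of (host, port) pairs for the other clients,
    learned during session setup."""
    for addr in peers:
        sock.sendto(event, addr)
```

Each client then merges these remote events with its local ones and synthesizes everything itself, so only tiny MIDI packets ever cross the network.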

04.14.05
I'm still thinking a lot about what exactly to do here. I'm interested in cutting down on unnecessary audio data, especially if the sounds are all synthesized to begin with. Why not re-synthesize them on the decoder side? Unless there are some complex non-real-time sounds being made, it makes a lot of sense to me to be a little smarter about our encoding scheme. Unfortunately, i don't know of an MP4-SA decoder out there, so any new encoding methods developed will not be overly beneficial to general music distribution. Perhaps it would be best for my project to focus on the decoder. Still thinking...
Motivation for this project comes from downloading and listening to recent compositions for the GameBoy portable video game unit. The sounds produced are either generated with very low complexity, or arbitrary 4-bit samples. It occurs to me that a more elegant representation could be used for these songs than traditional perceptual audio coding methods. After all, nothing produced has more than 4 bits of precision, but this is sampled with 16 bits before being coded! Using a well-designed Structured Audio scheme, these songs would not only take up significantly less space [~200KB instead of ~5MB] but would also be more or less perfectly encoded. [Many GameBoy composers write their music on actual GameBoys, and may choose to record with a microphone or through the headphone jack. I assume that the headphone jack is used since the speaker is monophonic. Either way, i expect that there are subtleties that a 'dumb' system which tries to emulate only the generation method would not capture.]
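To make the precision point concrete, here is an idealized sketch of my own [it ignores the analog noise a real microphone or line recording would add]: a signal that only ever takes 16 distinct levels survives a round trip through 4-bit quantization completely unchanged, so the extra 12 bits per 16-bit sample carry no information at all.

```python
def to_4bit(sample16):
    """Quantize an unsigned 16-bit sample down to 4 bits by
    keeping only the top 4 bits."""
    return sample16 >> 12

def to_16bit(sample4):
    """Expand a 4-bit sample back to 16 bits by shifting it
    into the top 4 bits."""
    return sample4 << 12
```

For an idealized GameBoy output, every sample is one of 16 levels [multiples of 4096 in 16-bit terms], so `to_16bit(to_4bit(s))` reproduces `s` exactly: the quantization is lossless on this material, which is the intuition behind preferring a structured representation over perceptual coding here.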

Links
Network Musical Performance
sfront
GameBoy Music Workshop [the first section is where i got my information on the GameBoy's synthesis method.]