tulpasynth is a real-time collaborative music creation system.
It was initially developed by Colin Sullivan during the Music 256A course at Stanford’s CCRMA. It is named “tulpasynth” in the spirit of creation without boundaries.
tulpasynth runs in the browser. Users click on the page to add shapes which represent sounds. These shapes can be manipulated to modify the sound in various ways. There is a playhead which loops from left to right, and when it reaches a shape, the server renders the corresponding sound. The shapes can be manipulated by any connected client at any time.
The application is currently built for collaborative use on a local network, but could easily be adapted for use over the internet; see the notes on future work below for details.
The backend uses the elegant “websocketpp” library to synchronize the models between server and clients.
To build tulpasynth, download the source from GitHub, run make on the server-side code, then serve the client/public folder for the client-side code.
./configure will clone and build the “Boost” C++ library, which takes quite a while. If you already have Boost installed and don’t want to wait, you can edit the relevant Makefiles accordingly. The configure script will also attempt to install the “jsoncpp” library, which requires “scons”.
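Putting the steps above together, the full build sequence is roughly the following (the repository URL here is a placeholder, not the real one — substitute the actual GitHub location):

```shell
# Fetch the source (placeholder URL -- use the actual tulpasynth repository):
git clone https://github.com/<user>/tulpasynth.git
cd tulpasynth

# Build the server-side code; ./configure clones and builds Boost,
# so the first run takes quite a while:
./configure
make
```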
To use tulpasynth after a successful build, start up the server executable. Then host the client/public/ directory with an HTTP server on the same IP address. I like Marak Squires’ “http-server”, but that requires Node.js.
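Concretely, the two steps might look like this — note that the server binary’s path below is hypothetical, since the executable that make produces isn’t named here:

```shell
# Start the tulpasynth server in the background
# (hypothetical path; use the binary your make step actually produced):
./server/tulpasynth &

# Host the client files on port 80 so the bare http://<ip> URL works;
# binding to a privileged port may require sudo:
npm install -g http-server
http-server client/public -p 80
```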
Then open a recent Chrome build to http://myipaddress and all should be well :)
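If the page doesn’t load, one quick sanity check is to confirm the client files are actually being served before blaming the browser (this assumes you are on the serving machine itself; adjust the host and port to match your setup):

```shell
# Request only the response headers; an HTTP/... status line means
# the static file server is up and reachable:
curl -sI http://localhost:8080/ | head -n 1
```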
It would be wonderful to spend more time on the graphics, specifically animations. I like the idea of sound and interaction influencing the visual aesthetics.
This application could easily be modified to send audio to remote clients and render it there. The main challenge seems to be figuring out how to optimize uncompressed audio streaming over WebSockets.