SLOrkitecture

From CCRMA Wiki

Network Architecture for the Stanford Laptop Orchestra (SLOrk)

David Bao, Isak Herman, Craig Hanson

Introduction

During the Spring of 2008, the Stanford Laptop Orchestra (SLOrk) was established at the Stanford University Center for Computer Research in Music and Acoustics (CCRMA). The orchestra comprises up to 20 workstations, each consisting of an Apple MacBook laptop, a custom hand-assembled hemispherical speaker array, and various controllers. Each laptop is essentially transformed into an instrument by software patches, currently written in ChucK, a strongly-timed, on-the-fly audio programming language.

One of the greatest difficulties in performing as an orchestra arises when a piece depends on timing across the network. Currently, network-driven pieces use a single server that builds the network connections from a hand-coded list of clients. This approach to setting up the network is extremely tedious and has caused numerous issues during performances. For example, misspelled computer names and computers accidentally omitted from the client list have inevitably led to delays in the middle of performances.

The motivation for this project was to develop a more automated system for establishing and maintaining the orchestra’s network. Such a system offers multiple advantages. First, network reliability will increase: depending on an automatic system, as opposed to a hand-coded one, for establishing connections will drastically reduce the errors encountered in the past. In addition, such an architecture would help streamline the development of additional network-based pieces. If the resulting architecture is open, expandable, and well documented, composers may be more inclined to push network-based pieces past the simple clock synchronization currently in use.


Design

The SLOrkitecture is designed as an underlying program that sets up the network architecture for performances by the Stanford Laptop Orchestra. It uses TCP to set up the initial connections and provides a standard base on which new pieces can build their own networks. In essence, it maintains a network foundation separate from whatever a given piece actually uses for its network communication.

The design of the SLOrkitecture follows a hub-and-spoke model for the relationship between the server and its clients. When a new client starts, it connects to the server and registers itself as a client of the network. The client then waits for control messages from the server and requires no further input from the user until a piece has begun.
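As a concrete illustration, the registration step of this hub-and-spoke model could be sketched as follows. The port number, the newline-delimited message format, and the "REGISTERED" acknowledgment are illustrative assumptions, not the actual SLOrkitecture protocol; Python is used here only because the article does not specify the implementation language.

```python
import socket
import threading

class HubServer:
    """Minimal sketch of a hub-and-spoke registration server.

    Assumptions (not from the SLOrkitecture spec): clients send one
    newline-terminated machine name on connect, and the server replies
    with a "REGISTERED" line before any control messages.
    """

    def __init__(self, host="127.0.0.1", port=9000):
        self.clients = {}              # machine name -> open TCP socket
        self.lock = threading.Lock()
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((host, port))
        self.sock.listen()
        self.port = self.sock.getsockname()[1]  # actual port (useful if port=0)

    def serve_forever(self):
        # Accept each new client and register it on its own thread.
        while True:
            conn, _ = self.sock.accept()
            threading.Thread(target=self._register, args=(conn,),
                             daemon=True).start()

    def _register(self, conn):
        # First line from a new client is taken to be its machine name.
        name = conn.makefile("r").readline().strip()
        with self.lock:
            self.clients[name] = conn
        conn.sendall(b"REGISTERED\n")

    def broadcast(self, message):
        # Push a control message (e.g. "start piece X") to every client.
        with self.lock:
            for conn in self.clients.values():
                conn.sendall(message.encode() + b"\n")
```

Once registered, a client simply blocks on its socket waiting for broadcast control messages, which matches the "no further input from the user" behavior described above.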

Initially, the aim of the SLOrkitecture was to establish a standard method that all SLOrk performance pieces would use for network connections during the piece itself. The current design departs from this paradigm, however, in order to avoid redundancy. Most pieces currently in use implement their own UDP-based network in ChucK, sending OSC messages across the orchestra. It was therefore unnecessary to create a new system that would achieve essentially the same goal, so the SLOrkitecture leaves the actual network setup for a piece to the piece itself.

TCP was chosen as the protocol for the underlying network because of its reliability. It requires that both the server and the client open a socket for two-way communication that favors message integrity over speed. This suits the SLOrkitecture because the messages sent across the TCP network are not especially time-sensitive, and they are small enough that retransmission overhead should not be an issue.

Error Scenarios

Network reliability will always be an issue, regardless of the architecture’s design. Clients are liable to lose their connection with the Server during a performance, so measures have been put in place to recover gracefully from these situations.

The general paradigm for the network is that if a Client is able to register itself with the Server, then it will be able to perform pieces with the orchestra as a whole. In other words, the TCP connection between the Client and Server must be maintained for the Client to receive the control messages needed to begin performing with the orchestra. Consequently, the best possible recovery from losing the TCP connection before a piece begins is to restart the Client so that it can try to reconnect with the Server. Most importantly, once a network-based performance piece has started, a Client that was not previously holding a steady TCP connection with the Server will not be able to join the rest of the orchestra.

One case, however, is handled by the SLOrkitecture design: if a Client loses its network connection to the Server in the middle of a performance piece, the Client can simply restart, reconnect with the Server, and immediately resume performing. This works because of two considerations taken into account when a piece begins. First, if the Client was performing with the orchestra in the first place, the Server still has the Client’s information in its Clients list, so the Server code can rebuild any network connections specific to the piece. Second, the Server maintains its state and knows which piece is currently running, so it can send a control message instructing a reconnecting Client to automatically rejoin the orchestra.
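The recovery logic described above can be sketched in a few lines: the server keeps its client list and current-piece state across connections, and treats a registration from an already-known name during a piece as a reconnect. The message names ("REGISTERED", "REJOIN <piece>") are hypothetical, since the article does not specify the actual control vocabulary.

```python
import socket
import threading

class ReconnectAwareServer:
    """Sketch of mid-piece recovery: a Client that reappears while a
    piece is running is told which piece to resume.

    The control-message vocabulary here is an illustrative assumption.
    """

    def __init__(self):
        self.known_clients = {}    # machine name -> last socket seen
        self.current_piece = None  # None until a piece has started
        self.lock = threading.Lock()

    def handle_registration(self, name, conn):
        with self.lock:
            # A name already on the list means this is a reconnect,
            # not a first-time registration.
            rejoining = name in self.known_clients
            self.known_clients[name] = conn
            piece = self.current_piece
        conn.sendall(b"REGISTERED\n")
        if rejoining and piece is not None:
            # Tell the returning Client which piece to resume.
            conn.sendall(("REJOIN %s\n" % piece).encode())
```

The key design point mirrored here is that the Clients list and the current-piece state outlive any individual TCP connection, which is exactly what makes restart-and-reconnect a viable recovery strategy.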


Future Work

Up to this point, work on this project has been limited to a proof of concept in which the Server and Client run on the same machine. The next step in the development of the SLOrkitecture is therefore to port the software onto several of the SLOrk stations for testing. Installing the necessary software and libraries onto the machines should be simple; the setup steps can be found in Appendix I. Given the functionality demonstrated in single-machine testing, the SLOrkitecture should install easily across the orchestra and operate as described.

In addition, several extensions to the SLOrkitecture may be possible in the future. Currently, the only information the Server has about each Client is its machine name. The design could be extended, however, so that each Client supplies additional information to the Server, such as a user name or its physical location relative to the rest of the orchestra. Such extra data could give the Server finer control over the messages it sends: for example, the Server could assign different Clients to play different instruments based on their physical locations. Another extension building on this information would be a GUI for the Server to manage the Clients.
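One way such an extended registration might be encoded is sketched below. The JSON format, the field names, and the seat-to-instrument mapping are all hypothetical; the article only proposes that extra metadata could be sent, not how.

```python
import json

def make_registration_message(machine, user=None, position=None):
    """Build a hypothetical extended registration payload.

    Only the idea (extra per-Client metadata) comes from the article;
    the JSON encoding and field names are illustrative assumptions.
    """
    info = {"machine": machine}
    if user is not None:
        info["user"] = user
    if position is not None:        # e.g. a seat index on stage
        info["position"] = position
    return json.dumps(info) + "\n"

def assign_instrument(info, layout):
    # Example of the Server using position data: map each seat to an
    # instrument via a piece-specific layout table, with a fallback.
    return layout.get(info.get("position"), "default")
```

A GUI for managing Clients could then simply render the same metadata dictionary the Server already holds for each connection.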