Downloads
Roil_2.0.zip - All Project Files
Roil_2.0_031610_1644.aif - Demonstration performance

Summary

Roil_2.0 is a computer-mediated live performance system. Its aesthetic foundation derives from "Roil", a piece created for an earlier project that used accelerometer data to drive an FM synthesis-based drone performance. Download Roil here.

After the experience of creating Roil, I wanted to design an instrument that offered the same multidimensional sensitivity of control as an accelerometer. I am interested in interaction with physical objects and the types of gestures those objects invite. In my experience, objects that possess external beauty generate a presence unto themselves, and thus influence the resulting musical interaction. With these criteria in mind, I created my custom instrument. Throughout the process, I knew that my 220b project would make heavy use of this device.

Initially, it seemed obvious to me that the first sound I should generate with the touchboard would be the output of the Roil patch, using my new controller as the input. However, the inception of my 220b project coincided with the completion of the device, and so I created Roil_2.0 completely from scratch as a computer-mediated performance system.

The system consists of four main sections:

  • Control input
  • Sound synthesis
  • Computer-mediation
  • Visual feedback

Now for a detailed description of each of these sections.

The Control Input Section

Each of the sliders in this section corresponds to one of the 12 analog inputs coming from the touchboard. The toggles on top correspond to the 8 LEDs, and the buttons to the left show the activity of the encoder. The controls are mapped through a quadratic curve for better resolution in light-touch gestures. The output of the faders in this section goes to the sound synthesis section.
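
As a rough illustration of that mapping, the sketch below squares a normalized sensor value so that light touches are spread over more of the output range (the function name and the 10-bit input range are my assumptions, not details from the actual patch):

    def quadratic_map(raw, in_max=1023.0):
        """Map a raw sensor reading onto a quadratic curve."""
        x = max(0.0, min(raw / in_max, 1.0))  # normalize to [0, 1]
        # Squaring compresses small inputs, so the low end of the range
        # (light touches) gets finer effective resolution
        return x * x

    # Example: a light touch of 100/1023 maps to ~0.0096 rather than ~0.098
    print(quadratic_map(100))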

The Sound Synthesis Section

The four touch-fader channels of the touchboard control four FM synthesis voices defined in this section. The pitch of each voice is determined by the linear position of the touch along each strip. The output of the position sensor is fed into a lookup table that outputs the actual pitch to play (the pitches used are determined by the computer-mediation section, described below). Each FM voice is a simple two-operator scheme whose modulator frequency is also determined by the computer-mediation section. The index of modulation is determined by the pressure input of each touch fader.
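
A minimal sketch of one such two-operator voice, written with numpy for clarity (the sample rate and parameter names are illustrative assumptions; the actual voices are defined in the patch itself):

    import numpy as np

    SR = 44100  # sample rate; an assumption, not taken from the patch

    def fm_voice(carrier_hz, mod_hz, index, dur=1.0):
        """Two-operator FM: a sine carrier phase-modulated by a sine modulator.

        carrier_hz -- pitch looked up from the current pitch set
        mod_hz     -- modulator frequency set by the computer-mediation section
        index      -- index of modulation, driven by the fader's pressure input
        """
        t = np.arange(int(SR * dur)) / SR
        modulator = np.sin(2 * np.pi * mod_hz * t)
        return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

    # Harder pressure raises the index, adding sidebands and brightening the tone
    block = fm_voice(carrier_hz=220.0, mod_hz=330.0, index=2.5)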

The Computer-Mediation Section

The pitches available to the player at any given point in time are taken from a set that defines a mode or chord. A series of these pitch sets was precomposed and arranged on a timeline; the same was done for FM modulation frequencies. As time advances, the pitch sets are swapped out and the player must adapt accordingly. The computer's involvement goes further: it monitors all input data coming from the surface of the touchboard. Certain gestures are counted as "activity" and accumulated over a period of time. If the activity threshold is crossed, the computer does not advance the timeline but remains on the current pitch set or FM frequency. Because of this behavior, a considerable amount of activity is required if the player prefers to remain in a particular harmonic or timbral 'space'. In this way, the performance is always moving, whether by advancing through sonic landscapes or 'roiling' within one.
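
The hold-versus-advance behavior might be sketched roughly as follows (the threshold value, window length, and class layout are all my assumptions; the actual behavior lives inside the patch, and an analogous structure would govern the FM modulation frequencies):

    from collections import deque

    class Mediator:
        """Advance a timeline of pitch sets unless recent activity is high."""

        def __init__(self, pitch_sets, threshold=20, window=4.0):
            self.pitch_sets = pitch_sets  # precomposed timeline of pitch sets
            self.position = 0
            self.threshold = threshold    # activity needed to hold the timeline
            self.window = window          # accumulation period, in seconds
            self.events = deque()         # timestamps of counted gestures

        def register_gesture(self, now):
            """Record a touchboard gesture that counts as 'activity'."""
            self.events.append(now)

        def tick(self, now):
            """Called on each timeline step; returns the current pitch set."""
            # Discard gestures that fall outside the accumulation window
            while self.events and now - self.events[0] > self.window:
                self.events.popleft()
            # Enough recent activity holds the timeline; otherwise it advances
            if len(self.events) < self.threshold:
                self.position = min(self.position + 1, len(self.pitch_sets) - 1)
            return self.pitch_sets[self.position]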

The Visual Feedback Section

It is necessary for the performer to know what is going on in the system as the performance progresses. For example, the performer may want the system to 'sit still' in a given sound space while keeping the sonic character that of a static drone. Since a minimum amount of input is required to keep the system 'sitting still', the mediator's time clock and event count are displayed so that the performer can effectively 'play' the system. In addition, a timer was created to show how much time has elapsed over the course of the performance.