Create a utility that allows performers to request, encourage, and utilize audience participation in their concerts.
We want to build a system that invites audience participation as a way to engage the audience in the performance.
The final project will be an interaction between the performer and the audience. As a demo, we will have the performer(s) use samples and voice to create auditory and visual themes that evolve with audience participation.
The performer will have a graphical touch interface, most likely an OSC application on an iPhone or iPod Touch. This will allow the performer to change how much control the audience has over the performance.
The performer will also have a microphone as input to the system, allowing speech, singing, and rhythmic input.
Using the microphone both to trigger pre-queued samples and to provide vocal input, the performer will create musical themes. These will be turned into visuals through signal processing, and both the audio and the visuals will be mutable through audience participation.
Audience members will have an OSC application on their phones that lets them interact and engage with the performance in real time, in a subtle but noticeable way.
There will be a visual display that shows both performer and audience participation. Base content will be generated from the performer's audio output; visual modulations such as rotation, scaling, color, and changes of generating algorithm will be audience-influenced.
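As a rough sketch of the visual parameter state the audience would modulate (the OSC addresses and ranges here are hypothetical placeholders, not a settled protocol):

    import math
    from dataclasses import dataclass

    @dataclass
    class VisualState:
        rotation: float = 0.0   # radians, audience-influenced
        scale: float = 1.0      # multiplier on the base content's size
        hue: float = 0.0        # 0..1, mapped onto a color wheel
        algorithm: int = 0      # index into the generating algorithms

    state = VisualState()

    def on_audience_visual(address, value):
        # Map incoming 0..1 floats onto the visual modulations; the
        # drawing code would read `state` once per frame.
        if address == "/audience/rotate":
            state.rotation = value * 2 * math.pi
        elif address == "/audience/scale":
            state.scale = 0.5 + value        # 0.5x .. 1.5x
        elif address == "/audience/hue":
            state.hue = value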
The microphone will be used to input an audio signal into the system. Signal processing techniques will determine whether that signal passes some predefined threshold. This allows the performer to sing and talk into the microphone as normal, but also, by controlling their own volume, to trigger pre-made samples and vocal effects. In this way they can provide their own accompaniment.
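A minimal sketch of the volume trigger, assuming audio arrives in numpy blocks and play_sample() is a hypothetical hook into the sampler; the hysteresis keeps ordinary speech from re-triggering on every block:

    import numpy as np

    THRESHOLD = 0.3   # normalized amplitude; tuned by ear

    def rms(block):
        # Root-mean-square level of one block of samples.
        return float(np.sqrt(np.mean(np.square(block))))

    armed = True  # re-arm only after the level falls back down

    def process_block(block, play_sample):
        # Fire a pre-made sample when the performer's level crosses the
        # threshold; quieter singing/speech below it passes untouched.
        global armed
        level = rms(block)
        if armed and level > THRESHOLD:
            play_sample()                   # hypothetical sampler hook
            armed = False
        elif level < THRESHOLD * 0.5:       # hysteresis band
            armed = True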
The touch input will be used to modulate the amount of audience participation the performer is looking for. This will be either an overall audience "mute" or individual scaling of the audience input to both the audio and the visuals.
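One way this could look in code (a sketch; the /performer/... addresses are assumptions, not a fixed protocol):

    # Performer-side mute/scale over all incoming audience control values.
    audience_gain = 1.0
    audience_muted = False

    def on_performer_message(address, value):
        global audience_gain, audience_muted
        if address == "/performer/audience_level":
            audience_gain = value           # 0..1 fader on the touch app
        elif address == "/performer/mute":
            audience_muted = bool(value)

    def scale_audience(value):
        # Applied to every audience control before it reaches audio/visuals.
        return 0.0 if audience_muted else value * audience_gain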
The audience will be able to add and manipulate small visual elements through OSC signals from a touch or text input system. Users need to be able to pick out their own contributions to the display through unique or semi-unique color assignments.
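One simple way to get semi-unique colors is to hash each participant's network address or device ID into a hue, for example:

    import colorsys, hashlib

    def color_for(client_id: str):
        # Stable, semi-unique hue per participant; saturation/value fixed
        # so every contribution stays bright enough to spot on screen.
        h = int(hashlib.md5(client_id.encode()).hexdigest(), 16)
        hue = (h % 360) / 360.0
        r, g, b = colorsys.hsv_to_rgb(hue, 0.8, 0.9)
        return (int(r * 255), int(g * 255), int(b * 255))

    print(color_for("10.0.1.17:9000"))   # e.g. one phone on the local network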
The audience will also be able to add or manipulate small auditory elements through OSC signals. How much is allowed will be determined by the performer, based on how much audience interaction they want at various times during the performance. In the audio domain, the audience will be able to control the saturation of various filters on the voice and/or other audio output.
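As one plausible reading of "saturation", a soft-clipping stage whose drive is the audience's 0..1 control (a sketch assuming numpy sample blocks; the control value would first pass through the performer's scaling above):

    import numpy as np

    def saturate(block, amount):
        # `amount` is the audience's 0..1 control after performer scaling.
        # At 0 the signal passes clean; at 1 it is driven hard into tanh.
        drive = 1.0 + amount * 9.0                    # 1x .. 10x input gain
        wet = np.tanh(block * drive) / np.tanh(drive)
        return block * (1 - amount) + wet * amount    # dry/wet crossfade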
There will be a visual cue in the visualization that shows the audience how much audio participation the performer is asking for.
The main new portion of this project will be the underlying system that incorporates audience participation into the performance. On the audience side this will be as simple as mapping one of the freely available OSC controller apps to the inputs of the system.
On the system side, things will be a little more complicated. Since we want this to be modularized so that other programmers could conceivably tie it into their own systems, we need to abstract the users' input from the actual controls of the system. It will then be a matter of mapping the types of messages that the OSC iPhone controllers output to a few kinds of generalized controls that the system will output as events. These will include an event that outputs an integer between 0 and n, an event that gives a float between 0 and 1, and an event that triggers on/off messages. Programmers will then use these events to change various parameters within their musical performance.
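A sketch of that abstraction layer, assuming the python-osc package (any OSC library would do) and the /1/fader1-style addresses that common iPhone OSC controller apps send:

    from pythonosc import dispatcher, osc_server

    listeners = {"int": [], "float": [], "trigger": []}

    def subscribe(kind, callback):
        listeners[kind].append(callback)

    def emit(kind, value):
        for cb in listeners[kind]:
            cb(value)

    def make_float_handler():
        # Faders/accelerometers usually already send 0..1; clamp anyway.
        return lambda addr, v: emit("float", max(0.0, min(1.0, v)))

    def make_int_handler(n):
        # Quantize a 0..1 control into the integers 0..n.
        return lambda addr, v: emit("int", int(round(v * n)))

    def make_trigger_handler():
        # Buttons send 1 on press, 0 on release; fire only on press.
        return lambda addr, v: emit("trigger", None) if v else None

    disp = dispatcher.Dispatcher()
    disp.map("/1/fader1", make_float_handler())   # addresses depend on the app
    disp.map("/1/rotary1", make_int_handler(7))
    disp.map("/1/push1", make_trigger_handler())

    server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 8000), disp)
    # server.serve_forever()   # blocking; run on a thread in a real system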
As an example, we could map the iPhone's accelerometer to a float between 0 and 1 that controls the saturation of a filter on the signal. As another example, we could map the integer event to a set of samples and the trigger event to playing the selected sample, so that two audience members would get different samples played.
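Continuing the sketch above, those two mappings might be wired up like this (the sample list and print placeholders stand in for real playback and DSP):

    samples = ["kick.wav", "snare.wav", "vox1.wav", "vox2.wav"]
    selected = {"index": 0}

    def set_filter_saturation(amount):
        print(f"filter saturation -> {amount:.2f}")   # placeholder for DSP

    def on_int(i):
        selected["index"] = i % len(samples)          # pick a sample slot

    def on_trigger(_):
        print("play", samples[selected["index"]])     # placeholder playback

    subscribe("float", set_filter_saturation)
    subscribe("int", on_int)
    subscribe("trigger", on_trigger)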
Given abundant time, it would be interesting to explore how multiple audience members could interact with the same auditory or visual object at the same time, but this will likely fall outside the scope of this project.
Success will be determined by the engagement of the audience. If we can actively engage people in the performance, we will succeed.
We will test this by having a performance where several iPhone-type devices are already set up and audience members can pass them around or walk up to a podium where the devices will sit. Since we're interested in the audience participation, there isn't a specific need to have a real performance; it's possible that a recorded audio/visual track with the correct tie-ins will be sufficient to draw people in to interact. This may also make people less self-conscious about interacting with the performance.
Jesse Cirimele