
DO MOBILE PHONES DREAM OF ELECTRIC ORCHESTRAS?

Ge Wang Stanford University Center for Computer Research in Music and Acoustics (CCRMA) ge@ccrma.stanford.edu

Georg Essl Technical University of Berlin | Deutsche Telekom Laboratories georg.essl@telekom.de

Henri Penttinen Helsinki University of Technology Department of Signal Processing and Acoustics henri.penttinen@tkk.fi

ABSTRACT

MoPhO is the newly established Mobile Phone Orchestra of CCRMA, which has since spawned an offspring ensemble in Helsinki and has fellow ensemble members in Berlin. It is the first repertoire- and ensemble-based mobile phone performance group of its kind. We describe the motivation behind and the making of such an ensemble, along with the repertoire of MoPhO's first concerts, the first of which was performed in January 2008. The ensemble demonstrates that mobile phone orchestras are interesting technological and artistic platforms for electronic music composition and performance.

1. INTRODUCTION

MoPhO is the Mobile Phone Orchestra of CCRMA — a new repertoire-based ensemble using mobile phones as the primary musical instrument. While mobile phones have been used for artistic expression before, MoPhO is the first (to the best of our knowledge) to approach them from an ensemble/repertoire angle. It employs more than a dozen players and mobile phones, which serve as a compositional and performance platform for a dedicated and expanding repertoire. In this sense, it is the first ensemble of its kind. MoPhO was instantiated in Fall 2007 at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and performed its debut concert in January 2008. Since then it has performed in Genova, Belfast, Helsinki, San Francisco and Berlin, and has spawned a new ensemble in Helsinki.

Mobile phones are growing in sheer number and computational power. Hyper-ubiquitous and deeply entrenched in the lifestyles of people around the world, they transcend nearly every cultural and economic barrier. Computationally, the mobile phones of today offer speed and storage capabilities comparable to desktop computers from less than ten years ago, rendering them suitable for real-time sound synthesis and other musical applications. Like traditional acoustic instruments, mobile phones are intimate sound-producing devices. Compared to most instruments, they are rather quiet and have somewhat limited acoustic bandwidth. However, mobile phones have the advantages of ubiquity, strength in numbers, and ultramobility, making it feasible to hold jam sessions, rehearsals, and even performances almost anywhere, anytime. A goal of the Mobile Phone Orchestra is to explore these possibilities as a research and music-making body. We investigate the fusion of technological artifact and human musicianship, and provide a new vehicle for experimenting with new music and music-making.

We see the mobile phone orchestra idea as paralleling that of the laptop orchestra [24, 18, 25, 9]. The phones, as intimate sound sources, provide a unique opportunity to explore “mobile electronic chamber music”. The Mobile Phone Orchestra presents a well-defined platform of hardware, software configuration, and players, enabling composers to craft mobile instruments and write music tailored to such an ensemble. Furthermore, the combination of technology, aesthetics, and instrument building presents a potentially powerful pedagogical opportunity, which compared to laptop orchestras gains the added benefit of extreme mobility.

2. RELATED WORK

Turning mobile devices into musical instruments has already been explored by a number of researchers. Tanaka presented an accelerometer-based, custom-made augmented PDA that could control streaming audio [20]. Geiger designed a touch-screen-based interaction paradigm with integrated synthesis on the mobile device, using a port of Pure Data (PD) for Linux-enabled portable devices like iPaqs [13, 12]. SpeedDial offers interactive mapping and editing of mobile phone instruments during live performance [26]. Various GPS-based interactions have also been proposed [19, 23]. Many of these systems used an external computer for sound generation.

Using a mobile phone as a physical musical instrument was pioneered by Greg Schiemer [17] in his PocketGamelan instrument. At the same time there has been an effort to develop ways of allowing interactive performance on commodity mobile phones. CaMus is a system that uses the camera of mobile phones to track visual references for performance [16]. CaMus2 extended this to allow multiple mobile phones to communicate with each other and with a PC via an ad hoc Bluetooth network. In both cases an external PC was still used to generate the sound.

The MobileSTK port of Perry Cook’s and Gary Scavone’s Synthesis Toolkit (STK) [4] to Symbian OS [7] is the first full parametric synthesis environment available on mobile phones. It was used in combination with accelerometer and magnetometer data in ShaMus [8] to allow purely on-the-phone performance without any laptop. Specifically, the availability of accelerometers in programmable mobile phones like Nokia’s N95 or Apple’s iPhone has been an enabling technology for more fully considering mobile phones as meta-instruments for gesture-driven music performance. The main idea of the mobile phone as a meta-instrument is to provide an as-generic-as-possible platform on which the composer can craft his or her artistic vision. At the same time, the abilities offered by the phone have to be, in a sense, stabilized to offer a persistent repertoire for an ensemble.

There is also an earlier body of work using mobile devices as part of artistic performances. In these, mobile phones did not yet play the role of a traditional instrument within a performance ensemble. Golan Levin’s DialTones performance is one of the earliest concert concepts to use mobile devices as part of the performance [14].

The concept of the performance is that the audience itself serves as part of the sound source display, and the localization of people in the concert hall is part of the performance. A precomposed piece is played by dialing the phone numbers of various members of the audience. Visual projections display the spatial patterns formed by the currently sounding telephones.

The main conceptual use of mobile phones in this concert was passive yet spatial in nature, blurring the boundary between performer and audience. The art group Ligna and Jens Röhm created an installation performance called “Wählt die Signale” (German for “Dial the Signals”). The performance used 144 mobile phones arranged in an installation space. People could call the publicized phone numbers, and the resulting piece would be broadcast over radio. Unlike Levin’s piece, the compositional concept is aleatoric, meaning that the randomness of the calling participants is an intended part of the concept [2].

A performance installation that used mobile technology indirectly, and predates both Levin’s and Ligna’s work, is Wagenaar’s “Kadoum” [2]. Here heart-rate sensors were attached to 24 Australians. The signals were sent via mobile phones to other international locations, where water bucket installations excited by electric motors displayed the activity of the Australians. Mobile technology was used primarily for remote wireless networking, and the mobile devices themselves were not an inherent part of the concept of the piece but rather served as a means of wireless communication.

Wagenaar’s piece serves as an example of what we will call “locative music”: music in which distributed location plays a conceptual role in a piece. Some authors think of mobile music making as referring to the mobility of the performance itself, not just the potential for such mobility. This view is reviewed by Gaye et al. [10], who work with the definition “Mobile music is a new field concerned with musical interaction in mobile settings, using portable technology”. Atau Tanaka and Lalya Gaye provide some of the more prominent examples of locative music. The term “locative music” is close to the term “locative media” used by Tanaka and Gemeinboeck in this context [21].

Gaye’s Sonic City used a variety of sensors attached to a special jacket. These sensors pick up environmental information as well as body-related signals, which in turn modify the music heard through headphones by the person wearing the jacket. For example, the jacket could pick up signals such as heart rate, arm motion, pace, and compass heading. Sensed environmental data included light level, noise level, pollution level, temperature, and electromagnetic activity. As the location and the environment changed, the sonic experience varied with it [11].

Tanaka explored locative music in various ways. A project called Malleable Mobile Music explored turning passive networked music sharing into an active endeavor by embedding an interactive music engine with associated rendering capabilities in the mobile devices of participants. Tanaka’s installation piece net d’erive took this further into a performance concept. The installation consisted of two settings: one in a gallery with large-scale video projections, and a mobile one using scarves with mobile phones embedded at both ends, which were handed out to the audience of the performance. Through headphones participants heard instructions, while the phones took pictures and recorded sounds of the environment the audience explored. Because participants could follow or deviate from the instructions, the piece maintained an aleatoric component reminiscent of the Ligna piece discussed earlier. Through GPS and wireless communication, their position and information were traced and displayed in the gallery space, where the visuals and sounds changed with the choices made by the moving audience [22].

Recently the emergence of the iPhone has sparked increased interest and activity. Popular commercial instruments like Smule’s Ocarina [29] or ZoozBeat [30] have emerged. The iPhone is also an attractive platform for research, offering a blend of fast computational power, rich sensory capabilities, and a very clean, low-latency audio architecture [31]. Hence we also see projects such as games using tactile feedback [27]. An earlier version of this article appeared in [28].

3. THE MOPHO ENSEMBLE

The Mobile Phone Orchestra of CCRMA consists of 16 mobile phones and players and, at this early stage, has a repertoire of 8 publicly premiered pieces ranging from scored compositions and sonic sculptures to structured and free improvisations. So far, all pieces have solely used the phones’ onboard speakers (and occasional human vocalization) for sound production, with no additional amplification or reinforcement, keeping true to our notion of “mobile electronic chamber music” and the potential of ultra-mobility.

Currently, MoPhO uses Nokia N95 smart phones and Apple’s iPhones and iPod Touches, though in principle we are open to using any mobile device. It is worth noting the onboard features of the N95 and the iPhone to provide an assessment of the capabilities of today’s phones. The N95 offers 1) a five-megapixel video/still camera, 2) a second front-side camera, 3) a microphone, 4) stereo speakers, 5) a 20-button keypad, 6) a 3-axis accelerometer, 7) Bluetooth, 8) Wi-Fi, 9) a 320x240 resolution color LCD display, and 10) a 330 MHz CPU. In terms of software, the phone runs Symbian OS, with a freely available and extensive C++ software development kit that provides access to all of the above hardware features, a compiler, an emulator, and an optional integrated development environment. The iPhone replaces the 20-button keypad with a multi-touch screen capable of sensing 5 concurrent moving contacts, and offers a larger screen area of 480x320 pixels and a 412 MHz CPU. As operating system and software development environment, the iPhone uses a modified version of Mac OS and shares the same development environment as other Apple platforms, Xcode, which allows for on-device and simulator debugging and a fully integrated development environment.

One nice property of the iPhone is its very clean, low-latency audio architecture, especially in full-duplex operation. This allows interactive play based on microphone signals with low-latency audio output and hence high performability. Our tests show that a microphone-to-playback latency as low as 30 ms is possible. iPod Touches do not easily support microphone input, but still serve as an attractive platform thanks to the same audio pipeline, matching architecture, and multi-touch, accelerometer, and other sensor capabilities.
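As a back-of-the-envelope check (a sketch with assumed buffer sizes, not measured iPhone values), the buffering component of such a round trip can be estimated from the hardware I/O buffer size and the sample rate:

    # Rough round-trip buffering estimate in Python. The 256-frame
    # buffer size and two-buffers-in-flight model are assumptions.
    sr = 44100.0          # sample rate in Hz
    frames = 256          # per-buffer size in frames (assumed)
    in_flight = 2         # one input plus one output buffer
    latency_ms = 1000.0 * in_flight * frames / sr
    print(latency_ms)     # ~11.6 ms before driver/converter overhead

The remainder of the observed 30 ms budget is then attributable to driver, converter, and scheduling overhead.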

For N95 phones, MoPhO also employs the Python virtual machine application, which allows us to write audio synthesis engines in C++ and combine them with Python front-end GUIs. The potential of the phone is indeed immense, though in our experience so far, the development process presents unique overheads in terms of licensing, and a general awkwardness naturally associated with a still-maturing platform. While licensing and provisioning still involve some tedium on the iPhone, the overall platform does feel more mature, which has certainly kick-started a lot of projects in the area.

The ensemble also performs a number of pieces using commercial musical applications for iPhones. Specifically, we use Ocarina and Leaf Trombone by Smule as well as BeBot by Normalware. Ocarina and Leaf Trombone are iPhone-based wind instruments, where the microphone signal is used to detect the turbulence sound created by blowing. This, in conjunction with multi-touch fingering and sliding, offers rich performance possibilities.
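A minimal sketch of this kind of breath detection (our own illustration, not Smule's implementation; the noise floor and scaling constants here are assumptions) maps the RMS energy of each microphone block to a breath-strength control:

    import numpy as np

    def breath_level(mic_block, noise_floor=0.01, span=0.2):
        # Blowing across the microphone produces broadband turbulence,
        # so block energy well above the noise floor reads as breath.
        rms = np.sqrt(np.mean(mic_block ** 2))
        return min(1.0, max(0.0, (rms - noise_floor) / span))

The resulting 0..1 value can then drive the amplitude or brightness of the synthesized wind tone.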

The radiated power of typical mobile devices is somewhat limited. While it is possible to play small concerts without any amplification if enough devices are used, the size of the venue and its acoustical properties impose restrictions on the feasibility of performing with unamplified devices. Hence we have explored means of amplifying the mobile devices while retaining the localized nature of the radiating sound, and hence the intimacy and mobility of the performance. To this end we chose portable speakers (Altec Lansing Orbit) attached to the performers. A pair of these speakers was sewn onto fingerless biker gloves. This allows the performer to retain good control of the device, while also offering control over the radiation directivity by turning one’s wrists in performance. We also used speakers attached at the neck or belt. This solution overcomes the amplitude problem for most venues and was successfully used in a large concert space, the concert venue at Macworld 2009, which seats over 1000 people.

The Mobile Phone Orchestra performed its first public concert on January 11th, 2008 to a packed audience at the CCRMA Stage at Stanford University. It featured the 8 initial pieces of the MoPhO repertoire, all composed especially for the mobile phones. Since then new repertoire has been added and concerts have been performed in Genova, Belfast, Helsinki, San Francisco and Berlin.

4. ORIGINAL REPERTOIRE

4.1. Drone In/Drone Out

Drone In/Drone Out (Figure 2; a.k.a. ET:Drone:Home) is a structured improvisation for 8 or (many) more players/phones, composed and programmed by Ge Wang. Based on the laptop orchestra piece Droner by Dan Trueman (see [18]), Drone In/Drone Out explores both individual and emergent timbres synthesized by the phones and controlled by the human players. Each phone generates sound via real-time FM synthesis, mapping the two accelerometer axes to spectral richness (via index of modulation, up/down axis) and to subtle detuning of the fundamental frequency (left/right axis). This results in rich, controllable low-frequency interference between partials, and creates a saturating sonic texture that permeates even large performance spaces despite the limited output power of onboard speakers. Additionally, preprogrammed pitches and modulation ratios (selectable via the phone’s number pad) allow the ensemble to move through a variety of harmonies and timbre-scapes, as directed by a human conductor. Furthermore, by experimenting with modulation ratios and spectral richness, the resulting partials can suggest the percept of low fundamental frequencies well beyond the limited bass response of the phone speakers.
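A minimal sketch of this accelerometer-to-FM mapping (illustrative only; the parameter ranges and base frequency are our assumptions, and block-to-block phase continuity is omitted) might look as follows:

    import numpy as np

    def fm_drone_block(tilt_ud, tilt_lr, f0=220.0, ratio=2.0,
                       sr=22050, n=1024):
        # Up/down tilt (0..1) drives the index of modulation, i.e.
        # spectral richness; left/right tilt (-1..1) detunes the
        # fundamental slightly, producing slow beating between phones.
        index = 8.0 * tilt_ud
        fc = f0 * (1.0 + 0.01 * tilt_lr)
        t = np.arange(n) / sr
        modulator = index * np.sin(2 * np.pi * ratio * fc * t)
        return np.sin(2 * np.pi * fc * t + modulator)

With many phones each detuned by a fraction of a percent, the summed partials interfere at low frequencies, which is the texture the piece exploits.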

Due to the extremely mobile nature of the phones, players may be placed almost anywhere throughout the performance area and can easily move during a performance or even play from the audience. For example, during the MoPhO debut performance at the CCRMA Stage, we began the concert with members of the ensemble sitting, disguised, among the audience. The remaining players marched in with phones droning, as the disguised players revealed themselves and moved to surround the audience (resulting in 12 players/phones). A reprise of the piece (Drone Out) closed the concert, exploring additional spatial configurations of phones before players exited the stage or returned to the audience.


4.2. TamaG

TamaG (Figure 3) by Georg Essl is a piece that explores the boundary of projecting the humane onto mobile devices while at the same time displaying the fact that they are deeply mechanical and artificial. It explores the question: how much control do we have in our interaction with these devices, or do the devices themselves at times control us? The piece works with the tension between these positions and crosses the desirable with the alarming, the human voice with mechanical noise. The alarming effect has a social quality and spreads between the performers. The sounding algorithm is a non-linear map called the circle map [5, 6], which is used in easier-to-control and hard-to-control regimes to evoke the effects of control and desirability on the one hand, and the loss of control and mechanistic function on the other.

The first regime consists of single-pitched sounds that resemble the human voice. When the non-linearity is gradually increased, the performer enters a non-oscillatory regime and the voice stutters in and out. The second regime is mechanistic noise that can also be controlled, but due to the highly non-linear behavior this control is very difficult.
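The underlying iteration is simple; a sketch follows (reading the phase out as audio is one plausible choice for illustration, not necessarily the piece's exact formulation):

    import numpy as np

    def circle_map_tone(omega=0.07, K=0.5, n=22050, theta=0.0):
        # Standard circle map iteration: small K gives near-periodic,
        # voice-like tones; raising K pushes into the noisy,
        # hard-to-control regime the piece exploits.
        out = np.empty(n)
        for i in range(n):
            theta = (theta + omega
                     - (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)) % 1.0
            out[i] = np.sin(2 * np.pi * theta)
        return out

Here omega sets the base pitch and K is the non-linearity that the performer gradually increases.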


4.3. The Saw

The piece The Saw (Figure 4) by Henri Penttinen uses mobile phones as keyed pitched instruments. In this case the keys are mapped to the Phrygian mode. Performers are placed in a semi-circle, which allows the conductor to explore a variety of panning effects. By emphasizing simple numeric scores and very direct conducting gestures, the piece is written to be playable by trained musicians and non-trained performers alike.
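As an illustration of such a key-to-pitch mapping (the root note and key layout here are assumptions, not the piece's actual score):

    PHRYGIAN = [0, 1, 3, 5, 7, 8, 10]   # semitone offsets of the mode

    def key_to_midi(key, root=52):       # root 52 = E3 (assumed)
        # Numeric keys 1, 2, 3, ... walk successive scale degrees,
        # wrapping up an octave every seven keys.
        degree = key - 1
        return root + 12 * (degree // 7) + PHRYGIAN[degree % 7]

A numeric score then reduces to the sequence of keys to press, which is what makes the piece accessible to non-trained performers.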


4.4. The phones and the fury

The phones and the fury by Jeff Cooper and Henri Penttinen is a DJ-style, one-performer, table-top piece in which multiple phones play the roles of individual players playing looped music. The playback rate can be controlled by tilting the devices. By interweaving looped patterns, it references the cross-mixing of a DJ performance.

The solo instrument for this piece, the Pocket Shake, was created by Jarno Seppanen and Henri Penttinen. It is a wavetable synthesizer with three sine waves, whose frequencies are mapped from the three axes of the accelerometer. Fast movements result in quickly changing timbres and sine sweeps. The piece was played by one person, but more players can easily be introduced.
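A sketch of the idea (the axis-to-frequency ranges are assumed, not the instrument's actual constants):

    import numpy as np

    def pocket_shake_block(ax, ay, az, sr=22050, n=512):
        # Three sine partials, one per accelerometer axis (values
        # assumed normalized to -1..1); shaking sweeps the frequencies.
        freqs = [200.0 + 800.0 * abs(a) for a in (ax, ay, az)]
        t = np.arange(n) / sr
        return sum(np.sin(2 * np.pi * f * t) for f in freqs) / 3.0

Fast shaking makes the three frequencies change from block to block, which is heard as the quickly changing timbres and sweeps described above.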


4.5. Circular Transformations

Circular Transformations is a collaborative and experimental work by Jonathan Middleton and Henri Penttinen. The piece is composed for a mobile phone ensemble of 5-10 players and is structured in the same manner as an organum with four clausula sections. The piece takes its title from the circular patterns of a harmonograph [1] set to the ratio 5:3 (a major sixth). From the rotary shapes, Jonathan was able to translate the lines into musical patterns by mapping the actual forms of the lines into number representations. The post-production of the notes and numbers was done in the musicalgorithms software [15]. The tones were created from a combination of slightly inharmonic FM synthesis [3] and circle map [5, 6] sounds, controlled by a simple sequencer. The piece can be played either with collective synchronization, letting the players control the timbres of their parts, or with a conductor who gives timing cues for each part. The spatialization was formed as a semi-circle with one bass player at each end and the other players situated in pairs. In addition, the pairing improved interaction between the players for creating different timbres of the same part.
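For reference, a two-pendulum harmonograph trace at the 5:3 ratio can be sketched as a pair of decaying sinusoids (the decay rate and phase offset here are arbitrary illustrative choices):

    import numpy as np

    def harmonograph_53(n=4000, decay=0.15, phase=np.pi / 4):
        # x and y oscillate at frequencies in the ratio 5:3 (a major
        # sixth); the exponential envelope mimics pendulum damping.
        t = np.linspace(0.0, 8.0 * np.pi, n)
        env = np.exp(-decay * t)
        return env * np.sin(5.0 * t), env * np.sin(3.0 * t + phase)

Plotting x against y yields the rotary figures whose line shapes were read off as number sequences for the piece.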


4.6. phoning it in

Chris Warren’s phoning it in is a mobile phone “tape piece” performance, where the performers act as diffusers in space. The piece is spatialized, and each phone carries a different component of the composition. By positioning and orienting the phones, the players diffuse the piece through the performance space. The tape composition is tailored specifically to the bandwidth of mobile phone playback, using compression and other techniques to maximize the utility of mobile phones as highly mobile distributed diffusers.


4.7. The MoPhive Quintet: Cellphone Quartet in C major, op. 24

Adnan Marquez-Borbon’s MoPhive Quintet (Figure 5) is a free-form improvisation for four or five players exploring iterative live sampling via the onboard phone microphones and speakers. At any time, players are encouraged to vocalize, capture the sound of other human players or phones, and/or play back a previously recorded clip. As the piece evolves, new vocalizations are intertwined with samples potentially passed from phone to phone via live, on-the-fly recording. The piece is carried out with the default sound recorder software provided with the phone, and compellingly and playfully suggests new group musical possibilities using common phone features.


4.8. Chatter

Ge Wang’s Chatter is a conducted improvisation for 12 (or more) players and phones, employing a simple software buffer playback that maps an axis of the phone accelerometer to playback rate. The players are distributed throughout the audience in an effort to immerse the audience in a sonic web of cell phone conversational clouds, ranging in theme from greetings to weather reports, laughter, and sheer wackiness. The source material consists of short sentences, laughter, and various guttural utterances (courtesy of Georg Essl) that are triggered via the phones’ number pads, easily permitting rhythmic interplay between phones (when desired). More recent instantiations of the piece contain utterances from nine additional speakers, mostly in German, but some also in Portuguese, Hindi, Dutch, and French (courtesy of the Berlin members of the ensemble).
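The core mapping is a variable-rate buffer reader; a sketch follows (the exponential tilt-to-rate curve and the 0.5x..2x range are our assumptions):

    import numpy as np

    def tilted_playback(clip, tilt, pos=0.0, n=512):
        # One accelerometer axis (tilt in -1..1) sets playback speed;
        # returns an audio block and the updated read position.
        # clip is assumed to be a NumPy array of samples.
        rate = 2.0 ** tilt                 # -1 -> 0.5x, 0 -> 1x, 1 -> 2x
        idx = (pos + rate * np.arange(n)) % len(clip)
        return clip[idx.astype(int)], (pos + rate * n) % len(clip)

Nearest-neighbor resampling suffices for spoken material of this kind; the pitch shift that comes with the rate change is part of the piece's character.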

4.9. Botz Groove

Botz Groove, written by Georg Essl, is a two-part call-and-response riff piece played on a pentatonic scale with the BeBot instrument. It features a backdrop over which individuals play solos in various voices provided by the BeBot instrument. The piece explores more traditional pitched performance in modern, yet conventional, syncopated rhythms combined with flexible free-form improvisation. In particular, BeBot’s PWM voice is used to evoke the feel of an electric guitar solo.

4.10. T-Fare

T-Fare, by Georg Essl, is a two-part, two-voice Ocarina piece with prerecorded sound. It is a variation on commercial mobile phone ring tones, played and reharmonized for the Ocarina. The piece is not strictly tonal. While playing in thirds and fourths, the bass of the played phrases uses the full range of the twelve-tone scale. The piece is designed to reenact traditional voiced diatonic polyphonic performance in a mobile phone ensemble. The piece starts with a few members of the ensemble planted in the audience, whose mobile phones “accidentally” ring with the well-recognized ring tone; the ensemble then responds by playing the phrase, which slowly morphs into the piece of variations on the theme, playing humorously with the boundary of expectation between what is a ring tone, and hence alerting, and what is musical performance.

5. CONCLUSION

MoPhO is a Mobile Phone Orchestra that uses programmable commodity mobile phones as its primary means of musical expression. Their computational power allows rich sound synthesis to be performed on the phones on-the-fly, and they offer a diverse set of ways to interact: via hand motion detected from accelerometer data, key input, the built-in camera as a vision-based sensor, and the microphone.

The technology is stable enough that one can form a well-defined ensemble and create a persistent repertoire. In many ways we see the development of a mobile phone ensemble as a parallel to the emergence of laptop orchestras. Mobile phones, like laptops, form a technological basis that can serve as instruments of new music performance, where the engagement with the programmable device itself constitutes the instrument, fusing the teaching of technology and art and allowing new forms of ensemble expression. Some properties of the mobile phone orchestra are rather distinct from laptop ensembles. Mobile phones are easy to transport and set up, and performances can easily be moved, performed on-the-go, or spontaneously kick-started. The typical power of the speakers of these devices allows for a chamber music quality of performance: strong enough for adequately quiet spaces while preserving the intimate instrumental qualities of the devices. Portable speakers attached to gloves, neck, or belt can overcome even this limitation and allow intimate yet potent performance in large concert venues or outdoors.

This ensemble is still in its infancy. The first concert in January 2008 provided credence to the claim that the technology is mature enough to sustain the concept of the ensemble. Since then a number of concerts have been played in Genova, Belfast, Helsinki, San Francisco and Berlin. But there are still many pieces missing. Unlike for laptops, there is very limited sound synthesis software available for mobile phones, and interfaces that are accessible to non-programmers are only just emerging. There are still many open questions about how to best make sensor data mapping, gesture recognition, and sound synthesis available to a non-technical performer. Part of the future development will have to be the extension of current software in this direction. There are also plans to start another sibling ensemble at the University of Michigan and to explore the possibility of joint remote performances, ensemble networking, and synchronized, de-synchronized, and location-based repertoire.

On the artistic side these are of course only the first few steps within this setting. The complexity of pieces is quite open-ended, as location, interconnection, and the mapping of gestures to musical sound can all contribute in diverse ways to mobile phone ensemble play. We also look forward to exploring performances with other instruments — acoustic, electric or otherwise.

We believe that this is only the beginning for mobile phone orchestras and are excitedly looking forward to diverse developments of this new emerging medium globally.

6. ACKNOWLEDGEMENTS

This project was possible thanks to the support and enthusiasm of Jyri Huopaniemi of Nokia Research Center, Palo Alto, and thanks to Nokia for providing a sizeable number of mobile phones. Jarno Seppanen provided invaluable input on mobile phone programming during a workshop taught at CCRMA in November 2007. Many thanks to Chryssie Nanou, Artistic Coordinator of CCRMA, for guidance and support throughout the project and for setting up the first concert at CCRMA. Many thanks to Brett Ascarelli and Yungshen Hsiao for documentation in the form of pictures and video footage of rehearsals and concerts, and to Rob Hamilton for his excellent support. Last but never least, hearty thanks to all the great MoPhO performers and co-composers: Steinunn Arnardottir, Mark Branscom, Nick Bryan, Jeff Cooper, Lawrence Fyfe, Gina Gu, Ethan Hartman, Turner Kirk, Adnan Marquez-Borbon, Jonathan Middleton, Diana Siwiak, Kyle Spratt, and Chris Warren. For Belfast and Genova we also featured Ajay Kapur, Adam Tindale, Ian Knopke and Ananya Misra. In Berlin we were joined by Alex Müller, Gesche Joost, Fabian Hemmert, Michael Rohs, Sven Kratz, Nicole Weber, Jan Michael Ihl, Matthias Rath, Constanze Kettliz-Profe, Tilo Westermann, Frederic Gmeiner and Susann Hamann.

7. REFERENCES

[1] A. Anthony. Harmonograph: A Visual Guide to the Mathematics of Music. Walker & Company, NY, USA, 2003.

[2] F. Behrendt. Handymusik. Klangkunst und ‘mobile devices’. Epos, 2005. Available online at: www.epos.uos.de/music/templates/buch.php?id=57.

[3] J. Chowning. The synthesis of complex audio spectra by means of frequency modulation. Journal of the Audio Engineering Society, 21(7):526–534, 1973.

[4] P. Cook and G. Scavone. The Synthesis ToolKit (STK). In Proceedings of the International Computer Music Conference, Beijing, 1999.

[5] G. Essl. Circle maps as simple oscillators for complex behavior: I. Basics. In Proceedings of the International Computer Music Conference (ICMC), New Orleans, USA, November 2006.

[6] G. Essl. Circle maps as simple oscillators for complex behavior: II. Experiments. In Proceedings of the International Conference on Digital Audio Effects (DAFx), Montreal, Canada, September 18-20, 2006.

[7] G. Essl and M. Rohs. Mobile STK for Symbian OS. In Proc. International Computer Music Conference, pages 278–281, New Orleans, Nov. 2006.

[8] G. Essl and M. Rohs. ShaMus - A Sensor-Based Integrated Mobile Phone Instrument. In Proceedings of the Intl. Computer Music Conference (ICMC), Copenhagen, 2007.

[9] R. Fiebrink, G. Wang, and P. R. Cook. Don’t forget the laptop: Using native input capabilities for expressive musical control. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), pages 164–167, New York, NY, 2007.

[10] L. Gaye, L. E. Holmquist, F. Behrendt, and A. Tanaka. Mobile music technology: Report on an emerging community. In NIME ’06: Proceedings of the 2006 conference on New Interfaces for Musical Expression, pages 22–25, June 2006.

[11] L. Gaye, R. Maze, and L. E. Holmquist. Sonic City: The Urban Environment as a Musical Interface. In Proceedings of the International Conference on New Interfaces for Musical Expression, Montreal, Canada, 2003.

[12] G. Geiger. PDa: Real Time Signal Processing and Sound Generation on Handheld Devices. In Proceedings of the International Computer Music Conference, Singapore, 2003.

[13] G. Geiger. Using the Touch Screen as a Controller for Portable Computer Music Instruments. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), Paris, France, 2006.

[14] G. Levin. Dialtones - a telesymphony. www.flong.com/telesymphony, Sept. 2, 2001. Retrieved on April 1, 2007.

[15] J. Middleton and D. Dowd. Web-based algorithmic composition from extramusical resources. Journal of the International Society for the Arts, Sciences and Technology, 41(2), accepted for publication in 2008. URL: http://musicalgorithms.ewu.edu/. Last visited: 14-01-2008.

[16] M. Rohs, G. Essl, and M. Roth. CaMus: Live Music Performance using Camera Phones and Visual Grid Tracking. In Proceedings of the 6th International Conference on New Instruments for Musical Expression (NIME), pages 31–36, June 2006.

[17] G. Schiemer and M. Havryliv. Pocket Gamelan: Tuneable trajectories for flying sources in Mandala 3 and Mandala 4. In NIME ’06: Proceedings of the 2006 conference on New Interfaces for Musical Expression, pages 37–42, June 2006.

[18] S. Smallwood, D. Trueman, P. R. Cook, and G. Wang. Composing for laptop orchestra. Computer Music Journal, 32(1):9–25, 2008.

[19] S. Strachan, P. Eslambolchilar, R. Murray-Smith, S. Hughes, and S. O’Modhrain. GpsTunes: Controlling Navigation via Audio Feedback. In Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services, Salzburg, Austria, September 19-22 2005.

[20] A. Tanaka. Mobile Music Making. In NIME ’04: Proceedings of the 2004 conference on New Interfaces for Musical Expression, pages 154–156, June 2004.

[21] A. Tanaka and P. Gemeinboeck. A framework for spatial interaction in locative media. In NIME ’06: Proceedings of the 2006 conference on New Interfaces for Musical Expression, pages 26–30, June 2006.

[22] A. Tanaka and P. Gemeinboeck. net d’erive. Project web page, 2006.

[23] A. Tanaka, G. Valadon, and C. Berger. Social Mobile Music Navigation using the Compass. In Proceedings of the International Mobile Music Workshop, Amsterdam, May 6-8 2007.

[24] D. Trueman. Why a laptop orchestra? Organised Sound, 12(2):171–179, 2007.

[25] G. Wang, D. Trueman, S. Smallwood, and P. R. Cook. The laptop orchestra as classroom. Computer Music Journal, 32(1):26–37, 2008.

Essl, G. "SpeedDial: Rapid and On-The-Fly Mapping of Mobile PhoneInstruments" In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME09), Pittsburg, PA, 2009.

Gillian, N., O'Modhrain, S. and Essl, G. "Scratch-Off: A gesture based mobile music game with tactile feedback" In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME09), Pittsburgh, PA, 2009.

Wang, G., Essl, G. and Penttinen, H. "MoPho: Do Mobile Phones Dream of Electric Orchestras?" In Proceedings of the International Computer Music Conference (ICMC08), Belfast, UK, 2008.

Wang, G. "Designing Smule's iPhone Ocarina" In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME09), Pittsburgs, PA, 2009.

Weinberg, G., Beck, A. and Godfrey, M. "ZooZBeat: a Gesture-based Mobile Music Studio" In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME09),Pittsburgs, PA, 2009.

Essl, G. and Rohs, M. Interactivity for Mobile Music Making, To appear in Organised Sound.