Rob Hamilton @ ccrma

Ph.D. Candidate in Computer-based Music Theory and Acoustics
CCRMA, Department of Music, Stanford University

rob [at] ccrma [dot] stanford [dot] edu
publications

Ph.D. Dissertation

Hamilton, R., "Perceptually Coherent Mapping Schemata for Virtual Space and Musical Method", Ph.D. Thesis, Stanford University, 2014.

Our expectations for how visual interactions sound are shaped in part by our own learned understandings of and experiences with objects and actions, and in part by the extent to which we perceive coherence between gestures which can be identified as "sound-generating" and their resultant sonic events. Even as advances in technology have made the creation of dynamic computer-generated audio-visual spaces not only possible but increasingly common, composers and sound designers have sought tighter integration between action and gesture in the visual domain and their accompanying sound and musical events in the auditory domain. Procedural audio and music, or the use of real-time data generated by in-game actors and their interactions in virtual space to dynamically generate sound and music, allows sound artists to create tight couplings across the visual and auditory modalities. Such procedural approaches, however, become problematic when players or observers are presented with audio-visual events within novel environments wherein their own prior knowledge and learned expectations about sound, image and interactivity are no longer valid. With the use of procedurally-generated music and audio in interactive systems becoming more prevalent, composers, sound designers and programmers face an increasing need to establish low-level understandings of the crossmodal correlations between visual gesture and sonified musical result, both to convey artistic intent and to present communicative sonifications of visual action and event. more...

Music in Virtual Worlds

Hamilton, R., "The Procedural Sounds and Music of ECHO::Canyon" In Proceedings of the International Computer Music Association Conference, Athens, Greece, 2014.

ABSTRACT: In the live game-based performance work ECHO::Canyon, the procedural generation of sound and music is used to create tight crossmodal couplings between mechanics in the visual modality, such as avatar motion, gesture and state, and attributes such as timbre, amplitude and frequency from the auditory modality. Real-time data streams representing user-controlled and AI-driven avatar parameters of motion, including speed, rotation and coordinate location act as the primary drivers for ECHO::Canyon's fully-procedural music and sound synthesis systems. More intimate gestural controls are also explored through the paradigms of avian flight, biologically-inspired kinesthetic motion and manually-controlled avatar skeletal mesh components. These kinds of crossmodal mapping schemata were instrumental in the design and creation of ECHO::Canyon's multi-user multi-channel dynamic performance environment using techniques such as composed interaction, compositional mapping and entirely procedurally-generated sound and music.
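As an illustration of the kind of motion-to-sound mapping described above, the short Python sketch below (using the python-osc library) rescales hypothetical avatar speed, heading and altitude values into amplitude, frequency and panning messages for an external sound server. The OSC addresses, port, and scaling ranges are invented for the example and are not ECHO::Canyon's actual schema.

```python
# Hypothetical sketch: map avatar flight data to synthesis parameters over OSC.
# Addresses, port and scaling ranges are illustrative, not ECHO::Canyon's actual schema.
from pythonosc import udp_client

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp value to [in_lo, in_hi], then rescale linearly to [out_lo, out_hi]."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# An external synthesis server (e.g. SuperCollider or Pure Data) listening for OSC.
client = udp_client.SimpleUDPClient("127.0.0.1", 57120)

def on_avatar_update(speed, yaw, x, y, z):
    # Faster flight -> louder; higher altitude -> higher pitch; heading -> pan position.
    client.send_message("/avatar/amp",  scale(speed, 0.0, 1200.0, 0.0, 1.0))
    client.send_message("/avatar/freq", scale(z, 0.0, 4096.0, 80.0, 880.0))
    client.send_message("/avatar/pan",  scale(yaw, -180.0, 180.0, -1.0, 1.0))
    client.send_message("/avatar/pos",  [x, y, z])

on_avatar_update(speed=350.0, yaw=42.0, x=1024.0, y=-300.0, z=512.0)
```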

Hamilton, R., "Sonifying Game-Space Choreographies with UDKOSC" In Proceedings of the New Interfaces for Musical Expression Conference, Daejeon, Korea, 2013.

ABSTRACT: With a nod towards digital puppetry and game-based film genres such as machinima, recent additions to UDKOSC offer an Open Sound Control (OSC) input layer for external control over third-person "pawn" entities, first-person "player" actors and camera controllers in fully rendered game-space. Real-time OSC input, driven by algorithmic process or parsed from a human-readable timed scripting syntax, allows users to shape intricate choreographies of timed gesture, in this case actor motion and action, as well as an audience's view into a game-space environment. As UDKOSC outputs real-time coordinate and action data generated by UDK pawns and players with OSC, individual as well as aggregate virtual actor gestures and motion can be leveraged as drivers for both creative and procedural/adaptive gaming music and audio concerns.
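The timed scripting layer mentioned above can be imagined along the following lines: a Python sketch that parses a simple "time /address args" script and emits each command over OSC at its scheduled time. The line format, OSC addresses and port are invented for illustration and do not reproduce UDKOSC's actual syntax.

```python
# Hypothetical sketch of a timed choreography script driving pawn and camera actions over OSC.
# The "time /address args..." line format, addresses and port are illustrative only.
import time
from pythonosc import udp_client

SCRIPT = """
0.0  /pawn/1/move 1024 512 256
2.5  /pawn/1/jump 1
4.0  /camera/lookat 1024 512 256
"""

client = udp_client.SimpleUDPClient("127.0.0.1", 8000)  # game-engine OSC input port (assumed)

events = []
for line in SCRIPT.strip().splitlines():
    t, address, *args = line.split()
    events.append((float(t), address, [float(a) for a in args]))

start = time.time()
for t, address, args in sorted(events):
    time.sleep(max(0.0, t - (time.time() - start)))  # wait until the event's timestamp
    client.send_message(address, args)
```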

Hamilton, R., "UDKOSC: An Immersive Musical Environment" In Proceedings of the International Computer Music Association Conference, Huddersfield, UK, 2012.

ABSTRACT: UDKOSC is a visually and aurally immersive rendered multi-user musical performance environment built in the Unreal Development Kit (UDK), a freely-available commercial gaming engine. Control data generated by avatar motion, gesture and location is routed through a bi-directional Open Sound Control implementation and used to drive virtual instruments within a multi-channel ambisonic sound-server. This paper describes the technical infrastructure of UDKOSC and details design and musical decisions made during the composition and creation of 'Tele-harmonium', an interactive mixed-reality musical work created in the environment.

Hamilton, R., Caceres, J., Nanou, C., Platz, C., "Multi-modal musical environments for mixed-reality performance", Journal on Multimodal User Interfaces (JMUI), Vol. 4, pp. 147-156, Springer-Verlag, 2011.

This article describes a series of multi-modal networked musical performance environments designed and implemented for concert presentation at the Torino-Milano (MiTo) Festival (Settembre musica, 2009, http://www.mitosettembremusica.it/en/home.html) between 2009 and 2010. Musical works, controlled by motion and gestures generated by in-engine performer avatars, will be discussed with specific consideration given to the multi-modal presentation of mixed-reality works, combining both software-based and real-world traditional musical instruments.

Hamilton, R., "q3osc: or How I Learned to Stop Worrying and Love the Game" In Proceedings of the International Computer Music Association Conference, Belfast, Ireland, 2008.

q3osc is a heavily modified version of the ioquake3 gaming engine featuring an integrated Oscpack implementation of Open Sound Control for bi-directional communication between a game server and one or more external audio servers. By combining ioquake3's internal physics engine and robust multiplayer network code with a simple and full-featured OSC packet manipulation library, the virtual actions and motions of game clients and previously one-dimensional in-game weapon projectiles can be repurposed as independent, behavior-driven, OSC-emitting sound objects for real-time networked performance and spatialization within a multi-channel audio environment. This paper details the technical and aesthetic decisions made in developing and implementing the q3osc game-based musical environment and introduces potential mapping and spatialization paradigms for sonification.
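On the receiving side, the spatialization idea can be sketched as an OSC server that listens for projectile position messages and derives an azimuth and distance for multi-channel panning. The address pattern, ports and the simple atan2-based azimuth calculation below are assumptions for illustration, not q3osc's actual schema.

```python
# Hypothetical sketch: receive projectile positions over OSC and derive spatialization
# parameters (azimuth, distance) for a multi-channel sound server.
import math
from pythonosc import dispatcher, osc_server, udp_client

sound_server = udp_client.SimpleUDPClient("127.0.0.1", 57120)  # spatializer (assumed)

def handle(address, *args):
    # Expecting e.g. "/projectile/17/pos" with arguments (x, y, z); ignore anything else.
    if not address.startswith("/projectile/") or len(args) != 3:
        return
    proj_id = address.split("/")[2]
    x, y, z = args
    azimuth = math.degrees(math.atan2(y, x))      # angle around the listener, -180..180
    distance = math.sqrt(x * x + y * y + z * z)
    sound_server.send_message(f"/spat/{proj_id}", [azimuth, distance])

d = dispatcher.Dispatcher()
d.set_default_handler(handle)
osc_server.BlockingOSCUDPServer(("0.0.0.0", 9000), d).serve_forever()
```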

Hamilton, R., "Maps and Legends: FPS-Based Interfaces For Composition and Immersive Performance" In Proceedings of the International Computer Music Association Conference, Copenhagen, Denmark, 2007.

This paper describes an interactive multi-channel, multi-user networked system for real-time composition and improvisation built using a modified version of the Quake III gaming engine. By tracking users' positional and action data within a virtual space, and by streaming that data over UDP using OSC messages to a multi-channel Pure Data (PD) patch, actions in virtual space are correlated to sonic output in a physical space. Virtual environments designed as abstract compositional maps or representative models of the users' actual physical space are investigated as means to guide and shape compositional and performance choices. This paper analyzes both the technological concerns for building and realizing the system and the compositional and perceptual issues inherent in the project itself. An extended version of this paper was selected for publication by Springer-Verlag in the refereed post-proceedings of the Fifth International Computer Music Modeling and Retrieval Symposium, CMMR 2007, held in Copenhagen, Denmark in August of 2007.

Mobile

Hamilton, R., Smith, J., Wang, G., "Social Composition: Musical Data Systems for Expressive Mobile Music", Leonardo Music Journal, Vol. 21, 2011.

This article explores the role of symbolic score data in the authors' mobile music-making applications, as well as the social sharing and community-based content creation workflows currently in use on their on-line musical network. Web-based notation systems are discussed alongside in-app visual scoring methodologies for the display of pitch, timing and duration data for instrumental and vocal performance. User-generated content and community-driven ecosystems are considered alongside the role of cloud-based services for audio rendering and streaming of performance data.

For Jean-Claude: [re]Presenting Duet for One Pianist

Hamilton, R., Nanou, C., "For Jean-Claude: [re]Presenting Duet for One Pianist" In Proceedings of the International Computer Music Conference, Montreal, Canada, August 2009.

In 1989, composer and researcher Jean-Claude Risset's series of interactive sketches for piano and Disklavier entitled Duet for One Pianist explored the performative possibilities made available to pianists through the augmentation of emotive human musical gesture with the precise reactive and computational capabilities afforded by computer-based musical systems. As computer and musical software systems have evolved, the Max software patches created by Risset and researcher Scott Van Duyne at the MIT Media Lab have been updated and maintained to allow the pieces to be performed using contemporary hardware and software systems. In distinct contrast, Risset's original hand-notated musical score for the work - representing performance notation for the human pianist alongside a varying level of detail representing the computer's response, itself an integral part of the work - remains the authoritative representation available to performers, researchers and archivists alike. This paper outlines ongoing efforts towards the augmentation of Risset's existing score through the production of a comprehensive multi-voiced notated score edition of Duet for One Pianist, as well as symbolic and data representations for each of the eight works derived from live performance data, and a complementary and complete series of audio and visual recordings of Duet by pianist Chryssie Nanou.

Sea Songs

Hamilton, R., "Back to the Sea: A Software Realization of Dexter Morrill's Sea Songs" In Proceedings of the International Computer Music Association Conference, Copenhagen, Denmark, 2007.

This paper describes the technical and aesthetic challenges faced in the recent software-based recreation of composer Dexter Morrill's 1995 work Sea Songs for soprano voice, computer-generated tape and Radio-Baton controlled hardware effects-processor. Through careful analysis of the composer's own notes as well as through extensive testing of Morrill's original Digitech TSR-24 stereo-effects processor, a flexible and extensible software emulation of the piece was created as a Max/MSP application.

Bioinformatic Feedbacks

Hamilton, R., "Bioinformatic Response Data as a Compositional Driver," In Proceedings of the 2006 International Computer Music Conference, New Orleans, LA, USA, 2006.

This paper describes a software system using bioinformatic data recorded from a performer in real-time as a probabilistic driver for the composition and subsequent real-time generation of traditionally notated musical scores. To facilitate the generation and presentation of musical scores to a performer, the system makes use of a custom LilyPond output parser, a set of Java classes running within Cycling '74's Max environment for data analysis and score generation, and an Atmel ATmega16 microcontroller capable of converting analog bioinformatic sensor data into Open Sound Control (OSC) messages.
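The probabilistic score-generation step can be illustrated with a brief Python sketch: a normalized sensor reading biases a weighted pitch choice, and the chosen notes are written out as a LilyPond fragment. The pitch set, weighting curve and output format are invented for the example and are not the system's actual mapping (which is implemented in Java within Max).

```python
# Hypothetical sketch: use a bio-sensor reading (0-1023, e.g. from a 10-bit ADC) as a
# probabilistic weight for pitch selection, then emit the result as LilyPond notation.
import random

PITCHES = ["c'", "d'", "e'", "f'", "g'", "a'", "b'"]  # LilyPond note names, one octave

def choose_pitch(sensor_value):
    """Higher sensor readings bias selection toward higher scale degrees."""
    bias = sensor_value / 1023.0  # normalize the ADC value to 0..1
    weights = [(1.0 - bias) * (len(PITCHES) - i) + bias * (i + 1)
               for i in range(len(PITCHES))]
    return random.choices(PITCHES, weights=weights, k=1)[0]

def lilypond_fragment(sensor_values, duration="8"):
    """Render one eighth note per sensor reading as a LilyPond music expression."""
    notes = " ".join(choose_pitch(v) + duration for v in sensor_values)
    return "{ " + notes + " }"

print(lilypond_fragment([120, 480, 900, 1010]))  # e.g. { d'8 f'8 a'8 b'8 } (varies per run)
```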

Hamilton, R., "Bioinformatic Feedback: performer bio-data as a driver for real-time composition," In Proceedings of the New Interfaces for Musical Expression conference, IRCAM, Paris, France, 2006.

the jChing

Hamilton, R., "The jChing: an Algorithmic Java-Based Compositional System," In Proceedings of the 2005 International Computer Music Conference, Barcelona, Spain, 2005.

The chance-based compositional techniques utilized by composer John Cage in such works as "String Quartet in Four Parts" and "Music of Changes" made use of a compositional framework of gamuts and gamut squares; that framework serves as the object model for a compositional software application capable of transforming musical data cells using both chance-based and probability-driven functions. Written in Java, the jChing makes use of the MusicXML data format to output transformed musical data in a format compatible with a number of commonly used musical notation applications. This article outlines the functional model and technical specifications for the application and provides basic examples of the jChing workflow.
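The gamut-square idea can be sketched briefly in Python: chance operations select cells from a grid of musical material, and the chosen cells are serialized as a minimal MusicXML fragment. The gamut contents, selection scheme and XML scaffolding below are illustrative only (the jChing itself is written in Java and implements its own transformations).

```python
# Hypothetical sketch: roll chance operations over a gamut square of musical cells and
# emit the chosen cells as a minimal (partwise) MusicXML skeleton of quarter notes.
import random
import xml.etree.ElementTree as ET

# A gamut square might hold single notes, intervals or aggregates; here, a 4x4 grid of pitches.
GAMUT_SQUARE = [[("C", 4), ("E", 4), ("G", 4), ("B", 4)],
                [("D", 4), ("F", 4), ("A", 4), ("C", 5)],
                [("E", 4), ("G", 4), ("B", 4), ("D", 5)],
                [("F", 4), ("A", 4), ("C", 5), ("E", 5)]]

def roll_cell(square):
    """Chance-based selection of one cell: pick a row, then a column."""
    return random.choice(random.choice(square))

def cells_to_musicxml(cells):
    """Wrap the chosen cells in a bare-bones MusicXML part/measure structure."""
    root = ET.Element("score-partwise", version="3.1")
    part = ET.SubElement(root, "part", id="P1")
    measure = ET.SubElement(part, "measure", number="1")
    for step, octave in cells:
        note = ET.SubElement(measure, "note")
        pitch = ET.SubElement(note, "pitch")
        ET.SubElement(pitch, "step").text = step
        ET.SubElement(pitch, "octave").text = str(octave)
        ET.SubElement(note, "duration").text = "1"
        ET.SubElement(note, "type").text = "quarter"
    return ET.tostring(root, encoding="unicode")

print(cells_to_musicxml([roll_cell(GAMUT_SQUARE) for _ in range(4)]))
```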

Hamilton, R., "Rolling the jChing: a Java-based Stochastic Compositional System," In Proceedings of the third annual Spark Festival of Electronic Music and Art University of Minnesota, Minneapolis, Minnesota, USA, 2005.

Hamilton, R., "The Polarized Composer: Addressing the Conflict of Musical Upbringings of Today's Young Composers," In Proceedings of the third annual Spark Festival of Electronic Music and Art, University of Minnesota, Minneapolis, Minnesota, USA, 2005.