
Audification of Data



Acoustic Remapping of Digital Data

Jonathan Berger (PI), Oded Ben Tal, Ryan Cassidy, Chris Chafe, Ronald Coifman (Yale), Mauro Maggioni (Yale), Shihab Shamma (University of Maryland), Julius Smith, Woon Seung Yeo, Fred Warner (Yale), and Steven Zucker (Yale)

The purpose of this project is to develop methodologies for the geometric translation of high-dimensional digital data into perceptual spaces. In particular, we will translate regions of parameter space into sound such that the auditory perceptual distance between two points corresponds (approximately) to their geometric distance in parameter space.

This project involves the mathematical design of dimensionality-reduction and filtering algorithms, together with the design of sound synthesis and processing strategies that effectively elucidate the desired features, patterns, or attributes in the sonified data.
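As a rough illustration of the intended mapping (not the project's actual algorithms), the sketch below reduces high-dimensional data to two dimensions with plain PCA and maps the coordinates to pitch and loudness, so that nearby points in parameter space yield perceptually nearby tones. The function name, frequency range, and amplitude range are our own assumptions.

    import numpy as np

    def sonify_points(data, sr=44100, dur=0.25):
        """Illustrative sketch: project rows of `data` (n_points x n_dims)
        onto their two principal components, then map component 1 to
        pitch and component 2 to amplitude. Returns one short tone per
        data point, concatenated."""
        X = data - data.mean(axis=0)
        # PCA via SVD: rows of Vt are the principal directions.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        coords = X @ Vt[:2].T                       # (n_points, 2)
        # Normalize each axis to [0, 1] for parameter mapping.
        lo, span = coords.min(axis=0), np.ptp(coords, axis=0)
        u = (coords - lo) / np.where(span > 0, span, 1)
        t = np.arange(int(sr * dur)) / sr
        tones = []
        for pitch_u, amp_u in u:
            f = 220.0 * 2.0 ** (2.0 * pitch_u)      # 220-880 Hz
            a = 0.2 + 0.8 * amp_u
            tones.append(a * np.sin(2 * np.pi * f * t))
        return np.concatenate(tones)

PCA stands in here for the project's more sophisticated dimensionality-reduction and filtering algorithms; the point is only that proximity in the reduced space maps to proximity in pitch and loudness.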

The proposed research comprises three components of sonification research.

This research is funded by DARPA.

SoundWIRE: Sound Waves on the Internet from Real-time Echoes

Chris Chafe, Scott Wilson, Randal J. Leistikow, Gary P. Scavone, Daniel Walling, Nathan Schuett, Christopher Jones, and David Chisholm

New, no-compromise computer audio applications have been demonstrated using a simplified approach to high-quality music and sound streaming over IP networks. Audio is an unforgiving test of networking: if a single data packet arrives too late, we hear it. Traditionally, compromises in signal quality and interactivity have been necessary to work around this basic fact. Alongside our new professional audio applications we have developed SoundWIRE, a utility that affords an intuitive way of evaluating transaction delay and delay constancy. Its final form is an enhanced "ping" that uses actual sound reflection. A musical tone, such as a guitar pluck, can be created by repeatedly reflecting a digital acoustic signal between two hosts. Using the network delay between these reflections as a substitute for a guitar string creates a tone whose stability reflects the regularity of service and whose pitch represents the transmission latency. The ear's ability to discern minute differences makes this an unforgiving test of network reliability.
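The pitch of the resulting tone follows directly from the round-trip time: a recirculating delay of T seconds behaves like a plucked string tuned to roughly 1/T Hz. The sketch below simulates this offline; it is our own illustration in the style of a Karplus-Strong string, not the SoundWIRE code, and the function name and loss factor are assumptions.

    import numpy as np

    def network_string(rtt_seconds, sr=44100, dur=2.0, loss=0.996):
        """Simulate SoundWIRE's tone offline: a noise burst recirculates
        through a delay line whose length equals the network round-trip
        time, like a plucked string. Pitch is approximately 1/rtt."""
        delay = max(2, int(round(rtt_seconds * sr)))   # delay in samples
        buf = np.random.uniform(-1, 1, delay)          # the "pluck"
        out = np.empty(int(sr * dur))
        for n in range(out.size):
            i = n % delay
            out[n] = buf[i]
            # Two-point average models the losses at each reflection.
            buf[i] = loss * 0.5 * (buf[i] + buf[(i + 1) % delay])
        return out

    tone = network_string(0.010)   # 10 ms round trip -> ~100 Hz

A 10 ms round trip gives a delay of 441 samples at 44.1 kHz, hence a pitch near 100 Hz; jitter in the round-trip time would be heard as pitch instability, which is what makes the tone an audible monitor of service regularity.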


Triple Jam

Chris Chafe

Long-haul experiments in remote music jamming have so far been limited to pairs of locations (e.g., Stanford, UCLA; Stanford, New York; Stanford, Montreal; Stanford, Zurich). A three-sided session is being engineered to join players in the western half of the continent (Stanford, Victoria, Missoula). Revision of the current software system will add several new features: GUI-based control, an interface to the JACK audio subsystem, continuous monitoring of latency and buffering, new protocol features, and multicast.

See http://ccrma.stanford.edu/groups/soundwire/cybersimps/ for recordings of a recent two-sided "CyberSimps" performance between Stanford and UCLA.

Musical Application of Image Sonification Methods

Woon Seung Yeo and Jonathan Berger

To utilize visual information for musical purposes, the inherently time-based nature of sound must be understood and accounted for. Time is the principal dimension within which all other auditory parameters are placed, and this poses a particular challenge to the effective sonification of time-independent images and their application to music.
To provide a framework for conceptualizing mappings of static data to the time domain, we present two concepts of time mapping, scanning and probing, with careful consideration of the geometric characteristics of images for defining meaningful references in time. We then combine the scanning and probing methods to model the human image-perception mechanism, which can be implemented with SonART and used as a tool for musical creation and performance.
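A minimal sketch of the two time mappings, under our own assumptions (a grayscale image as a NumPy array; the function names and traversal choices are illustrative, not SonART's API): scanning traverses the whole image along a fixed path, while probing renders only the neighborhood of a cursor whose trajectory the user supplies.

    import numpy as np

    def _normalize(x):
        """Scale samples to [-1, 1]; guard against constant input."""
        span = np.ptp(x)
        return 2.0 * (x - x.min()) / (span if span > 0 else 1.0) - 1.0

    def scan(image):
        """Scanning: traverse the entire image along a fixed path (here,
        raster order) and read pixel intensities directly as audio
        samples, so image geometry determines the signal's time axis."""
        return _normalize(image.astype(np.float64).ravel())

    def probe(image, path, width=8):
        """Probing: render only the local neighborhood of a cursor that
        moves along `path`, a sequence of (row, col) positions; each
        visit contributes a short grain of nearby pixels."""
        grains = [image[r:r + width, c:c + width].astype(np.float64).ravel()
                  for r, c in path]
        return _normalize(np.concatenate(grains))

Combining the two, as described above, amounts to scanning an image's coarse structure while probing the regions a virtual gaze dwells on.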


SonART: A Framework For Data Sonification, Visualization and Networked Multimedia Applications

Woon Seung Yeo, Zule Lee, Greg Sell, and Jonathan Berger

The Sonification Application and Research Toolbox (SonART) project has entered a new phase: it is now a flexible, multi-purpose multimedia environment that supports networked collaborative interaction, with applications in art, science, and industry.

SonART provides an open-ended framework for integrating powerful image- and audio-processing methods with a flexible network communication protocol. An arbitrary number of layered canvases, each with independent control of opacity and position, can transmit or receive data using Open Sound Control (OSC). Data from images can be used for audio synthesis or signal processing, and vice versa. The latest developments include real-time visual filters for both still images and animations.
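As a hedged illustration of the OSC transport (the address pattern, port, and function name are our own assumptions, not SonART's published namespace), the snippet below uses the python-osc package to send per-pixel data from a canvas layer to a sound-synthesis process:

    from pythonosc.udp_client import SimpleUDPClient

    # Hypothetical endpoint: a synthesis engine listening for OSC
    # messages on localhost. Port and address pattern are assumptions.
    client = SimpleUDPClient("127.0.0.1", 57120)

    def send_pixel(layer, x, y, value):
        """Send one pixel's coordinates and intensity from a canvas
        layer; the receiver maps them to synthesis parameters."""
        client.send_message(f"/sonart/layer/{layer}/pixel",
                            [x, y, float(value)])

    send_pixel(layer=1, x=64, y=32, value=0.8)

The same transport works in reverse, with audio analysis data driving canvas parameters, which is the "vice versa" direction described above.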

Applications include multimedia art, collaborative and interactive art and design, and scientific and diagnostic exploration of data.



