
Audification of Data (Past)


Subsections

Auditory Representation of Complex Data (April 2002)

Jonathan Berger, Michelle Daniels and Oded Ben Tal

We describe our current research on the auditory representation of complex data, in which we sonify multidimensional data using a filterbank driven by noise or pulse-train input. The goal is an intuitive, easy-to-learn representation of multiple, simultaneous, independently changing parameters. Preliminary experiments suggest a promising model based on a subtractive synthesis approach. Our sound examples include sonification of data acquired by marine scientists measuring salinity and temperature at various depths in the Dead Sea.

The vowel-like sounds produced by the filter instrument provide an intuitive point of reference against which changing states of the data can be measured. We noted that dynamic envelope control of center frequency and bandwidth across multiple simultaneous data sets, with each set assigned components in a discrete frequency range, provides a recognizable auditory representation of the overall trends of each individual data set. The research is supported by the Stanford Humanities Laboratory.
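As a rough illustration of the filterbank idea (a sketch, not the authors' implementation), the following Python code passes white noise through a two-pole resonator whose center frequency and bandwidth track two normalized data streams. The frequency ranges, frame length, and sample rate are assumptions chosen for the sketch:

```python
# Sketch: subtractive-synthesis sonification of two data streams.
# One stream controls the resonator's center frequency, the other its
# bandwidth; all numeric ranges below are illustrative assumptions.
import math
import random

def resonator(x, fc, bw, sr=8000):
    """Two-pole resonator: y[n] = x[n] + a1*y[n-1] + a2*y[n-2]."""
    r = math.exp(-math.pi * bw / sr)          # pole radius from bandwidth
    theta = 2.0 * math.pi * fc / sr           # pole angle from center freq
    a1, a2 = 2.0 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = s + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def sonify(freq_data, bw_data, sr=8000, frame_len=400):
    """freq_data, bw_data: per-frame values normalized to [0, 1]."""
    rng = random.Random(0)
    signal = []
    for f, b in zip(freq_data, bw_data):
        fc = 300.0 + f * 2000.0   # map [0,1] -> 300-2300 Hz (assumed range)
        bw = 50.0 + b * 400.0     # map [0,1] -> 50-450 Hz (assumed range)
        noise = [rng.uniform(-1.0, 1.0) for _ in range(frame_len)]
        signal.extend(resonator(noise, fc, bw, sr))
    return signal

# Three frames: rising center frequency, widening bandwidth.
sig = sonify([0.1, 0.5, 0.9], [0.2, 0.5, 0.8])
```

A rising trend in one data set is then heard as a rising resonance, while the second set is heard as a narrowing or widening of the sound.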

References:

SonART - The Sonification Application Research Toolbox

Oded Ben Tal, Jonathan Berger, Bryan Cook, Michelle Daniels, Gary Scavone, Woon Seung Yeo, and Perry Cook

The Sonification Application and Research Toolbox (SonART) is an open-source effort whose core code is platform-independent. The primary objective of SonART is to provide a set of methods to map data to sonification parameters along with a set of graphical user interface tools that will provide practical and intuitive utilities for experimentation and auditory display. SonART provides publicly available, well-documented code that is easily adapted to address a broad range of sonification needs. The effort builds upon the Synthesis ToolKit in C++ (STK) (Cook and Scavone, 1999), both of whose authors are part of this research effort.

By classifying sonification methods, SonART gives researchers the means to explore parameter mapping with the same high-level control afforded by many data-visualization packages. Synthesis and sound-processing parameters can be classified by general acoustic or musical properties, or by synthesis-specific parameters.
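The classification idea can be sketched as a small registry that routes a parameter name to its class; the property and parameter names below are hypothetical examples, not SonART's actual API:

```python
# Hypothetical sketch of parameter classification: high-level acoustic
# properties vs. synthesis-specific parameters. Names are illustrative.
PARAMETER_CLASSES = {
    'acoustic': {'pitch', 'loudness', 'brightness'},
    'synthesis': {'filter_cutoff', 'modulation_index', 'grain_density'},
}

def classify(param):
    """Return the class of a parameter name, or 'unclassified'."""
    for cls, names in PARAMETER_CLASSES.items():
        if param in names:
            return cls
    return 'unclassified'
```

A toolbox organized this way lets a user map a data dimension to "brightness" without knowing which synthesis-specific parameter realizes it.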

Parameter mapping using general acoustic properties or synthesis-specific parameters is potentially limited in two ways. First, the finite number of parameters in a given synthesis method limits the dimensionality of the data that can be sonified. Second, the mapping of data to a particular parameter may not be intuitive to the data analyst. One approach to this difficulty is to use sounds resembling those in nature. Synthesis techniques that approximate natural sounds, and physical models that simulate material interactions (such as springs and dampers), may prove to be useful parameter-mapping techniques. Instead of mapping a data dimension to an arbitrary synthesis parameter, a mapping that produces intuitive natural sounds such as vowels (Ben Tal, Berger and Daniels, 2001) may be used. Under this approach, vowel quality, or the proximity of a sound to a cardinal vowel, can provide an intuitive basis for sonification. While natural sounds may yield more intuitive and more easily interpreted results, they complicate the organization of parameter-mapping methods. Physical models can also be used interactively, with the data miner exploring the data by exciting a sound that impinges upon the data points.
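The vowel-based mapping can be sketched as interpolation in formant space: a normalized data value positions the sound between two cardinal vowels by interpolating their first two formant frequencies. The formant values below are rough textbook approximations, and the function is an illustrative assumption rather than the method described in the cited paper:

```python
# Sketch: map a normalized data value to a point between two cardinal
# vowels by interpolating formant frequencies. Values are approximate.
VOWEL_FORMANTS = {            # (F1, F2) in Hz, approximate
    'a': (730.0, 1090.0),
    'i': (270.0, 2290.0),
    'u': (300.0, 870.0),
}

def map_to_vowel_space(value, v_from='u', v_to='i'):
    """Interpolate (F1, F2) between two cardinal vowels; value in [0, 1]."""
    f_from, f_to = VOWEL_FORMANTS[v_from], VOWEL_FORMANTS[v_to]
    return tuple(a + value * (b - a) for a, b in zip(f_from, f_to))
```

A listener then hears the data value as vowel quality, e.g. a glide from /u/ toward /i/ as the value rises, which trades on an auditory distinction people already perceive effortlessly.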



© Copyright 2005 CCRMA, Stanford University. All rights reserved.