A full media experience involves both audio and visuals. Is it possible to see what we are hearing? Or to hear what we are seeing? Is it possible to sonify a scene?
Develop a system that can read 3D data, display it on screen, and finally sonify it.
How to sonify it? My approach uses hikers (should I say balls?) that walk (should I say roll?) over the surface along different trajectories. Their positions control sounds in a 3D space.
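As a rough illustration of how a hiker's position could control sound, here is a minimal C++ sketch that maps the hiker's x coordinate to stereo gains with an equal-power pan law. The names (`StereoGains`, `panFromX`) and the normalized [-1, 1] range are my assumptions, not the project's actual code.

```cpp
#include <cmath>

// Hypothetical sketch: left/right gains derived from a hiker's
// x position, normalized to [-1, 1] (left to right).
struct StereoGains { float left; float right; };

StereoGains panFromX(float x) {
    // Clamp to the valid range.
    if (x < -1.0f) x = -1.0f;
    if (x >  1.0f) x =  1.0f;
    // Map [-1, 1] onto [0, pi/2]; equal-power law keeps
    // left^2 + right^2 == 1 so perceived loudness stays constant.
    float angle = (x + 1.0f) * 0.25f * 3.14159265358979f;
    return { std::cos(angle), std::sin(angle) };
}
```

With this mapping, a hiker at the center (x = 0) feeds both channels equally, and rolling toward either edge fades the sound smoothly to that side.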
For the scope of the 256 project, besides the visualization of the scene using OpenGL, the sound will be generated in the same C++ application, using only two channels. The application will also send OSC messages to control ChucK instruments in a 3D space (using the Listening Room). That is the 220a final project.
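To give a concrete sense of the OSC side, here is a hedged sketch that builds a raw OSC 1.0 message by hand: the address pattern and type tag are null-terminated and padded to 4-byte boundaries, followed by big-endian float arguments. The address `/hiker/pos` is purely illustrative; in practice an OSC library (or ChucK's built-in OSC receiver) would handle this encoding.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string, its null terminator, and zero padding
// up to the next 4-byte boundary (OSC 1.0 string encoding).
static void appendPadded(std::vector<uint8_t>& buf, const std::string& s) {
    buf.insert(buf.end(), s.begin(), s.end());
    buf.push_back(0);
    while (buf.size() % 4 != 0) buf.push_back(0);
}

// Append a 32-bit float in big-endian byte order (OSC 1.0 'f' argument).
static void appendFloatBE(std::vector<uint8_t>& buf, float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    buf.push_back(static_cast<uint8_t>(bits >> 24));
    buf.push_back(static_cast<uint8_t>(bits >> 16));
    buf.push_back(static_cast<uint8_t>(bits >> 8));
    buf.push_back(static_cast<uint8_t>(bits));
}

// Hypothetical message carrying one hiker's 3D position.
std::vector<uint8_t> makeHikerMessage(float x, float y, float z) {
    std::vector<uint8_t> buf;
    appendPadded(buf, "/hiker/pos");  // address pattern (12 bytes padded)
    appendPadded(buf, ",fff");        // type tag: three floats (8 bytes padded)
    appendFloatBE(buf, x);
    appendFloatBE(buf, y);
    appendFloatBE(buf, z);
    return buf;                        // 32 bytes total
}
```

The resulting byte buffer would then be sent over UDP to the machine driving the ChucK instruments.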
I'd like to implement some additional features, such as:
If you think another feature would be fun or useful to see, please send me an email.