TORUS

Isabelle Lee

Music 256A / CS 476A Final Project


Torus is a live audiovisual performance tool in which the user can record, loop, and layer sounds. It is meant to feel as if you are standing in a museum exhibition surrounded by blank walls, with the art installation in front of you as the focal point. The installation, however, is a living and breathing one, coming to life as you create.

I created this because I enjoy making music and have spent much of my life dancing. Music and movement have been the most prevalent forms of artistic expression for me, but they have existed disjointly - I have never danced to music that I made, and I have never made music to accompany my movement. I saw this project as an opportunity to bring these two creative processes - two different modes of expression - together. I wanted to create both my own sounds and my own movement in a completely self-made, unified space: to make the sounds and the music that dictated my movement.

Instructions: Click a wall to "place" a sound and begin recording; a metronome will play while you record. The sound's z-coordinate (depth) controls the amount of reverberation, and the slider adjusts the playback rate. Press backspace to delete a previously recorded sound. Press the spacebar to enter performance mode and "sketch" your movements onto the blank canvas on the wall. Unfortunately, the demo video did not capture Torus's spatial audio, but the sounds pan around you to mimic the visual of orbiting particles.
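
To illustrate the two mappings above, here is a rough ChucK sketch of how depth might drive reverb and the slider might drive playback rate. This is a hedged example and not Torus's actual source; the filename and the variable names (zDepth, sliderValue) and scaling factor are placeholders I introduced for illustration.

    // Sketch: map a placed sound's depth to reverb mix, and a slider to playback rate.
    // "mySound.wav", zDepth, and sliderValue are hypothetical placeholders.
    SndBuf buf => NRev rev => dac;
    "mySound.wav" => buf.read;   // a previously recorded loop

    0.7 => float zDepth;         // normalized depth of the placed sound: 0 (near) to 1 (far)
    1.25 => float sliderValue;   // playback-rate slider value, 1.0 = original speed

    zDepth * 0.3 => rev.mix;     // farther away => wetter reverb
    sliderValue => buf.rate;     // higher slider => faster (and higher-pitched) playback
    1 => buf.loop;               // keep the sound looping

    while (true) 1::second => now;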

References: I used the OpenCV plus Unity asset to integrate OpenCV's computer vision capabilities into Torus; this enables the edge detection that produces the sketch-like visual when you enter performance mode. The edge detection code is adapted from @qian256 on GitHub.

Project Files: Zip File

Project Build: Executable


Milestone 2


The slider on the bottom controls playback speed, and the depth at which you place a sound relative to your position determines the intensity of the reverb. Placing a sound on the left or right wall was supposed to pan it to the left or right channel respectively, but I realized Chunity's audio is mono - I still need to figure out how to work around that. I struggled a lot with recording and playback, mainly because I did not know what LiSa was; once I figured that out and went through the LiSa tutorials, I had a much better time. As of now, I'm looking to solve the mono/stereo issue and to add interesting parameters that can be manipulated when you place a sound on the floor or ceiling.
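
For reference, the basic LiSa record-then-loop pattern from the tutorials looks roughly like the sketch below; the durations and rate value here are arbitrary placeholders, not Torus's actual settings.

    // Minimal LiSa sketch: record 2 seconds from the mic, then loop it back.
    adc => LiSa lisa => dac;
    5::second => lisa.duration;   // allocate the buffer before recording

    1 => lisa.record;             // start recording from adc
    2::second => now;
    0 => lisa.record;             // stop recording

    2::second => lisa.loopEnd;    // loop only the recorded region
    1 => lisa.loop;
    1.5 => lisa.rate;             // playback-rate control (1.0 = original speed)
    1 => lisa.play;               // start looped playback

    while (true) 1::second => now;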


Milestone 1


The idea of this project is to live-record sounds and vocals via ChucK and export them as .wav files that can then be read by sound buffers and played back to the user. The user will also have the ability to "place the sound in space," where its location corresponds to manipulable audio parameters such as LFO, pitch, and 2D panning. I struggled a lot with this initial milestone, mostly because I started and then scrapped a couple of ideas and ran out of time. As of now, I am able to record sound via ChucK, but I'm having issues with audio playback to the user. :( The user controls for toggling between recording mode and performance mode are in place, as is the ability to place a sound, but both need to be refined.
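
To make the intended pipeline concrete, here is a minimal ChucK sketch of recording the mic to a .wav file and then reading it back through a sound buffer. This is an illustration under my own assumptions, not the project's code; the filename and durations are placeholders.

    // Sketch: capture the mic to a .wav file, then read it back through a SndBuf.
    adc => WvOut w => blackhole;
    "recording.wav" => w.wavFilename;   // placeholder filename; opens the file for writing
    3::second => now;                   // record for 3 seconds
    w.closeFile();                      // finalize the .wav file

    SndBuf buf => dac;
    "recording.wav" => buf.read;        // load the file we just wrote
    0 => buf.pos;                       // play from the beginning
    buf.length() => now;                // let it play back once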


Brainstorm/Proposals