Swirl: The Sonic World
A Collaborative Music Environment

Date proposed:

November 6th, 2013

Big Idea/ Premise:

We aim to build a collaborative, virtual reality environment where different users can enter and create music together, or listen to sound sequences that other users have created within the virtual world.


There is a Virtual Human Interaction Lab (VHIL) on campus, founded by Stanford professor Jeremy Bailenson.

A study by a Stanford professor looked at 2,000 children across the country, ages 8-18, and found that they spend two hours per day using avatars. That is more than the time they spend on print media and movies combined.

It's a growing field in terms of relevance and applicability, and many studies are being conducted to determine how people act in such virtual reality environments.

We were inspired by three findings by the Stanford VHIL:

1) Digital anonymity: avatars have made it increasingly easy for users to interact anonymously

2) Out-of-body experience: if your virtual self could "feel" in a virtual world the same way your physical self can feel in the physical world, then acting in a virtual environment would become second nature

3) Transformed social interaction: in collaborative virtual environments, interactions and performance between people are enhanced.

The Thing/ Design:

(Some or all of the following features will be included)

  • The final product is a program that users can log into and enter from their own devices
  • Users can choose to represent themselves as an avatar
  • Users can switch their view between the avatar's first-person point of view and a third-person view from behind and above the avatar
  • Multiple users can join the virtual reality program from different devices, each interacting with the world and with one another through their own avatar, producing different sounds
  • Users pick up objects that each represent a tone; an object's color and shape determine the note's pitch, amplitude, and timbre
  • Users can line up note objects next to other note objects in the virtual reality environment
  • When lined up, the note objects create a sound sequence that avatars can play back by walking up to it
  • There can be different sound sources in the virtual environment
  • All objects and avatars move in real time
  • Sound is spatialized according to properties of the virtual room/ environment and the positions of the sources and user
  • Binaural output via HRTFs (head-related transfer functions)
  • Realistic real-time spatialization inspired by Enzo De Sena et al.'s recent work on "Interactive Auralization for Virtual and Augmented Reality."
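The note-object mechanic above can be sketched in code. The proposal does not specify the actual mappings, so the color and shape tables, the names `NoteObject` and `sequence`, and the size-to-loudness rule below are all illustrative assumptions; the point is only that visual properties deterministically map to pitch, amplitude, and timbre, and that a row of objects becomes a playable sequence.

```python
from dataclasses import dataclass

# Hypothetical mappings; the real tables are to be decided in Milestone 1.
COLOR_TO_SEMITONES = {"red": 0, "orange": 2, "yellow": 4, "green": 5, "blue": 7}
SHAPE_TO_TIMBRE = {"cube": "square", "sphere": "sine", "pyramid": "sawtooth"}

@dataclass
class NoteObject:
    color: str
    shape: str
    size: float  # assumption: larger objects play louder

    @property
    def frequency(self) -> float:
        # Equal-tempered pitch, semitones above A4 = 440 Hz
        return 440.0 * 2 ** (COLOR_TO_SEMITONES[self.color] / 12)

    @property
    def amplitude(self) -> float:
        return min(1.0, self.size)

    @property
    def timbre(self) -> str:
        return SHAPE_TO_TIMBRE[self.shape]

def sequence(objects):
    """Turn a row of note objects (in spatial order) into playback events."""
    return [(o.frequency, o.amplitude, o.timbre) for o in objects]

# A user lines up two objects; an avatar walking up triggers this sequence.
row = [NoteObject("red", "sphere", 0.5), NoteObject("green", "cube", 1.0)]
events = sequence(row)
```

A real implementation would hand each `(frequency, amplitude, timbre)` event to a synthesizer voice; the sketch stops at the event list.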
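As a much simpler stand-in for the full HRTF-based spatialization listed above, the following sketch combines inverse-distance attenuation with constant-power stereo panning by azimuth. The function name, the 2-D coordinates, and the distance clamp are assumptions for illustration; actual binaural rendering would instead convolve the source with per-ear HRTF impulse responses.

```python
import math

def spatialize_gains(src, listener, facing_deg):
    """Left/right gains for a source at 2-D position `src`, heard by a
    listener at `listener` facing `facing_deg`. Distance attenuation
    follows the inverse-distance law; lateral position is rendered as
    constant-power panning (cos/sin of a pan angle in [0, pi/2])."""
    dx = src[0] - listener[0]
    dy = src[1] - listener[1]
    dist = max(1.0, math.hypot(dx, dy))   # clamp to avoid blow-up near 0
    attenuation = 1.0 / dist              # inverse-distance law
    azimuth = math.atan2(dy, dx) - math.radians(facing_deg)
    # Map sin(azimuth) in [-1, 1] to a pan angle: 0 = hard left, pi/2 = hard right
    pan = (math.sin(azimuth) + 1.0) * math.pi / 4
    return attenuation * math.cos(pan), attenuation * math.sin(pan)

# A source straight ahead lands equally in both ears; a lateral source
# is louder in the near ear and quieter with distance.
ahead = spatialize_gains((1.0, 0.0), (0.0, 0.0), 0.0)
side = spatialize_gains((0.0, 2.0), (0.0, 0.0), 0.0)
```

Room properties (early reflections, reverberation) from the bullet above would be layered on top of these direct-path gains.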


Milestone 1: Specify more details (like what sounds can be made, what objects populate the world, etc.), a framework (basic version), and interaction design (human interface devices, how to represent avatars)

Milestone 2: Working implementation of sound sequencing with objects and playback of sound sequences.

Milestone 3: Collaborative aspect
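One way the collaborative milestone could work is for each client to broadcast small state-update messages for its avatar and any objects it moves, with peers keeping the newest update per object. The JSON field names and the last-writer-wins rule below are assumptions, not part of the proposal; a real system would also need a transport (e.g. sockets) and interpolation between updates.

```python
import json
import time

def make_update(user_id, kind, obj_id, position):
    """Serialize one state change (an avatar or note object moving)."""
    return json.dumps({
        "user": user_id, "kind": kind, "id": obj_id,
        "pos": position, "t": time.time(),
    })

def apply_update(world, raw):
    """Merge an incoming update into the shared world state.
    Last-writer-wins per (kind, id), decided by timestamp."""
    msg = json.loads(raw)
    key = (msg["kind"], msg["id"])
    if key not in world or world[key]["t"] <= msg["t"]:
        world[key] = msg
    return world

world = {}
apply_update(world, make_update("ada", "avatar", 1, [0.0, 0.0, 0.0]))
apply_update(world, make_update("ada", "avatar", 1, [1.0, 0.0, 0.0]))
```

Because every peer applies the same merge rule, all clients converge on the same object positions, which is what "all objects and avatars move in real time" requires.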