November 6th, 2013
We aim to build a collaborative virtual reality environment where different users can enter and create music together, or listen to sound sequences created by other users within the virtual world.
There is a Virtual Human Interaction Lab (VHIL) on campus, founded by Stanford professor Jeremy Bailenson.
A study by a Stanford professor surveyed 2,000 children across the country, ages 8-18, and found that they spend 2 hours per day wearing avatars — more time than they spend on print media and movies combined.
It's a growing field in terms of relevance and applicability, and many studies are being conducted to determine how people act in such virtual reality environments.
We were inspired by three findings by the Stanford VHIL:
1) Digital anonymity: avatars have made it increasingly easy for users to interact anonymously
2) Out-of-body experience: if your virtual self could "feel" in a virtual world the same way your physical self can feel in the physical world, then acting in a virtual environment would become second-nature
3) Transformed social interaction: observing behaviors in collaborative virtual environments shows that interaction and performance between people are enhanced.
(Some or all of the following features will be included)
Milestone 1: Specify more details (what sounds can be made, what objects populate the world, etc.), a basic framework, and interaction design (human interface devices, how avatars are represented)
Milestone 2: Working implementation of sound sequencing with objects and playback of sound sequences.
Milestone 3: Collaborative aspect
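The sound-sequencing idea in Milestone 2 could be sketched roughly as follows: objects placed in the virtual space are quantized onto a step grid, and playback walks the grid in time order, triggering each object's sound. This is only an illustrative sketch — the class names (`SoundObject`, `Sequencer`), the mapping of spatial position to time steps, and the sound labels are all hypothetical, not a committed design.

```python
# Hypothetical sketch of Milestone 2's sound sequencing:
# objects in the scene map onto discrete time steps, and playback
# emits the sounds at each step in order. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class SoundObject:
    sound: str       # placeholder label for a real audio sample, e.g. "kick"
    position: float  # position along the axis of the world used for sequencing

class Sequencer:
    def __init__(self, steps: int, length: float):
        self.steps = steps    # number of discrete time steps per loop
        self.length = length  # spatial extent mapped onto one loop
        self.grid = [[] for _ in range(steps)]

    def place(self, obj: SoundObject) -> None:
        """Quantize an object's spatial position to the nearest time step."""
        step = min(self.steps - 1, int(obj.position / self.length * self.steps))
        self.grid[step].append(obj.sound)

    def playback(self):
        """Yield the list of sounds triggered at each step, in time order."""
        for step in self.grid:
            yield list(step)

# Example: three objects placed in a 4-step loop spanning a unit length.
seq = Sequencer(steps=4, length=1.0)
seq.place(SoundObject("kick", 0.0))
seq.place(SoundObject("snare", 0.55))
seq.place(SoundObject("hat", 0.55))
pattern = list(seq.playback())
# pattern -> [['kick'], [], ['snare', 'hat'], []]
```

In a real implementation the playback loop would schedule actual audio (and, per Milestone 3, merge grids contributed by multiple users), but the quantize-then-walk structure above captures the core idea.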