220C Sadlier – Performing with Ambisonics & VR

From CCRMA Wiki
 
== Introduction ==

This wiki page will serve as an intermediate space for weekly blog-style updates about the development of my 220C final project. I will soon move these updates to a dedicated website that will allow me to embed samples of my progress in Unity & PD.
 
== Week 1 ==

During the first class of week 1, I began brainstorming final project ideas and compiled a list of platforms I was most interested in working with. I have used Max 7 quite a bit in the past, and PureData's similar paradigm, plus its ease of integration with other platforms, made it a compelling choice as my primary audio platform. I have also been interested in gaining more experience with Unity's audio tools and experimenting with the engine's basic ambisonics support. After deciding to integrate PureData with a Unity project through libpd, I quickly realized that my project would involve a VR component.

My initial project plan by the end of week 1 was a performance piece for the CCRMA stage in which a performer wears VR goggles and interacts with instruments in a virtual environment, each of which produces localized sound at the corresponding location on the real stage. The boundaries of the virtual space, combined with a roomscale positioning system, guide the performer to (safely) walk around the stage (and perhaps around the audience). The audience does not see the virtual world the performer is interacting with, but it can hear the soundscape of that world in vivid detail through the stage's ambisonics array.
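The ambisonics array mentioned above works by encoding each source into a small set of spherical-harmonic channels rather than into per-speaker feeds. As a note to myself, here is a minimal sketch of first-order B-format encoding (FuMa convention assumed; the function name and argument names are my own illustration, not tied to any particular library):

```python
import math

def encode_first_order(sample, azimuth, elevation=0.0):
    """Encode a mono sample into first-order B-format (FuMa convention).

    azimuth/elevation are in radians; azimuth 0 = straight ahead,
    positive azimuth = counter-clockwise. Returns (W, X, Y, Z):
    omnidirectional, front-back, left-right, and up-down components.
    """
    w = sample * (1.0 / math.sqrt(2.0))  # FuMa applies a -3 dB weight to W
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source directly in front contributes fully to X and nothing to Y or Z:
w, x, y, z = encode_first_order(1.0, azimuth=0.0)
```

Because the encoded channels are independent of any speaker layout, the same B-format stream could in principle feed both a binaural decoder in the performer's headset and a decoder for the stage's loudspeaker array.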
  
 
== Week 2 ==

* Discovered the VR music creation platform LyraVR. Lyra resembles my original project design in that it is a VR music studio created in Unity with roomscale support. I want to make sure that the world I build in Unity and the interaction paradigm I design do not overlap too much with what Lyra has already accomplished with their software.
* Emailed the staff at VHIL requesting a tour of their lab space. I am hoping to gain insight into their method of positional tracking for roomscale VR.
* Began experimenting with example Unity projects that integrate libpd.
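While libpd embeds Pd's DSP directly inside the host application, a lighter way to prototype control of a separately running Pd patch is the plain-text FUDI protocol spoken by Pd's [netreceive] object: space-separated atoms terminated by a semicolon. A minimal sketch (the port number and the 'freq' receiver name are hypothetical, standing in for whatever the patch defines):

```python
import socket

def fudi_message(*atoms):
    """Format atoms as a FUDI message: space-separated, semicolon-terminated."""
    return (" ".join(str(a) for a in atoms) + ";\n").encode("ascii")

def send_to_pd(message, host="127.0.0.1", port=3000):
    """Send one FUDI message to a Pd patch listening with [netreceive 3000]."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message)

# e.g. set a hypothetical 'freq' receiver in the patch to 440 Hz:
msg = fudi_message("freq", 440)  # b'freq 440;\n'
```

This is only a sketch for quick experiments; for the actual project the Pd patch would run inside Unity via libpd rather than as a separate networked process.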
  
 
== Week 3 ==

* Received an email response from VHIL –– staff is away for the next couple of weeks, but they will be available for a visit on April 30th.
* Downloaded the Wwise SDK for Unity and Linux & macOS development. Wwise offers ambisonics support and a suite of spatial plugins for Unity, and I began looking for features unique to Wwise that could accelerate and/or simplify development of the virtual soundscape I will build in Unity.
* Downloaded FMOD Studio for Unity development. FMOD also supports ambisonic audio, albeit in a more limited scope than Wwise: FMOD relies on Google's Resonance Audio SDK for in-the-box ambisonics support. I plan to experiment with integrating FMOD and Unity this upcoming weekend.
* Updated Unity and PureData to the most recent versions.
== Week 4 ==

Last week, I discovered the work of Hiroshi Yoshimura through an Outline article titled "The rise of the ambient video game" [https://theoutline.com/post/4181/ambient-video-game-legend-of-zelda]. The piece traces The Legend of Zelda: Breath of the Wild's soothing atmosphere ("the video game equivalent of putting an ambient record on") back to the original NES release of LoZ and to Yoshimura's ambient album "Green," both of which strove to channel the natural world through technological mediums. "Green" [https://www.youtube.com/watch?v=D7aYjRl_6Zw] combines field recordings of running creeks, bird songs, and chirping crickets with the artificial timbres of 1980s digital synthesizers. Listening to "Green" on repeat for the past several days has inspired me to think about incorporating a "natural" soundscape into the design of my virtual world.

Latest revision as of 10:38, 26 April 2018
