Flying Dream is a multiplayer virtual environment for creating ambient music through physical metaphors of movement like flying and diving.
In it, you are free to soar, swoop, splash, and float through an endless sky and deep ocean, with your movements leaving trails of sound behind you.
These sounds are a history of your movement, and their timbre and pitch are completely dependent on how you created them.
Others can join you in this non-place and play along, playing with and against your sound-movements.
I really enjoy some of the ambient composition apps/instruments that I've played with on the iPhone/iPad, like Bloom & Soundrop. These give a really pleasing sense of cooperation with the software, where you can express somewhat vague or slight intent and be rewarded with beautiful and complex sounds.
I also really love the feeling of free movement in mobility-focused first-person games like Tribes, where you have a jetpack and can "ski" along the hilly terrain to gain speed. I see the various ways of moving through those environments as an expressive form and want to try to translate that into sound.
Technically, Flying Dream is a "first-person mover" (FPS with no guns) in which you can control your height as freely as your x position. It's almost fair to call it a flight simulator, except that there's no imaginary plane. Gravity still acts on you, and if you fall into the ocean you can swim around underwater indefinitely, and come back to the surface whenever you want.
Aesthetically, immediately upon starting Flying Dream, you appear high in the air above a huge ocean that goes off in all directions, and you start to fall. The sun is shining and fluffy clouds dot the sky around you. Wind whistles past you quietly as you fall, and you can either start to fly or let yourself splash into the water. Once you decide to start moving, you can swim or fly freely in any direction, with gravity pulling you down but not strongly enough to prevent you from gaining altitude. When you press and hold a key, you begin to leave a trail behind you as you move, and also you hear a sound which corresponds in pitch, timbre and loudness to your movement. Everything about how you move, and through what part of the environment you are moving, has an impact on the sound that is generated. After some time has passed, that sound will play again, more faintly, as the trail fades away over time. In this way you can create a changing ambient soundscape as you add new sound-movements and old ones fade away.
The controls are roughly similar to an FPS, except that moving "forward" and "backward" increase your velocity along the direction you are looking, instead of just in the horizontal plane. There may or may not be "strafing".
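As a rough sketch of the control scheme described above, forward/backward thrust can be applied along the camera's look vector, derived from yaw and pitch. The function names and the yaw/pitch conventions here are illustrative assumptions, not a committed design:

```python
import math

def look_vector(yaw, pitch):
    """Unit vector along the camera's facing direction (radians).
    Convention (assumed): yaw=0 faces +x, pitch>0 tilts upward."""
    cp = math.cos(pitch)
    return (math.cos(yaw) * cp, math.sin(pitch), math.sin(yaw) * cp)

def apply_thrust(velocity, yaw, pitch, thrust, dt):
    """Accelerate along the look vector; negative thrust moves backward.
    Unlike a ground-based FPS, looking up and thrusting gains altitude."""
    lx, ly, lz = look_vector(yaw, pitch)
    vx, vy, vz = velocity
    return (vx + lx * thrust * dt,
            vy + ly * thrust * dt,
            vz + lz * thrust * dt)
```

Looking straight ahead, thrust is purely horizontal; looking straight up, the same thrust becomes pure climb, which is what distinguishes this from a standard FPS movement model.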
There are no menus or anything in Flying Dream; you just start playing from scratch each time. There may need to be some menu-like mechanism for connecting to other players' games, but in general the application has only one mode of interaction.
The sound synthesis in Flying Dream will be handled by a number of UGens, chained and mixed together. The UGens' parameters and levels in the main mix will all be dynamically controllable, and there will be a translation layer above them. This layer will take information about the player's avatar (position, facing, velocity, instantaneous acceleration, instantaneous rotational acceleration, etc.) and map it to the UGen parameters.
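The translation layer might look like the following minimal sketch. The specific parameter names (`freq`, `amp`, `brightness`), input ranges, and mapping curves are placeholder assumptions; the point is only that physics quantities are remapped into UGen parameter ranges through one tweakable layer:

```python
def lerp_map(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap value from one range to another, clamped."""
    t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
    return out_lo + t * (out_hi - out_lo)

def avatar_to_ugen_params(avatar):
    """Map avatar physics state to a dict of UGen parameters.
    All ranges below are illustrative placeholders."""
    speed = sum(v * v for v in avatar["velocity"]) ** 0.5
    return {
        # altitude -> pitch: higher in the sky means a higher note
        "freq": lerp_map(avatar["position"][1], -50, 200, 80, 880),
        # speed -> loudness: faster movement is louder
        "amp": lerp_map(speed, 0, 40, 0.0, 0.8),
        # acceleration -> timbre: sharp maneuvers brighten the sound
        "brightness": lerp_map(abs(avatar["accel"]), 0, 20, 0.1, 1.0),
    }
```

Because all mappings live in one place, retuning the instrument is a matter of editing these curves rather than touching the synthesis or physics code.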
While the "sound" key is held down, sound will be generated and a sound-movement trail will be left in the environment, marking the player's movement. In addition to having the player's physical information fed directly into the translation layer for immediate playback, an object will be persisted which represents that sound-movement, so that it can be replayed in a loop by the system. Because this object will contain exactly the necessary and objective information from the physics simulation, it can be fed directly into the audio translation layer as well.
These persistent sound-movement objects are also fed into a separate graphics translation layer, which renders them in a way corresponding to the motion that generated them. For xyz position this translation is trivial, but for things like velocity and rotational velocity the mapping is less obvious.
The existence of these translation layers will allow me to modularly tweak the visual and audio output of the system without unnecessarily touching the synthesis or physics code.
Physics - gravity, movement, bounded position
Control - keyboard/mouse input handling, mapping to physics system
Audio Synthesis - UGens, realtime playback
Physics -> Audio Mapping - translation classes, in-game tweakable parameters
Graphics - scene rendering, sound-movement trail rendering, player rendering, mouselook
Physics -> Graphics Mapping - translation classes, in-game tweakable parameters
Persistence/Time/Playback - movement objects persistence, looping playback, fading
Environment Simulation (optional) - triggering/controlling levels of ambient sounds based on physics and time (physics: wind noise, water noise) (time: random events like seagulls, waves)
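For the looping playback and fading component, one simple approach is to attenuate each stored sound-movement's gain on every loop and drop it once it falls below audibility. The decay factor and cutoff threshold here are placeholder values:

```python
def fade_step(trails, decay=0.8, cutoff=0.05):
    """Attenuate every stored (gain, sound_movement) pair by `decay`
    per loop and drop trails whose gain has faded below `cutoff`."""
    faded = [(gain * decay, trail) for gain, trail in trails]
    return [(gain, trail) for gain, trail in faded if gain >= cutoff]
```

Running this once per loop cycle gives the described behavior: new sound-movements enter at full volume, replay progressively more faintly, and eventually disappear along with their visual trails.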
I will test the app by sitting everyone I can recruit down in front of it, explaining only the basic premise, and measuring a few things:
* How long it takes them to figure out how to control themselves (to see whether the controls are intuitive)
* How long they play for (to see whether it's immersive/engaging)
* Whether and how much they smile/laugh (to see whether it's fun)
* Whether they repeat the same actions a lot (to see whether they are learning)
* How much of the game's possible sounds/controls they actually explore/experience (to see what comes naturally and what needs to be made more obvious)
1. Moving and generating sounds in a dimensionless black void
2. Drawing the environment, gravity, environmental sounds, looping & fading.
3. Networking, Testing & Polish
- Hold a key to make sound?