In creating Ambisonic compositions, the audio exists as a data set represented in a 3D space. The advantage of this architecture is that a piece can be transported to arbitrary locations and rendered relatively quickly for various speaker configurations while maintaining its intended spatial properties. Ambisonic soundfield microphones make it possible to record and recreate a "soundfield" as four channels, X, Y, Z, and W (W being the pressure component). The wavefronts produced by multi-channel Ambisonic playback retain recorded aural cues, including room size and reverberant properties. In this experiment it is necessary to define a platform in which a 3D graphical representation and Ambisonic composition tools are rendered simultaneously, so that the user can choose to spatialize visually or algorithmically. The goal is to convincingly create virtual sources interacting with actual recorded soundfields and, simultaneously, to have the virtual image interact with the actual HD/IMAX recorded image.
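The four-channel representation above can be illustrated with the standard first-order B-format encoding equations, which pan a mono source by direction alone. This is a minimal sketch, assuming the traditional 1/√2 scaling on the W (pressure) channel; the function name and angles are illustrative, not from the original system.

```python
import math

def encode_b_format(sample, azimuth, elevation):
    """Encode a mono sample into first-order Ambisonic B-format.

    azimuth and elevation are in radians. W is the omnidirectional
    pressure signal (scaled by 1/sqrt(2) in traditional B-format);
    X, Y, Z are the figure-of-eight directional components.
    """
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source directly ahead (azimuth 0, elevation 0) appears
# only in W and X; moving the source is just re-evaluating
# these gains, which is what makes rendering to arbitrary
# speaker layouts comparatively cheap.
print(encode_b_format(1.0, 0.0, 0.0))
```

Because the encoded signal is independent of any particular loudspeaker layout, a decoder for a given speaker configuration can be applied at playback time, which is the transportability property described above.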