User:Francois/Solar Genesis II
Solar Genesis II is a piece that I will be developing during the Music 220C class. This piece follows my compositions for the Music 220A class, Heliocentric Harmony, and for the Music 220B class, Solar Genesis I.
- 1 Final piece
- 2 Project history and inspiration
- 3 Project objectives
- 4 Project details
- 5 Project blog
- 6 References
Final piece
The piece was performed on Wednesday, May 31 on the CCRMA stage for the CCRMA Spring Concert. The performance was recorded thanks to David Kerr and is available on YouTube.
Solar Genesis II by François Germain, for Disklavier, Cello, and Computer, Sarah Smith (cello), Eric Wu (piano)
This piece is the third of a cycle of pieces started in the previous Music 220A and 220B CCRMA classes.
Immerse yourself along the path of the planets and the stars in this musical tale told by the piano and the cello. From chaos and dissonances emerges the concert of the revolving planets before they disappear, the universe returning to noisy emptiness. A big thank you to Fernando Lopez-Lezcano for his invaluable help and support in this project.
- Overture: Descending motion of the piano, echoed by the sampler (with random source in the WFS) and punctuated by bells (moving between sources in the WFS).
- 1st part: Piano and cello accompanied by several bell moments (moving between sources in the WFS) and noise moments. The piano is mirrored at one point (with random source in the WFS). Ends with bells.
- 2nd part: Solar system (TubeBell instruments spatialized by Ambisonics) and sampler (arpeggios with random source in the WFS). Ends with bells and destruction of the sun.
- 3rd part: Imitation of the overture, but more dissonant. Shorter evocation of the 1st part with similar events.
Project history and inspiration
Heliocentric Harmony
The first part of this project, leading to the production of Heliocentric Harmony, was the sonification of some concepts of the Harmony of the Spheres discussed in the texts of Pythagoras and Kepler. The system consisted of a solar system in which each planet played a particular note whenever an eclipse occurred. The sound was spatialized with a Doppler effect so that the audience could locate the planets around them. This spatialization was implemented by stereo panning on pairs of loudspeakers.
Solar Genesis I
Solar Genesis I was the first attempt to extend Heliocentric Harmony into a full piece, based on a fictional storyline of the life of the solar system. The piece consists of three main parts picturing the birth, life and death of the sun. The main component is an improvised piano piece on a Disklavier. The MIDI messages were used to control the progression of a computer-generated accompaniment and a visual part. The three components are used to create and reinforce the affects related to each part of the piece. The spatialization existing in Heliocentric Harmony was converted to Third-Order Ambisonics to improve the spatial sound impression.
Project objectives
Solar Genesis II is meant to develop the ideas expressed in Solar Genesis I, and especially its affect.
The instrumental piece is meant to feature a duet between the piano (Disklavier) and another instrument, played live or simulated. The piece will be written with the assistance of Prof. Aquilanti.
Computer Music Accompaniment
The computer music program is meant to play sound sequences that clarify and reinforce the emotions conveyed by the instrumental piece. The triggering process implemented in Solar Genesis I will be made more complex. An alternative triggering event will be investigated for the solar system, since the present one has proven too difficult for the public to understand.
The visualization already developed for Heliocentric Harmony and Solar Genesis I is meant to evolve into a 3D visualization. The main challenge is to develop a new point of view for the 3D perspective: since the video will no longer show the whole system at once, we would like to avoid empty screens.
The computer-generated sound will be spatialized with Third-Order Ambisonics (TOA), as in Solar Genesis I, to provide a full 2D sound field. The new Wave Field Synthesis (WFS) system will be used to play with the location of the instrumentalists. The exact form of this has not been finalized, but two main ideas are on the table:
- Acoustic images of the two performers,
- Virtual second performer interacting with the pianist.
The difficulty in the second option would be to develop a complex but robust synchronization between the performer and the virtual instrument while keeping the spirit of the instrumental piece.
Project details
The final piece architecture is shown in the picture. We see that the system is composed of 9 main components:
- QSampler is the virtual "third performer". It receives MIDI messages from Chuck and plays back the corresponding piano notes. In general, it echoes or mirrors (with respect to a given pitch) the playing of the pianist on the Disklavier. The sounds are sent back to Chuck.
- The Disklavier sends to Chuck MIDI messages corresponding to the piano playing.
- A C++ software implements the physical model for the planets and the movements of the listener. OSC messages sent from Chuck control the start and stop of the system, as well as the distance of the listener from the sun. The software sends OSC messages to Chuck (planet positions relative to the listener, for Ambisonics panning) and to Processing (planet and listener positions, for the display).
- A Processing script generates the video performance. It receives information through OSC from the C++ software (planet and listener positions) and from Chuck (sun size, noise intensity and "pitch", planet "performance").
- A Chuck script, the core piece of the system, controls the succession of the MIDI events and advances through the computer "score" as the cues arrive from the Disklavier. It starts, drives and stops the planet part of the piece through OSC messages sent to the C++ software and to Processing, sends MIDI messages to the sampler (in response to the Disklavier input, or the messages corresponding to its "solo" part), and controls both the Ambisonics panning (according to the information received from the C++ software) and the virtual source used in the WFS array.
- Ambdec decodes the sound received through 7 channels (3rd-order horizontal Ambisonics) and outputs the spatialized result to the 8-channel loudspeaker ring. On the CCRMA stage, the loudspeaker array configuration is given by the octagon preset with configuration 3h0v.
- The SoundScape Renderer (SSR) spatializes the 8 virtual sources according to their preset positions around the WFS array, which it outputs as 32 channels for the WFS loudspeaker array as configured.
- Jack-Mamba, developed by Fernando Lopez-Lezcano, drives the 32 loudspeaker channels through the Ethernet card, sending them to a Digital Mamba.
- The cellist, who is not connected in any way to the rest of the system.
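The 7-channel figure in the Ambdec component comes from third-order horizontal Ambisonics: one omnidirectional channel plus a cosine/sine pair for each of the three orders. The project's own encoding lives in the ChucK script and is not reproduced here; the sketch below is a minimal Python illustration of that channel layout, with gain normalization omitted and the conventional channel names (W, X, Y, U, V, P, Q) assumed rather than taken from the project.

```python
import math

def encode_hoa3_horizontal(sample: float, azimuth: float) -> list[float]:
    """Encode a mono sample into the 7 channels of 3rd-order
    horizontal Ambisonics (W, X, Y, U, V, P, Q), using plain
    cos/sin circular harmonics (normalization omitted)."""
    return [
        sample,                          # W: omnidirectional
        sample * math.cos(azimuth),      # X: 1st-order cosine
        sample * math.sin(azimuth),      # Y: 1st-order sine
        sample * math.cos(2 * azimuth),  # U: 2nd-order cosine
        sample * math.sin(2 * azimuth),  # V: 2nd-order sine
        sample * math.cos(3 * azimuth),  # P: 3rd-order cosine
        sample * math.sin(3 * azimuth),  # Q: 3rd-order sine
    ]
```

A decoder such as Ambdec then maps these 7 channels onto the 8-loudspeaker ring.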
The WFS array has 8 virtual sources spread around the room (though not too far from the array, to avoid problems due to room reverberation) that are used by the system (deterministically for the bells, randomly for the sampler).
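The deterministic/random split between bells and sampler can be sketched as follows. This is a Python illustration, not project code: the source indices stand in for the 8 preset positions defined in the SSR scene configuration, and the bell pattern is assumed to be a simple cycle.

```python
import itertools
import random

# Indices of the 8 preset virtual sources around the WFS array
# (the actual positions live in the SSR scene configuration).
SOURCES = list(range(8))

# Bells move deterministically from one preset source to the next.
_bell_cycle = itertools.cycle(SOURCES)

def next_bell_source() -> int:
    """Deterministic: each bell event steps to the next source."""
    return next(_bell_cycle)

def next_sampler_source(rng=random) -> int:
    """Random: each sampler event may land on any of the 8 sources."""
    return rng.choice(SOURCES)
```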
The outputs of the piece are:
- The visuals generated by Processing.
- The 8-speaker ring array controlled with Ambisonics by Chuck through Ambdec.
- The 32-speaker linear array controlled by Wave Field Synthesis by Chuck through SSR.
- The Cello and the Disklavier playing the written score.
The control of the piece's progression through MIDI is done through the expectation of chains of cues that need not immediately follow each other. In the case of chords, only one cue per chord is used, since we don't know in what order the MIDI messages of the chord notes will arrive at the computer. As soon as all the cues have been received, an (audio) event is triggered by the Chuck program and the next cue sequence is loaded.
A by-pass message (MIDI NoteOn 119, a B) is implemented to force the playing of the event and the transition to the next sequence. This feature was made for demonstration purposes and wasn't used in the concert setup.
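The cue-tracking behaviour described above can be sketched in Python. The real logic lives in the ChucK script; the cue note numbers below are hypothetical, and only the by-pass note (119) comes from the source.

```python
BYPASS_NOTE = 119  # forces the event and the move to the next sequence

class CueTracker:
    """Tracks one chain of expected MIDI note cues. Cue notes may be
    interleaved with other playing: each cue simply has to appear
    eventually, in order. For a chord, only one of its notes is used
    as a cue, since chord notes arrive in arbitrary order."""

    def __init__(self, cue_chain):
        self.cue_chain = list(cue_chain)
        self.position = 0

    def on_note_on(self, note: int) -> bool:
        """Feed one NoteOn; return True once the event should fire."""
        if note == BYPASS_NOTE:  # demonstration by-pass
            self.position = len(self.cue_chain)
            return True
        if (self.position < len(self.cue_chain)
                and note == self.cue_chain[self.position]):
            self.position += 1
        return self.position == len(self.cue_chain)
```

Once the tracker fires, the Chuck program would trigger the audio event and load the next cue chain.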
As of June 11, 2012:
- C++ software source and executable (for Linux)
- Processing script source
- Chuck script source
- SSR source geometry configuration
- Jack configuration file to use with aj-snapshot once all the programs are running (the user has to manually disconnect Chuck from Midi Through and connect it to QSampler)
- Chuck: chuck --out16 amb.ck echoMidiEvent.ck ssEvent.ck list.ck gong.ck noisy.ck note.ck solarsystem.ck midi.ck main.ck
- Jack Mamba: jack-mamba -c mamba -d
- SSR: ssr -c /usr/ccrma/share/studios/listening_room/wfs/listening_room.conf
Remarks on the commands:
- The C++ software has to be run after the Processing script
- Jack has to run before every other audio part (Chuck, QSampler, Jack-Mamba, SSR, Ambdec)
- Piano samples for QSampler
- Bell sound file
Project blog
See the blog here.
References
Ahrens, Jens. Analytic Methods of Sound Field Synthesis. Berlin: Springer, 2012.
Ahrens, Jens, Matthias Geier and Sascha Spors. "The SoundScape Renderer: A Unified Spatial Audio Reproduction Framework for Arbitrary Rendering Methods." Audio Engineering Society Convention 124, 2008.
Stravinsky, I., W. Disney et al. "The Rite of Spring." In Fantasia, 1940.
Risset, J.-C. "Huit esquisses en duo pour un pianiste," In Electronic Music III. Acton, MA: Neuma, 1994.