Below is a rather complex, not-so-elementary example. It has to do with the spatialization of sound sources. We post this code mainly for legacy and historical reasons, and to outline more features of Snd's s7. Its complexity buys flexibility at the expense of a somewhat steep (hopefully not too steep) learning curve. With some time and insight, a great deal can be learned from the programming shown here. Readers who find it intimidating should feel free to set this subject and implementation aside for now, and come back to it later.
On this web page we present an instance of a theory and its applications using a computer model, though it should be acknowledged that a complete discussion of this subject is well beyond the scope of these introductory pages. We cover elementary concepts pertaining to the spatialization and motion of a sound source, given a path in a two-dimensional plane, with the expectation that readers can build upon and extend them, and hopefully go deeper into now-accessible sound diffusion techniques such as Ambisonics, VBAP, and perhaps wave field synthesis. By understanding the code here, the reader should at least become curious about the following topics: spatial perception of sound, intensity panning, the Doppler effect, motion of sound sources, reverberation, Lissajous figures, and localization of sources, as well as structured and functional programming using Scheme.
Using Lissajous figures in a musical context is a technique pursued by John Chowning and others in the days of SAIL, the Stanford Artificial Intelligence Laboratory. A complete account of this research appears in John Chowning's Computer Music Journal paper titled “The Simulation of Moving Sound Sources”. Readers interested in this subject are advised to get their hands on this paper. Listening to Turenas, one of the pioneering computer music pieces, is also recommended. Although widely available as a stereo recording, multichannel renditions can also be obtained. By listening to this piece, people most certainly get acquainted with the nature of acoustical space manipulation.
The code below makes use of J. Chowning's Lissajous equations, but this time with a white noise sound source so that the effect is better perceived. Doppler shift is added so that a person in the sweet spot hears sounds coming and going. Keep in mind that Doppler is a function of distance, as well as of the speed of the source, which in turn depends on time. Reverberation is also an integral part, giving the illusion of an enclosed space as a listening environment. More on the subject is widely available in publications all around. There is even a Stanford course on the subject of “Sound in Space”, taught by Fernando Lopez-Lezcano, who has been researching the subject for years now. Among other people contributing deeply to the field worth mentioning are Dick Moore, Gary Kendall, Pablo di Liscia, Juan Pampin, Joe Anderson, and Pablo Cetta, just to name a few. Before getting into the code, a bit of theory as outlined in Dick Moore's book “Elements of Computer Music” may prove helpful for better understanding these processes. It seems worth remarking that the code below, applying intensity panning with J. Chowning's Lissajous functions, was based on Dick Moore's CMusic panning unit generator written in the C language.
“In the production of computer music, the typical problem is not to simulate a particular concert hall or room with any precision but to impart a spatial quality to sounds generated either by modification of recorded sounds or by methods of pure synthesis.”
Dick Moore's quote above relates “spatial quality” to the perception of sounds. This can also be portrayed as sound coming from everywhere in three dimensions. In this picture, the localization of sounds becomes a compositional parameter. For this it is necessary to consider the processing of signals both by digital signal processing (DSP) and by our own hearing mechanism. Paraphrasing Moore again, we define the quest for sound spatialization as follows:
The problem of sound spatialization is then the problem of gaining prescriptive control over positions of virtual sound sources within an imaginary, virtual, or illusory space in which such events may occur. For this we need to keep in mind that:
With regard to intensity, it should be said that sounds in the real world coming from directly in front of or behind the listener reach both ears with equal intensity, while those coming from the right or left reach one ear with slightly more intensity than the other. A general impression of directional intensity may be simulated through the use of “intensity panning”. When using a multichannel speaker system, we can provide ideal intensity cues only for the directions defined by the positions of the speakers. At azimuth angles intermediate between any two of these directions, we can distribute the sound between adjacent pairs of loudspeakers.
Generally, we can control the intensity of a sound in each playback channel by using a gain factor that multiplies the waveform of the sound undergoing spatialization. Because such a gain factor multiplies the waveform directly, it represents a direct control on the amplitude rather than the intensity. Recall that intensity is also a function of distance: intensity changes with distance following the inverse square law, being inversely proportional to the square of the distance from the source to the listener. Therefore amplitude is proportional to the square root of intensity for linear amplitudes.
To maintain a constant sound intensity at the listener's position for all intermediate positions of the virtual sound source as it pans from left to right, we require that the total intensity be constant. In mathematical terms we can think of a relation like g² + h² = K, where 'g' and 'h' are the respective gains of each speaker and 'K' is a constant. If we look at the identity sin²(θ) + cos²(θ) = 1, we see that the sum of the squares of sine and cosine is one, which is always constant.
Therefore, by using this relation, we can always guarantee that at every azimuth angle theta of a sound source, the sum of the squares of the gains will always be one, a constant. But keep in mind that intensity is proportional to the inverse square of distance. In order to create a realistic illusion while using intensity panning, we need to add the distance component. Distance here is the distance between the listener and the loudspeaker. For a realistic pan we need to use the following equations:
g₁(θ, d) = cos(θ)/d and g₂(θ, d) = sin(θ)/d, where 'g' is the gain factor of each loudspeaker, theta is the azimuth angle, and 'd' is distance. A physicist's description of intensity outlines:
Energy from the motion of sound waves flows through the eardrums and into the inner ear, where it is registered as sound. Intensity 'I' is the energy 'E' per unit of time 't' that is flowing across a surface of unit area 'a'. Therefore I = E/(t·a) = P/a, where P = E/t is the power. Power for this purpose corresponds to the amplitude of a sound.
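As a quick numeric illustration of the inverse square law and the amplitude/intensity relation described above (a sketch of ours, not part of the original page; the function names are made up):

```python
import math

def intensity_at(distance, ref_intensity=1.0):
    """Intensity falls off with the inverse square of distance."""
    return ref_intensity / (distance ** 2)

def amplitude_at(distance, ref_intensity=1.0):
    """Linear amplitude is proportional to the square root of intensity."""
    return math.sqrt(intensity_at(distance, ref_intensity))

# Doubling the distance quarters the intensity but only halves the amplitude.
i2 = intensity_at(2.0)   # 0.25
a2 = amplitude_at(2.0)   # 0.5
```

This is why the panning gains below divide by d rather than d²: the gain multiplies the waveform, i.e. the amplitude.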
For a circular pan without a hole in the middle we can use the equal-power equations g₁(θ) = cos(θ) and g₂(θ) = sin(θ).
Now the above equations need to be implemented; for this purpose a stereo panning program needs to be coded. Below is our first elementary “stereo” intensity panning program. The only trick here is that we adjust the angular phase to start at pi/4 so that the sound starts moving from one loudspeaker to the other. A constant called 'cfactor' stores the value of the square root of two over two, which is equal to both sin(pi/4) and cos(pi/4). The program is commented to help in understanding what is going on.
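The original s7 listing is not reproduced on this page; the following Python sketch illustrates the same equal-power math (the names pan_stereo, sample_rate, and the 440 Hz test tone are our own choices):

```python
import math

def pan_stereo(samples):
    """Equal-power pan of a mono signal from left to right.

    The azimuth sweeps from 0 to pi/2 over the whole signal, so the
    gains cos(theta) and sin(theta) always satisfy g^2 + h^2 = 1.
    """
    n = len(samples)
    left, right = [], []
    for i, x in enumerate(samples):
        theta = (math.pi / 2) * i / n   # azimuth, 0 .. pi/2
        left.append(math.cos(theta) * x)
        right.append(math.sin(theta) * x)
    return left, right

# Pan one second of a 440 Hz sine across the stereo field.
sr = 44100
sine = [math.sin(2 * math.pi * 440 * i / sr) for i in range(sr)]
left, right = pan_stereo(sine)
```

At the first sample all of the energy is in the left channel; at the last it has moved to the right, with constant total power throughout.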

The program above pans a simple sine wave between two “stereo” channels. Here we add useful features which make it longer but more flexible. Instead of going from left to right, we can make the sound go around several times by setting the 'cycles' parameter: a value of two is two complete rounds, a value of four is four rounds. Likewise, a 'direction' parameter can also be toggled. Notice that the equations are implemented inside the main loop. The azimuth is incremented at the sample-rate level so that the signal smoothly goes from one channel to the other. Keep in mind that the angular phase here changes as time goes by.
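Sketched in Python (again, our own names; the original s7 code is not shown here), the per-sample azimuth increment derived from 'cycles' and 'direction' looks like this:

```python
import math

def circular_pan(samples, cycles=1, direction=1):
    """Equal-power circular pan: 'cycles' full rounds, direction +1 or -1.

    The azimuth advances by a fixed increment every sample, so the
    angular phase is a function of time, as described above.
    """
    n = len(samples)
    incr = direction * cycles * 2 * math.pi / n  # azimuth step per sample
    theta = math.pi / 4                          # start between the speakers
    left, right = [], []
    for x in samples:
        left.append(math.cos(theta) * x)
        right.append(math.sin(theta) * x)
        theta += incr
    return left, right
```

Calling it with cycles=2 makes the sound go around twice; direction=-1 reverses the rotation.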


 Listen to the sound going from one speaker to the other in a circular way.
 We can set the number-of-channels variable *clm-channels* to default to two channels (stereo sound):

Let's try to make the sound go around twice:

And finally, here we make several rounds in the opposite direction:

Below is a screenshot of Snd showing a stereo sound file generated by the function calls above.
In the image above a sound file with two channels can be seen. Notice that the phase on each channel is different. This means that the intensity of one channel differs from the other, but if you add their squares together at every point, they add up to a constant, here one. From a perceptual standpoint, the sound is heard first coming from one side and then going to the other. Because of the constant-power (sin² + cos² = 1) relationship, this effect creates the illusion of circular motion. Here a sound can be heard making a round, or perhaps several rounds.

LISSAJOUS FIGURES
———————————————————
Lissajous figures, familiar to most physical scientists and engineers, connote harmony, order, and stability. Lissajous figures are named after the French mathematician Jules Antoine Lissajous, but are also known as Bowditch curves after Nathaniel Bowditch, a mathematician from Salem, Massachusetts, who discovered them around 1815. Lissajous figures were sometimes displayed on oscilloscopes meant to simulate high-tech equipment in science-fiction TV shows and movies in the 1960s and 1970s. Lissajous curves are the family of curves described by the parametric equations x(t) = A·sin(a·t + δ), y(t) = B·sin(b·t). With these equations we get x/y pairs that, plotted in rectangular coordinates, produce curves depending on the factors and angles of the equations. MatLab or Octave, for that matter, are very useful for plotting Lissajous curves.
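The parametric equations can also be evaluated directly; here is a small Python sketch (the 3:2 ratio and phase offset are arbitrary illustrative values, not Chowning's):

```python
import math

def lissajous(a, b, delta, n=1000, A=1.0, B=1.0):
    """Return n (x, y) points of the Lissajous curve
    x = A*sin(a*t + delta), y = B*sin(b*t) for t in [0, 2*pi)."""
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        pts.append((A * math.sin(a * t + delta),
                    B * math.sin(b * t)))
    return pts

# A 3:2 frequency ratio with a 90-degree phase offset gives a classic figure.
points = lissajous(3, 2, math.pi / 2)
```

Feeding these (x, y) pairs to any plotting tool reproduces the familiar looping patterns.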
 But what about these figures? John Chowning had been experimenting with drawing tools on early computer devices to get points for creating motion of sound using azimuth and Doppler shift parameters. Motion of the source from one point to another gives a difference in location. By using a mouse-like device he was able to control points that generated graphic output, though no sound. In this way there was no real interaction with sound, and mapping parameters from graphs to composition tended to be a cumbersome method. But because of being in a laboratory environment such as SAIL, David Poole, another researcher at the time, pointed out that these drawing patterns looked like a Lissajous figure. Curiosity was sparked by his comment, leading Chowning to learn about and program Lissajous figures.
“I quickly advanced through the well-known looping patterns and discovered that interesting figures could be generated, and whose sound manifestation possessed a graceful motion that seemed to me natural —as the sound followed the path of a Lissajous figure it decelerated and accelerated as it approached and left a change in direction.”
In regards to Turenas and spatial motion of sound sources Chowning goes further on Lissajous figures:
“In Turenas I made full use of the newly acquired control of sounds in space. The spatial trajectories are both curvilinear and linear motions. The linear trajectories are sometimes expressed by radical changes in timbre as the sounds pass through the listener space. Thus, computer synthesis allowed me to achieve synchronous control over spatial trajectories and timbral transformations.”
Here are John Chowning's Lissajous equations, having sine and cosine components:
Below is a graph of these Lissajous Equations on the x/y plane:
Interestingly enough, this image shows four corners, each more or less at 45 deg., where the loudspeakers of a four-channel two-dimensional system are located. We can see that the traces here outline sound paths from one quadrant to another, or even from the center outwards, and so forth. For J. Chowning the diameter, if we could draw a circumference, was well beyond 10 meters, an illusory space beyond the inward space bounded by the listener and the four loudspeakers. Therefore some paths are longer than others, consequently changing the time and speed of the sound source accordingly. Intensity gain, or panning, in this four-sided space changes with the traces outlined by this Lissajous figure. With other localization cues such as Doppler and reverberation, listeners perceive sounds coming and going, from one side to the other.
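A position on such a path can be reduced to the two quantities a panner needs, an azimuth angle and a distance. A sketch (function name is ours):

```python
import math

def position_cues(x, y):
    """Convert a source position on the x/y plane (listener at the origin)
    into an azimuth angle in degrees [0, 360) and a distance."""
    azimuth = math.degrees(math.atan2(y, x)) % 360.0
    distance = math.hypot(x, y)
    return azimuth, distance

# A point on the diagonal sits at 45 degrees, sqrt(2) away from the listener.
az, d = position_cues(1.0, 1.0)
```

Evaluating this for every point along a Lissajous trace yields the time-varying azimuth and distance that drive the panning gains, Doppler, and reverberation described below.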
Coming back to the question posed before, of how we go from two-channel “stereo” to four channels and perhaps beyond, it is important to keep in mind that the listening space is 360 deg. and that any sound source will come from somewhere around this circle. To cue an angular position, an energy ratio (gain or attenuation) is applied to the direct signal on each loudspeaker pair. Since we have four quadrants, angles between loudspeaker pairs are 90 deg. in relation to the listener. The obvious means of changing the ratio of the direct signal for the moving source is to make the energy applied to the loudspeaker pairs proportional to the angle of displacement, very much like we did in the two-channel example; but after the calculations for channel one and channel two, we calculate energy ratios for channel two and channel three, and so forth. For this purpose we need to come up with a function that helps find the energy ratios for each of the four loudspeakers. We can start with the following equation, as stated by Dick Moore in Elements of Computer Music:
G_n(θ) = cos(θ − θ_n), applied when the source lies within 90 deg. of speaker n, and zero otherwise. Above, G_n is the gain of speaker n, theta is the position of the sound source, and theta_n is the angle of loudspeaker n. This function is implemented in s7 Scheme as follows:
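The s7 function itself is not reproduced here; a Python sketch of the same idea, taking the source azimuth, the speaker azimuth, and the distance (per the inverse-square discussion above, amplitude is scaled by 1/d; all names are ours):

```python
import math

def speaker_gain(source_deg, speaker_deg, distance):
    """Gain for one loudspeaker: cosine of the angular difference,
    zero outside +/-90 degrees, attenuated by distance."""
    diff = (source_deg - speaker_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(diff) >= 90.0:
        return 0.0
    return math.cos(math.radians(diff)) / distance

# Four speakers at 45, 135, 225, 315 degrees; a source at 45 deg, 1 m away
# feeds only the front-right speaker.
gains = [speaker_gain(45.0, sp, 1.0) for sp in (45.0, 135.0, 225.0, 315.0)]
```

For a source between two speakers, the two cosine terms share the energy, just as sine and cosine did in the stereo case.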

The function above takes the angle where the sound source is and the angle of the standing speaker, in addition to the distance. From our main program we call this function to calculate the gain for each speaker depending upon position. Note that the function also gives gains for positions in between loudspeakers. Recall that gain is inversely proportional to distance; therefore gain here is among the localization cues.
Furthermore, in order to simulate the distance cue, a reverberant signal should be synthesized in addition to the direct signal (as above), such that the intensity of the direct signal decreases more with distance than does the reverberant signal. Thus, the further from the listener, the more the overall gain or amplitude is attenuated. It is assumed that in a small space the amplitude of the reverberant signal produced by a sound source at constant intensity but varying distances from the listener changes little, while in a large space it changes somewhat.
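One common scheme, following Chowning's practice of scaling the reverberant send more gently than the direct signal (here 1/sqrt(d) versus 1/d; this particular pair of exponents is our assumption, not stated on this page):

```python
import math

def distance_gains(distance):
    """Direct signal falls off as 1/d, reverberant signal as 1/sqrt(d),
    so the direct-to-reverb ratio itself becomes a distance cue."""
    direct = 1.0 / distance
    reverb = 1.0 / math.sqrt(distance)
    return direct, reverb

# At 4 m the direct signal is attenuated twice as much as the reverb.
direct, reverb = distance_gains(4.0)
```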
As for motion of the sound source and velocity cues, we know that a static listener receives velocity information from a moving sound source at a rate of change proportional to speed. Similarly, a frequency shift can tell whether the source is getting closer or moving away. This is because of Doppler shifts, better known as the Doppler effect. The simulation of the Doppler effect is achieved, simply, by scaling the unit of distance in meters and making the change in frequency proportional to the rate of change of distance over time. Our implementation of Doppler shifts is done using delay lines: we get frequency changes by changing the length of a delay line. There is a delay line for each loudspeaker, whose length is calculated from the distance of the sound source to the listener.
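A minimal sketch of this technique: a delay whose length in samples is distance divided by the speed of sound times the sample rate, read with linear interpolation (the function name and parameter defaults are ours):

```python
def doppler_delay(samples, distances, sample_rate=44100, speed_of_sound=344.0):
    """Read the input through a delay whose length tracks source distance.
    As the distance shrinks or grows, the read point moves faster or
    slower than real time, producing the corresponding Doppler shift."""
    out = []
    for i, d in enumerate(distances):
        delay = d / speed_of_sound * sample_rate   # delay in samples
        pos = i - delay                            # fractional read position
        if pos < 0:
            out.append(0.0)                        # sound has not arrived yet
            continue
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] * (1.0 - frac) + nxt * frac)  # linear interpolation
    return out
```

With a constant distance the output is only delayed; a changing distance bends the pitch up or down.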
As pointed out before, reverberation is also an essential part of the distance cue. Perceived reverberation also gives information about the size of the room and space, including its shape. Plainly speaking, reverberation might be defined as the persistence of sound after its excitation. Quantitatively, it is regarded as the collection of sounds reflected from the surfaces in an enclosed space such as an auditorium. The direct sound received is followed by distinct early reflections and then a collection of many late reflections which blend and overlap, giving the characteristics of an auditorium or even a virtual space. John Chowning again points out: “In simulating a sound source in an enclosed space, then, it is desirable for the artificial reverberation to surround the listener and to be spatially diffuse.”
Below is our intensity panning program written in Snd's s7. In addition to Lissajous figures as a basis for spatial gestures, this code implements the features just described for obtaining cues to localize a sound around a space. It is assumed that the listener is at the center (sweet spot), surrounded by a set of loudspeakers, each at a fixed distance from the listening spot. It is important to point out that the position of the sound source changes with time and depends on the angular position within a 360 deg. circle. Sound-path gestures are performed beyond the perimeter outlined by the loudspeakers, on a bigger illusory space behind them. Note that the Lissajous image above shows several cycles (iterations) of the Lissajous equations; in order to achieve full gestures we don't need complete iterations of the equations.
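The s7 program itself is not reproduced on this page. As an illustration only, the pieces described above can be combined as follows (a Python sketch with our own names and arbitrary parameter choices; the Doppler delay and reverb send are omitted for brevity):

```python
import math

SPEAKERS = (45.0, 135.0, 225.0, 315.0)   # azimuths of a quad setup, in degrees

def speaker_gain(source_deg, speaker_deg, distance):
    """Per-speaker gain: cosine of the angular difference, zero outside
    +/-90 degrees, attenuated by distance."""
    diff = (source_deg - speaker_deg + 180.0) % 360.0 - 180.0
    return math.cos(math.radians(diff)) / distance if abs(diff) < 90.0 else 0.0

def quad_pan(samples, a=3, b=2, delta=math.pi / 2, radius=10.0):
    """Move a mono signal along a Lissajous path around the listener.
    Each sample's x/y position yields an azimuth and a distance, from
    which the four speaker gains are computed."""
    n = len(samples)
    out = [[], [], [], []]
    for i, x in enumerate(samples):
        t = 2 * math.pi * i / n
        px = radius * math.sin(a * t + delta)     # Lissajous position
        py = radius * math.sin(b * t)
        az = math.degrees(math.atan2(py, px)) % 360.0
        d = max(math.hypot(px, py), 1.0)          # clamp to avoid blow-up at center
        for ch, sp in enumerate(SPEAKERS):
            out[ch].append(speaker_gain(az, sp, d) * x)
    return out
```

Adding the Doppler delay per channel and a global reverberant send, as described above, completes the set of localization cues.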



 Listen to the sound going from one speaker to the other.
 We can set the number-of-channels variable *clm-channels* to default to four channels:

Let's try to make more gestures. Keep in mind that the duration, or length, of the traces is a function of time: the more cycles, the more, but shorter, traces.

Now we need to add some “reverb”. There are several kinds of reverb in Snd; you can choose among “nrev”, “jc-reverb”, and “freeverb”. For now let's choose “freeverb”.


Listen to the full location cues by adding “freeverb” reverb:

J. Chowning's Lissajous equations tend to work better on longer sounds. Below is an example with several traces.


© Copyright 2001-2022 CCRMA, Stanford University. All rights reserved.
Created and Maintained by Juan Reyes