One of the very first computer music techniques introduced was *additive
synthesis* [3]. It is based on Fourier's theorem, which
states that any sound can be constructed from elementary sinusoids, such as
those produced approximately by carefully struck tuning forks. Additive
synthesis attempts to apply this theorem to the synthesis of sound by
employing large banks of sinusoidal oscillators, each having independent
amplitude and frequency controls. Many analysis methods, e.g., the phase
vocoder, have been developed to support additive synthesis. A summary is
given in [5].
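As a concrete illustration (ours, not from the original text), the oscillator bank can be sketched directly as a sum of sinusoids. Here each partial has a fixed amplitude and frequency for simplicity, whereas full additive synthesis gives every oscillator independent time-varying amplitude and frequency envelopes:

```python
import math

def additive_synth(freqs, amps, dur=0.01, sr=44100.0):
    """Sum a bank of sinusoidal oscillators, one per partial.

    freqs, amps: per-partial frequency (Hz) and amplitude
    (held constant here; in practice both are time-varying).
    """
    n = int(dur * sr)
    out = [0.0] * n
    for f, a in zip(freqs, amps):
        phase_inc = 2.0 * math.pi * f / sr   # radians per sample
        for i in range(n):
            out[i] += a * math.sin(phase_inc * i)
    return out

# Example: a crude sawtooth-like tone from 8 harmonics of 220 Hz
sig = additive_synth([220.0 * k for k in range(1, 9)],
                     [1.0 / k for k in range(1, 9)])
```

Even this toy version shows why the method is expensive: the cost is one table lookup or `sin` evaluation per partial per sample, which is what motivates the hardware discussion below.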

While additive synthesis is very powerful and general, it has been held back from widespread usage due to its computational expense. For example, on a single DSP56001 digital signal-processing chip, clocked at 33 MHz, only about sinusoidal partials can be synthesized in real time using non-interpolated, table-lookup oscillators. Interpolated table-lookup oscillators are much more expensive, and when all the bells and whistles are added, and system overhead is accounted for, only around fully general, high-quality partials are sustainable at KHz on a DSP56001 (based on analysis of implementations provided by the NeXT Music Kit).

At CD-quality sampling rates, the note A1 on the piano requires sinusoidal partials, and at least the low-frequency partials should use interpolated lookups. Assuming a worst-case average of partials per voice, providing 32-voice polyphony requires partials, or around DSP chips, assuming we can pack an average of partials into each DSP. A more reasonable complement of DSP chips would provide only -voice polyphony, which is simply not enough for piano synthesis. However, since DSP chips are getting faster and cheaper, DSP-based additive synthesis looks viable in the future.
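The partial-count bookkeeping above can be sketched as follows. This is our illustration, assuming purely harmonic partials extending up to the Nyquist limit (real piano strings are slightly inharmonic, so the true count differs somewhat):

```python
def partials_below_nyquist(f0, sr=44100.0):
    """Count harmonics of fundamental f0 (Hz) below the Nyquist limit sr/2.

    Assumes a purely harmonic series; an illustrative helper,
    not a function from the original text.
    """
    return int((sr / 2.0) // f0)

# A1 on the piano has fundamental 55 Hz
n_a1 = partials_below_nyquist(55.0)
```

Low notes dominate the budget: halving the fundamental doubles the number of partials that fit below Nyquist, which is why worst-case (bass-note) counts drive the per-voice cost estimate.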

The cost of additive synthesis can be greatly reduced by designing special-purpose
VLSI optimized for sinusoidal synthesis. In a VLSI environment, the
major bottlenecks are *wavetables* and *multiplications*. Even
if a single sinusoidal wavetable is shared, it must be accessed
sequentially, inhibiting parallelism. The wavetable can be eliminated
entirely if *recursive algorithms* are used to synthesize sinusoids
directly.

In [1], three techniques were examined for generating sinusoids digitally by means of recursive algorithms. The recursions can be interpreted as implementations of second-order digital resonators in which the damping is set to zero. The three methods considered were

1. the *coupled form*, which is identical to a two-dimensional vector rotation,
2. the *modified coupled form*, or "*magic circle*" algorithm, which is similar to (1) but has ideal numerical behavior, and
3. the *direct-form, second-order, digital resonator* with its poles set on the unit circle.

These three recursions are defined as follows:

$$
\begin{aligned}
\text{coupled form:}\qquad x(n) &= c_n\,x(n-1) - s_n\,y(n-1)\\
y(n) &= s_n\,x(n-1) + c_n\,y(n-1)\\[4pt]
\text{magic circle:}\qquad x(n) &= x(n-1) + \epsilon_n\,y(n-1)\\
y(n) &= y(n-1) - \epsilon_n\,x(n)\\[4pt]
\text{direct form:}\qquad x(n) &= 2c_n\,x(n-1) - x(n-2)
\end{aligned}
$$

where $c_n \triangleq \cos(\omega_n T)$, $s_n \triangleq \sin(\omega_n T)$, $f_n \triangleq \omega_n/2\pi$ is the instantaneous frequency of oscillation (Hz) at time sample $n$, and $T$ is the sampling period in seconds. The magic circle parameter is $\epsilon_n \triangleq 2\sin(\omega_n T/2)$.
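As a numerical sketch (ours, not from [1]), the three recursions can be run side by side in double precision for a fixed frequency. Function and variable names are illustrative:

```python
import math

def coupled_form(f, sr, n):
    """Coupled form: a 2-D vector rotation by w*T radians per sample.
    Returns n samples of cos(2*pi*f*k/sr)."""
    c = math.cos(2 * math.pi * f / sr)
    s = math.sin(2 * math.pi * f / sr)
    x, y = 1.0, 0.0                          # phase 0: (cos, sin)
    out = []
    for _ in range(n):
        out.append(x)
        x, y = c * x - s * y, s * x + c * y  # simultaneous update
    return out

def magic_circle(f, sr, n):
    """Modified coupled form ("magic circle"): the second state update
    uses the already-updated first state, so only one coefficient
    (epsilon) is needed and the orbit stays bounded."""
    eps = 2.0 * math.sin(math.pi * f / sr)   # 2*sin(w*T/2)
    x, y = 1.0, 0.0
    out = []
    for _ in range(n):
        out.append(x)
        x = x + eps * y
        y = y - eps * x                      # note: the *new* x
    return out

def direct_form(f, sr, n):
    """Direct-form resonator with poles on the unit circle:
    x(k) = 2*cos(w*T)*x(k-1) - x(k-2)."""
    wT = 2 * math.pi * f / sr
    x1, x2 = math.cos(-wT), math.cos(-2 * wT)  # x(-1), x(-2)
    out = []
    for _ in range(n):
        x = 2.0 * math.cos(wT) * x1 - x2
        out.append(x)
        x2, x1 = x1, x
    return out
```

With exact arithmetic all three trace a unit-amplitude sinusoid; in finite precision their behaviors diverge (amplitude drift, coefficient sensitivity at low frequencies), which is what motivates comparing them for VLSI use.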

The digital waveguide oscillator appears to have the best overall properties yet seen for VLSI implementation. The new structure was derived as a spin-off from recent results in the theory and implementation of digital waveguides [6,7]. Any second-order digital filter structure can be used as a starting point for developing a corresponding sinusoidal signal generator, so in this case we begin with the second-order waveguide filter.

Copyright © Center for Computer Research in Music and Acoustics (CCRMA), Stanford University
