Huygens Octave Panels (HOP)

In audio, octave-based resolution is very common. If the upper limit of the audio bandwidth is taken to be 20 kHz, then the top octave covers 10-20 kHz, the next octave down is 5-10 kHz, and so on, giving crossover frequencies at 10, 5, 2.5, and 1.25 kHz, and 625, 312, 156, 78, and 39 Hz, the last of which can be taken as the crossover to the lowest octave, which includes 20 Hz. Thus, the complete audio spectrum spans 10 octaves, and the 9 crossover frequencies are given in kHz by $f_k = 20/2^k$, where $k = 1, 2, \ldots, 9$ is the octave number counting down from the top.
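The crossover-frequency formula above is easy to check numerically (a sketch; the variable name is ours):

```python
# Crossover frequencies f_k = 20/2^k kHz, for k = 1..9,
# counting octaves down from a 20 kHz upper band limit.
crossovers_hz = [20000 / 2**k for k in range(1, 10)]
print(crossovers_hz)
# 10000, 5000, 2500, 1250, 625, 312.5, 156.25, 78.125, 39.0625 Hz
```

Note that the exact values below 1 kHz (625, 312.5, 156.25, 78.125, 39.0625 Hz) are conventionally rounded to 625, 312, 156, 78, and 39 Hz in the text.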

It is common to use a subwoofer to take the low end (spanning 20-80
Hz for THX, 20-100 Hz for ``pro'', or 40-200 Hz for ``consumer''
quality levels, etc.^{27}),
so that only 8 (starting from 78 Hz) or 7 (starting from 156 Hz)
octave bands are needed from a multiresolution array.

It is instructive to look at the wavelengths in each band, since each speaker-driver diameter needs to be on the order of a wavelength. Let's define the center frequency of each octave as the geometric mean of its limits, so $f^c_k \triangleq \sqrt{f_{k-1} f_k} = \sqrt{2}\,f_k$, $k = 1, 2, \ldots, 10$ (with $f_0 = 20$ kHz), which gives

$f^c_k \approx 14142,\ 7071,\ 3536,\ 1768,\ 884,\ 442,\ 221,\ 110,\ 55,\ 28$

for the center frequencies in Hz. Then for a speed of sound $c = 343$ m/s, using $\lambda = c/f$, we obtain the center-frequency wavelengths to be

$\lambda^c_k \approx 2.4,\ 4.9,\ 9.7,\ 19.4,\ 38.8,\ 77.6,\ 155,\ 310,\ 621,\ 1242$

cm

or

$\lambda^c_k \approx 0.95,\ 1.9,\ 3.8,\ 7.6,\ 15.3,\ 30.6,\ 61.1,\ 122,\ 244,\ 489$

in

or

$\lambda^c_k \approx 0.08,\ 0.16,\ 0.32,\ 0.64,\ 1.3,\ 2.5,\ 5.1,\ 10.2,\ 20.4,\ 40.7$

ft.
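The center frequencies and wavelength tables can be reproduced as follows (a sketch; the speed of sound $c = 343$ m/s is an assumed room-temperature value, and the variable names are ours):

```python
import math

C = 343.0  # assumed speed of sound in m/s (approx., near 20 deg C)

# Octave-band center frequencies: geometric mean of the band limits,
# f_c(k) = sqrt(2) * 20000 / 2**k for k = 1..10.
centers_hz = [math.sqrt(2) * 20000 / 2**k for k in range(1, 11)]

# Center-frequency wavelengths, lambda = c/f, converted to cm, inches, feet.
wavelengths_cm = [100 * C / f for f in centers_hz]
wavelengths_in = [cm / 2.54 for cm in wavelengths_cm]
wavelengths_ft = [inch / 12 for inch in wavelengths_in]

for f, cm, ft in zip(centers_hz, wavelengths_cm, wavelengths_ft):
    print(f"{f:8.1f} Hz  ->  {cm:7.1f} cm  ({ft:5.2f} ft)")
```

The lowest-octave entry (about 28 Hz center, roughly 41 ft wavelength) drives the driver-size observation in the next paragraph.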

We see that even the consumer quality level calls for a lowest-octave driver diameter on the order of 10 feet, while THX and pro quality call for a 20-foot cone! Practical systems rarely use such large drivers. Instead, we settle for the top five or six octaves and drive the lowest octave with additional gain to reach the desired power level. In other words, the low-end speaker(s) operate in a ``rolling off'' region, radiating from only a fraction of a wavelength, and so they require a 6 dB boost for each halving of frequency in that zone. We can make up for driving less than a wavelength by applying extra power, to the extent that no audible turbulence is generated.
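The 6 dB-per-octave boost rule can be sketched as a small helper (the function, its name, and the example frequencies are illustrative, not from the original):

```python
import math

def rolloff_boost_db(f_hz, f_rolloff_hz):
    """dB boost needed at f_hz to compensate a 6 dB/octave rolloff
    below f_rolloff_hz (hypothetical helper illustrating the text)."""
    if f_hz >= f_rolloff_hz:
        return 0.0  # above the rolloff region: no boost needed
    # 6 dB for each halving of frequency below the rolloff frequency
    return 6.0 * math.log2(f_rolloff_hz / f_hz)

# E.g., driving two octaves below a 200 Hz rolloff needs ~12 dB of boost:
print(rolloff_boost_db(50.0, 200.0))  # 12.0
```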

Another point to consider is that sound localization is only
meaningful down to a fraction of a wavelength: wavelengths much
larger than our heads cannot be localized from the steady-state
soundfield, because our ears receive nearly identical signals from the
field.^{28} We generally localize a sound based on its higher-frequency
components, such as those above 500 Hz, where azimuth changes cause
noticeable Interaural Intensity Differences (IID), i.e., audible
``head shadowing''. The localization determined during a sound's
onset, which normally has the most high-frequency content, tends to
persist psychologically even after the high frequencies that localized
it have faded away.^{29}

Download the full paper: `http://arxiv.org/abs/1911.07575`

Copyright © Center for Computer Research in Music and Acoustics (CCRMA), Stanford University