We can also drop the restriction to microphone/speaker line/plane arrays and allow the speakers to be laid out more generally, such as on a sphere (a typically available layout for ambisonics systems). The basic Huygens principle remains the same: when a wavefront reaches a speaker, it fires out a spherical secondary wave (or a hemispherical wave aimed toward the sphere's interior). This configuration was simulated with a plane-wave excitation, and a decent plane-wave reconstruction was obtained inside the spherical space. For a spherical speaker array, only half of the speakers are activated by a source outside the sphere.
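A simulation along these lines can be sketched in a few lines of NumPy. This is an illustrative simplification, not the simulation reported above: it uses monopole secondary sources with a simple cosine obliquity weighting (standing in for a proper driving function), a Fibonacci-sphere speaker layout, and assumed parameters (unit sphere, kR = 10). Only the illuminated half of the sphere is activated, and the reconstructed phase along the propagation axis should advance at roughly the free-field wavenumber:

```python
import numpy as np

N = 3000                       # number of speakers (assumed)
k = 10.0                       # wavenumber; unit sphere radius, so kR = 10
d = np.array([1.0, 0.0, 0.0])  # propagation direction of the incident plane wave

# quasi-uniform speaker layout via a Fibonacci sphere
i = np.arange(N)
golden = np.pi * (3.0 - np.sqrt(5.0))
z = 1.0 - 2.0 * (i + 0.5) / N
r = np.sqrt(1.0 - z * z)
spk = np.stack([r * np.cos(golden * i), r * np.sin(golden * i), z], axis=1)

# only speakers on the illuminated hemisphere are activated
cosang = -(spk @ d)            # cosine between inward normal and propagation
active = cosang > 0.0
drive = cosang[active] * np.exp(1j * k * (spk[active] @ d))  # sampled incident wave

# reconstructed pressure along the propagation axis inside the sphere
t = np.linspace(-0.5, 0.5, 64)
pts = t[:, None] * d[None, :]
dist = np.linalg.norm(pts[:, None, :] - spk[active][None, :, :], axis=2)
field = (drive[None, :] * np.exp(1j * k * dist) / (4 * np.pi * dist)).sum(axis=1)

phase = np.unwrap(np.angle(field))
slope = np.polyfit(t, phase, 1)[0]   # ~k for a good plane-wave reconstruction
frac_active = active.mean()          # ~0.5: half the sphere fires
```

The phase-slope check is a crude stand-in for inspecting the full interior field, but it captures the essential claim: the interior wavefronts travel in the intended direction at the right speed.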
More generally, when virtual sources are kept away from uniformly laid out speakers, Huygens' Principle tends to hold up pretty well, according to simulation results to date.
It remains preferable to have a sampling grid dense enough for the frequencies of interest, and we continue to prefer a separation plane between the sources and listeners (Fig. 1).
One argument in favor of a separation plane has to do with the fundamental limitation of Huygens' Principle, which only considers pressure (a scalar) and not velocity (a 3D vector). Without velocity matching, a line array of point sources creates a cylindrically symmetric output. A plane array of point sources similarly emits identical wavefronts in both directions away from the plane. If all listeners are on one side of the speaker array, then we don't care what happens on the other side. However, this argument can be overcome.
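The two-sided radiation is easy to verify numerically. In this sketch (assumed free-field monopoles and parameters), a line array of identical in-phase point sources produces exactly the same pressure at mirror-image points on either side of the array:

```python
import numpy as np

k = 5.0                                   # wavenumber (assumed)
xs = np.linspace(-10.0, 10.0, 401)        # monopole line array along the x-axis
src = np.stack([xs, np.zeros_like(xs)], axis=1)

def field(pt):
    """Summed free-field monopole pressure at a 2-D point (x, y)."""
    r = np.linalg.norm(pt - src, axis=1)
    return np.sum(np.exp(1j * k * r) / (4.0 * np.pi * r))

front = field(np.array([0.3,  2.0]))      # "listener" side of the array
back  = field(np.array([0.3, -2.0]))      # mirror point on the other side
```

By symmetry the two source-to-point distance sets are identical, so the fields agree exactly; pressure-only (monopole) secondary sources cannot distinguish the two sides.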
Since audio loudspeakers are normally baffled, we get an approximate hemispherical source from them, which is ideal in the limit of a continuous distribution of point sources. Also, matching pressure across both time and space implies velocity matching, since ultimately the two are tied together (in a source-free region) by the wave equation, as discussed further in the next subsection.
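The pressure-velocity tie-in can be made concrete with Euler's equation of motion for linear acoustics in a source-free region (a standard result, sketched here):

$$\rho_0 \frac{\partial \mathbf{v}}{\partial t} = -\nabla p,$$

so the particle velocity follows from pressure alone by time-integrating its spatial gradient,

$$\mathbf{v}(t) = -\frac{1}{\rho_0} \int_{-\infty}^{t} \nabla p \, d\tau.$$

Matching $p$ at every point over all time therefore matches $\nabla p$, and hence $\mathbf{v}$ as well.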
From our sampling point of view, what we want are speakers whose radiation pattern serves well as an interpolation kernel (the spatial shape of one sample) for reconstructing a soundfield, from its samples, in a single direction of propagation away from the speaker array. Like Huygens, we want to neglect velocity and work only with pressure samples, generating the correct velocities indirectly from pressure differentials along the array and across time; this works when the speakers are close enough together, as they are in a valid sampling grid.
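As a one-dimensional sanity check on recovering velocity from pressure differentials, the following sketch (assumed parameters) samples only the pressure of a traveling plane wave, estimates its spatial gradient by finite differences, time-integrates Euler's equation, and compares against the exact plane-wave relation $v = p/(\rho c)$:

```python
import numpy as np

rho, c = 1.2, 343.0            # air density (kg/m^3) and sound speed (m/s), assumed
f = 500.0                      # test frequency (Hz)
om = 2.0 * np.pi * f
k = om / c

x = np.linspace(0.0, 2.0, 2000)   # fine spatial sampling grid
dt = 1.0e-5                        # time step (s)
nsteps = 400                       # two periods at 500 Hz

def p(x, t):
    """Traveling plane wave: pressure samples only, no velocity information."""
    return np.cos(k * x - om * t)

# start from the known velocity at t = 0 (the integration constant)
v = p(x, 0.0) / (rho * c)

for n in range(nsteps):
    t0, t1 = n * dt, (n + 1) * dt
    g0 = np.gradient(p(x, t0), x)  # pressure differential along the array
    g1 = np.gradient(p(x, t1), x)
    # Euler's equation dv/dt = -(1/rho) dp/dx, trapezoidal time step
    v -= 0.5 * dt * (g0 + g1) / rho

v_exact = p(x, nsteps * dt) / (rho * c)
err = np.max(np.abs(v - v_exact)) / np.max(np.abs(v_exact))
```

With sampling this fine the reconstruction error is well under a percent; on a coarse grid (spacing approaching half a wavelength) the gradient estimate, and hence the velocity, degrades, which is the sampling-grid requirement restated.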
Another argument in favor of a source-listener separation plane is that there can be no standing waves when sound propagates generally from a set of sources to a set of listeners. Standing waves could pose problems if our microphone/speaker array happens to lie along a nodal line. A soundfield is uncontrollable and unobservable at nodes of vibration, leading to possible degeneracies in implementation. A progressive wavefield cannot have these problems.
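The nodal degeneracy is simple to demonstrate. In this illustrative sketch (arbitrary units), a one-dimensional standing wave $\sin(kx)\cos(\omega t)$ is identically zero at $x = \pi/k$ for all time, so a microphone there observes nothing and a speaker there excites nothing, while a progressive wave sampled at the same point remains fully observable:

```python
import numpy as np

k, om = np.pi, 2.0 * np.pi       # wavenumber and radian frequency (arbitrary units)
t = np.linspace(0.0, 4.0, 1000)  # a few periods of observation
x_node = 1.0                      # sin(k * x_node) = sin(pi) = 0: a pressure node

standing = np.sin(k * x_node) * np.cos(om * t)   # standing wave sampled at the node
progressive = np.cos(k * x_node - om * t)        # progressive wave, same position

# the node is silent for all time, so that mode is invisible to a sensor
# there and uncontrollable by an actuator there; the progressive wave is not
```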
It can also be taken as a simple design decision that we want our wavefronts to cross a separation plane from sources to listeners. This implies we are not trying to synthesize the reverberant field, as WFS does, and that we set up our arrays in normal acoustic environments, as opposed to the anechoic environments called for by WFS. WFS solves for the signals at all speakers enclosing the listening space so as to produce the desired interior field, even when that field contains sources, and including any reverberation. We are less ambitious with Huygens Arrays, and we can expect more robustness and intuition-guided design freedom as a result.