
#### Geometric Interpretation of Least Squares

Typically, the number of frequency constraints is much greater than the number of design variables (filter coefficients). In these cases, we have an overdetermined system of equations (more equations than unknowns). Therefore, we cannot generally satisfy all the equations, and are left with minimizing some error criterion to find the optimal "compromise" solution.

In the case of least-squares approximation, we are minimizing the Euclidean distance, which suggests the geometrical interpretation shown in Fig. 4.19.

Thus, the desired vector $d$ is the vector sum of its best least-squares approximation $A\hat{h}$ plus an orthogonal error $e$:

$$
d = A\hat{h} + e
\qquad (5.42)
$$

In practice, the least-squares solution $\hat{h}$ can be found by minimizing the sum of squared errors:

$$
\hat{h} = \arg\min_{h} \left\Vert d - A h \right\Vert_2^2
\qquad (5.43)
$$

Figure 4.19 suggests that the error vector $e = d - A\hat{h}$ is orthogonal to the column space of the matrix $A$, hence it must be orthogonal to each column of $A$:

$$
A^T e = A^T\left(d - A\hat{h}\right) = 0
\qquad (5.44)
$$

This is how the orthogonality principle can be used to derive the fact that the best least-squares solution is given by

$$
\hat{h} = \left(A^T A\right)^{-1} A^T d
\qquad (5.45)
$$
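The normal-equations solution and the orthogonality of the residual can be checked numerically. The following sketch uses NumPy with made-up random data standing in for the frequency-constraint matrix and desired response; the names `A`, `d`, `h`, and `e` follow the notation above:

```python
import numpy as np

# Hypothetical overdetermined example: more frequency constraints (rows)
# than filter coefficients (columns).  A and d are illustrative random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))   # 20 equations, 4 unknowns
d = rng.standard_normal(20)        # desired vector

# Normal-equations solution  h = (A^T A)^{-1} A^T d   (Eq. 5.45)
h = np.linalg.solve(A.T @ A, A.T @ d)

# The residual e = d - A h is orthogonal to every column of A (Eq. 5.44):
e = d - A @ h
print(np.allclose(A.T @ e, 0))     # True (up to floating-point roundoff)
```

Solving the normal equations with `np.linalg.solve` is shown here for transparency; as noted below, library least-squares solvers are preferable numerically.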

In matlab, it is numerically superior to use ``h = A \ d`` as opposed to explicitly computing the pseudo-inverse as in ``h = pinv(A) * d``. For a discussion of numerical issues in matrix least-squares problems, see, e.g., [92].
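The same distinction exists outside Matlab. As a hedged NumPy analogue (random illustrative data), `np.linalg.lstsq` plays the role of the backslash operator, while forming the pseudo-inverse explicitly is slower and less accurate for ill-conditioned matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 6))   # illustrative overdetermined system
d = rng.standard_normal(50)

# Preferred: an SVD-based solver (NumPy's analogue of Matlab's  h = A \ d)
h_lstsq, *_ = np.linalg.lstsq(A, d, rcond=None)

# Explicit pseudo-inverse, as in  h = pinv(A) * d  -- agrees here, but
# forming pinv(A) costs more and loses accuracy when A is ill-conditioned.
h_pinv = np.linalg.pinv(A) @ d

print(np.allclose(h_lstsq, h_pinv))   # True for this well-conditioned A
```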

We will return to least-squares optimality in §5.7.1 for the purpose of estimating the parameters of sinusoidal peaks in spectra.

