

Gram-Schmidt Orthogonalization

Recall from the end of §5.10 above that an orthonormal set of vectors is a set of unit-length vectors that are mutually orthogonal. In other words, an orthonormal vector set is just an orthogonal vector set in which each vector $ \underline{s}_i$ has been normalized to unit length, i.e., replaced by $ \underline{s}_i/ \vert\vert\,\underline{s}_i\,\vert\vert $ .
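As a small concrete illustration (not from the text; the example vectors below are chosen arbitrarily), the following Octave/Matlab lines normalize two orthogonal vectors and verify the two defining properties of an orthonormal set, unit length and mutual orthogonality:

  % Illustrative check (example vectors chosen arbitrarily):
  s0 = [1; 1; 0];               % two orthogonal vectors in R^3
  s1 = [1; -1; 0];
  t0 = s0 / norm(s0);           % normalize each to unit length
  t1 = s1 / norm(s1);
  disp(t0' * t1);               % inner product = 0  (mutually orthogonal)
  disp([norm(t0), norm(t1)]);   % norms = 1 1        (unit length)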



Theorem: Given a set of $ N$ linearly independent vectors $ \underline{s}_0,\ldots,\underline{s}_{N-1}$ from $ \mathbb{C}^N$ , we can construct an orthonormal set $ \underline{\tilde{s}}_0,\ldots,\underline{\tilde{s}}_{N-1}$ whose members are linear combinations of the original vectors and which spans the same space.



Proof: We prove the theorem by constructing the desired orthonormal set $ \{\underline{\tilde{s}}_k\}$ sequentially from the original set $ \{\underline{s}_k\}$ . This procedure is known as Gram-Schmidt orthogonalization.

First, note that $ \underline{s}_k\ne \underline{0}$ for all $ k$ , since $ \underline{0}$ is linearly dependent on every vector (it is $ 0$ times any vector). Therefore, $ \vert\vert\,\underline{s}_k\,\vert\vert \ne 0$ .

The Gram-Schmidt orthogonalization procedure will construct an orthonormal basis from any set of $ N$ linearly independent vectors. Obviously, by skipping the normalization step, we could also form simply an orthogonal basis. The key ingredient of this procedure is that each new basis vector is obtained by subtracting out the projection of the next linearly independent vector onto the vectors accepted so far into the set. We may say that each new linearly independent vector $ \underline{s}_k$ is projected onto the subspace spanned by the vectors $ \{\underline{\tilde{s}}_0,\ldots,\underline{\tilde{s}}_{k-1}\}$ , and any nonzero projection in that subspace is subtracted out of $ \underline{s}_k$ to make the new vector orthogonal to the entire subspace. In other words, we retain only that portion of each new vector $ \underline{s}_k$ which ``points along'' a new dimension. The first direction is arbitrary and is determined by whatever vector we choose first ( $ \underline{s}_0$ here). The second vector is forced to be orthogonal to the first. The third is forced to be orthogonal to the first two (and thus to the 2D subspace spanned by them), and so on.
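In symbols, using the inner product defined earlier in this chapter, one way to write the procedure just described is $ \underline{y}_k = \underline{s}_k - \sum_{j=0}^{k-1}\left<\underline{s}_k,\underline{\tilde{s}}_j\right>\underline{\tilde{s}}_j$ followed by $ \underline{\tilde{s}}_k = \underline{y}_k/\vert\vert\,\underline{y}_k\,\vert\vert $ (the symbol $ \underline{y}_k$ is introduced here only for convenience). The following Octave/Matlab function is a minimal sketch of this recursion, given as an illustration only (it is not the code of Appendix I); the columns of the input matrix are assumed linearly independent:

  % Minimal Gram-Schmidt sketch (illustration only).
  % S is an N-by-K matrix whose columns are linearly independent;
  % the columns of T come out orthonormal and span the same space.
  function T = gram_schmidt(S)
    [N, K] = size(S);
    T = zeros(N, K);
    for k = 1:K
      v = S(:, k);                         % next linearly independent vector
      for j = 1:k-1
        v = v - (T(:, j)' * v) * T(:, j);  % subtract projection onto accepted vectors
      end
      T(:, k) = v / norm(v);               % keep only the new direction, normalized
    end
  end

  % Example usage:  T = gram_schmidt(randn(4, 3));  disp(T' * T)  % ~ identity

Note that the inner loop uses the conjugate transpose ('), so the same sketch applies to complex vectors in $ \mathbb{C}^N$ .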

This chapter can be considered an introduction to some important concepts of linear algebra. The student is invited to pursue further reading in any textbook on linear algebra, such as [49].

Matlab/Octave examples related to this chapter appear in Appendix I.




``Mathematics of the Discrete Fourier Transform (DFT), with Audio Applications --- Second Edition'', by Julius O. Smith III, W3K Publishing, 2007, ISBN 978-0-9745607-4-8
Copyright © 2024-02-20 by Julius O. Smith III
Center for Computer Research in Music and Acoustics (CCRMA),   Stanford University