
General Conditions

This section summarizes and extends the above derivations in a more formal manner (following portions of chapter 4 of [Noble]). In particular, we establish that the sum of projections of $ x\in\mathbb{C}^N$ onto $ N$ vectors $ \underline{s}_k\in\mathbb{C}^N$ will give back the original vector $ x$ whenever the set $ \{\underline{s}_k\}_0^{N-1}$ is an orthogonal basis for $ \mathbb{C}^N$ .
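
As a numerical preview of this result, here is a short sketch (an illustration of mine, not from the text; it uses Python with numpy and obtains an orthogonal basis from a QR factorization) that sums the projections of a random vector onto the basis vectors and recovers the original vector:

\begin{verbatim}
import numpy as np

N = 5
rng = np.random.default_rng(0)

# Any orthogonal basis for C^N will do; here we take one from
# the QR factorization of a random complex matrix.
Q, _ = np.linalg.qr(rng.standard_normal((N, N))
                    + 1j * rng.standard_normal((N, N)))
basis = Q.T                      # rows are the basis vectors s_k

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Projection of x onto s_k is (<x,s_k>/<s_k,s_k>) s_k;
# np.vdot(s, x) computes sum_n conj(s_n) x_n = <x, s>.
x_rec = sum((np.vdot(s, x) / np.vdot(s, s)) * s for s in basis)

assert np.allclose(x_rec, x)     # sum of projections returns x
\end{verbatim}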



Definition: A set of vectors is said to form a vector space if, given any two members $ x$ and $ y$ from the set, the vectors $ x+y$ and $ cx$ are also in the set, where $ c\in\mathbb{C}$ is any scalar.



Definition: The set of all $ N$ -dimensional complex vectors is denoted $ \mathbb{C}^N$ . That is, $ \mathbb{C}^N$ consists of all vectors $ \underline{x}= (x_0,\ldots,x_{N-1})$ defined as a list of $ N$ complex numbers $ x_i\in\mathbb{C}$ .



Theorem: $ \mathbb{C}^N$ is a vector space under elementwise addition and multiplication by complex scalars.



Proof: This is a special case of the following more general theorem.



Theorem: Let $ M\le N$ be an integer greater than 0. Then the set of all linear combinations of $ M$ vectors from $ \mathbb{C}^N$ forms a vector space under elementwise addition and multiplication by complex scalars.



Proof: Let the original set of $ M$ vectors be denoted $ \underline{s}_0,\ldots,\underline{s}_{M-1}$ . Form

$\displaystyle x_0 = \alpha_0 \underline{s}_0 + \cdots + \alpha_{M-1}\underline{s}_{M-1}$

as a particular linear combination of $ \{\underline{s}_i\}_{i=0}^{M-1}$ . Then

$\displaystyle cx_0 = c\alpha_0 \underline{s}_0 + \cdots + c\alpha_{M-1}\underline{s}_{M-1}$

is also a linear combination of $ \{\underline{s}_i\}_{i=0}^{M-1}$ , since complex numbers are closed under multiplication ( $ c\alpha_i\in\mathbb{C}$ for each $ i$ ). Now, given any second linear combination of $ \{\underline{s}_i\}_{i=0}^{M-1}$ ,

$\displaystyle x_1 = \beta_0 \underline{s}_0 + \cdots + \beta_{M-1}\underline{s}_{M-1},$

the sum is

\begin{eqnarray*}
x_0 + x_1 &=& (\alpha_0 \underline{s}_0 + \cdots + \alpha_{M-1}\underline{s}_{M-1})
+ ( \beta_0 \underline{s}_0 + \cdots + \beta_{M-1}\underline{s}_{M-1}) \\
&=& (\alpha_0 + \beta_0) \underline{s}_0 + \cdots + (\alpha_{M-1} + \beta_{M-1})\underline{s}_{M-1}
\end{eqnarray*}

which is yet another linear combination of the original vectors (since complex numbers are closed under addition). Since scalar multiples and vector sums of linear combinations of the original $ M$ vectors from $ \mathbb{C}^N$ are again linear combinations of those same $ M$ vectors, the defining properties of a vector space are satisfied. $ \Box$

Note that the closure of vector addition and scalar multiplication is ``inherited'' from the closure of the complex numbers under addition and multiplication.



Corollary: The set of all linear combinations of $ M$ real vectors $ x\in\mathbb{R}^N$ , using real scalars $ \alpha_i\in\mathbb{R}$ , forms a vector space.



Definition: The set of all linear combinations of a set of $ M$ complex vectors from $ \mathbb{C}^N$ , using complex scalars, is called a complex vector space of dimension $ M$ .



Definition: The set of all linear combinations of a set of $ M$ real vectors from $ \mathbb{R}^N$ , using real scalars, is called a real vector space of dimension $ M$ .



Definition: If a vector space consists of the set of all linear combinations of a finite set of vectors $ \underline{s}_0,\ldots,\underline{s}_{M-1}$ , then those vectors are said to span the space.



Example: The coordinate vectors in $ \mathbb{C}^N$ span $ \mathbb{C}^N$ since every vector $ x\in\mathbb{C}^N$ can be expressed as a linear combination of the coordinate vectors as

$\displaystyle x= x_0 \underline{e}_0 + x_1 \underline{e}_1 + \cdots + x_{N-1}\underline{e}_{N-1}$

where $ x_i\in\mathbb{C}$ , and

\begin{eqnarray*}
\underline{e}_0 &=& (1,0,0,\ldots,0),\\
\underline{e}_1 &=& (0,1,0,\ldots,0),\\
\underline{e}_2 &=& (0,0,1,0,\ldots,0)\hbox{, and so on up to }\\
\underline{e}_{N-1} &=& (0,\ldots,0,1).
\end{eqnarray*}
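
For instance (a small numpy check of my own), reconstructing a vector from its coordinates relative to the coordinate vectors is just a weighted sum of the rows of the identity matrix:

\begin{verbatim}
import numpy as np

N = 4
x = np.array([1 + 2j, 3 - 1j, 0.5j, -2 + 0j])

E = np.eye(N)       # row i is the coordinate vector e_i

# x = x_0 e_0 + x_1 e_1 + ... + x_{N-1} e_{N-1}
x_rebuilt = sum(x[i] * E[i] for i in range(N))

assert np.allclose(x_rebuilt, x)
\end{verbatim}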



Definition: The vector space spanned by a set of $ M<N$ vectors from $ \mathbb{C}^N$ is called an $ M$ -dimensional subspace of $ \mathbb{C}^N$ .



Definition: A vector $ \underline{s}\in\mathbb{C}^N$ is said to be linearly dependent on a set of $ M\le N$ vectors $ \underline{s}_i\in\mathbb{C}^N$ , $ i=0,\ldots,M-1$ , if $ \underline{s}$ can be expressed as a linear combination of those $ M$ vectors.

Thus, $ \underline{s}$ is linearly dependent on $ \{\underline{s}_i\}_{i=0}^{M-1}$ if there exist scalars $ \{\alpha_i\}_{i=0}^{M-1}$ such that $ \underline{s}= \alpha_0\underline{s}_0 + \alpha_1\underline{s}_1 + \cdots + \alpha_{M-1}\underline{s}_{M-1}$ . Note that the zero vector is linearly dependent on every collection of vectors.
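
One practical way to test linear dependence numerically (a sketch under my own conventions, stacking the $ \underline{s}_i$ as matrix columns) is to solve for the coefficients in the least-squares sense and check whether the residual vanishes:

\begin{verbatim}
import numpy as np

def linearly_dependent(s, S, tol=1e-10):
    """True if s is (numerically) a linear combination
    of the columns of S."""
    alpha, *_ = np.linalg.lstsq(S, s, rcond=None)
    return np.linalg.norm(S @ alpha - s) < tol

# Columns s_0 and s_1 span the first two coordinates of C^3.
S = np.array([[1, 0], [0, 1], [0, 0]], dtype=complex)

print(linearly_dependent(np.array([2, 3, 0], dtype=complex), S))  # True
print(linearly_dependent(np.array([0, 0, 1], dtype=complex), S))  # False
\end{verbatim}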



Theorem: (i) If $ \underline{s}_0,\ldots,\underline{s}_{M-1}$ span a vector space, and if one of them, say $ \underline{s}_m$ , is linearly dependent on the others, then the same vector space is spanned by the set obtained by omitting $ \underline{s}_m$ from the original set. (ii) If $ \underline{s}_0,\ldots,\underline{s}_{M-1}$ span a vector space, we can always select from these a linearly independent set that spans the same space.



Proof: Any $ x$ in the space can be represented as a linear combination of the vectors $ \underline{s}_0,\ldots,\underline{s}_{M-1}$ . By expressing $ \underline{s}_m$ as a linear combination of the other vectors in the set, the linear combination for $ x$ becomes a linear combination of vectors other than $ \underline{s}_m$ . Thus, $ \underline{s}_m$ can be eliminated from the set, proving (i). To prove (ii), we can define a procedure for forming the required subset of the original vectors: First, assign $ \underline{s}_0$ to the set. Next, check to see if $ \underline{s}_1$ is linearly dependent on $ \underline{s}_0$ . If so (i.e., $ \underline{s}_1$ is a scalar times $ \underline{s}_0$ ), then discard $ \underline{s}_1$ ; otherwise assign it also to the new set. Next, check to see if $ \underline{s}_2$ is linearly dependent on the vectors in the new set. If it is (i.e., $ \underline{s}_2$ is some linear combination of the vectors kept so far), then discard it; otherwise assign it also to the new set. When this procedure terminates after processing $ \underline{s}_{M-1}$ , the new set will contain only linearly independent vectors that span the original space. $ \Box$
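
This sifting procedure translates directly into code. Below is a minimal numpy sketch (my own illustration; it uses a matrix-rank test, one of several reasonable numerical dependence checks):

\begin{verbatim}
import numpy as np

def independent_subset(vectors):
    """Keep each vector only if it is not a linear
    combination of the vectors kept so far."""
    kept = []
    for s in vectors:
        candidate = kept + [s]
        # s is independent of `kept` iff appending it
        # increases the rank by one.
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            kept.append(s)
    return kept

vecs = [np.array([1., 0., 0.]),
        np.array([2., 0., 0.]),   # dependent on the first
        np.array([0., 1., 0.]),
        np.array([1., 1., 0.])]   # dependent on those kept
basis = independent_subset(vecs)  # keeps (1,0,0) and (0,1,0)
\end{verbatim}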



Definition: A set of linearly independent vectors which spans a vector space is called a basis for that vector space.



Definition: The set of coordinate vectors in $ \mathbb{C}^N$ is called the natural basis for $ \mathbb{C}^N$ , where the $ n$ th basis vector is

$\displaystyle \underline{e}_n = [\;0\;\;\cdots\;\;0\;\underbrace{1}_{\mbox{$n$th}}\;\;0\;\;\cdots\;\;0].$



Theorem: The linear combination expressing a vector in terms of basis vectors for a vector space is unique.



Proof: Suppose a vector $ x\in\mathbb{C}^N$ can be expressed in two different ways as a linear combination of basis vectors $ \underline{s}_0,\ldots,\underline{s}_{N-1}$ :

\begin{eqnarray*}
x&=& \alpha_0 \underline{s}_0 + \cdots + \alpha_{N-1}\underline{s}_{N-1} \\
&=& \beta_0 \underline{s}_0 + \cdots + \beta_{N-1}\underline{s}_{N-1}
\end{eqnarray*}

where $ \alpha_i\neq\beta_i$ for at least one value of $ i\in[0,N-1]$ . Subtracting the two representations gives

$\displaystyle \underline{0}= (\alpha_0 - \beta_0)\underline{s}_0 + \cdots + (\alpha_{N-1} - \beta_{N-1})\underline{s}_{N-1}.$

Since the basis vectors are linearly independent, the only linear combination of them equal to the zero vector is the one in which every coefficient is zero. Hence, $ \alpha_i - \beta_i = 0$ , i.e., $ \alpha_i=\beta_i$ , for all $ i=0,1,2,\ldots,N-1$ , contradicting the assumption that the two representations differ. $ \Box$
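
Numerically, the unique coordinates of a vector relative to a basis can be found by solving one linear system. A minimal sketch (mine, with the basis vectors stacked as the columns of a matrix $ S$ ):

\begin{verbatim}
import numpy as np

N = 4
rng = np.random.default_rng(1)

# A random complex N x N matrix is full rank with probability
# one, so its columns form a basis for C^N.
S = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
assert np.linalg.matrix_rank(S) == N

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
alpha = np.linalg.solve(S, x)    # the unique coordinates of x
assert np.allclose(S @ alpha, x)
\end{verbatim}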

Note that while the linear combination relative to a particular basis is unique, the choice of basis vectors is not. For example, given any basis set in $ \mathbb{C}^N$ , a new basis can be formed by rotating all vectors in $ \mathbb{C}^N$ by the same angle. In this way, an infinite number of basis sets can be generated.

As we will soon show, the DFT can be viewed as a change of coordinates from coordinates relative to the natural basis in $ \mathbb{C}^N$ , $ \{\underline{e}_n\}_{n=0}^{N-1}$ , to coordinates relative to the sinusoidal basis for $ \mathbb{C}^N$ , $ \{\underline{s}_k\}_{k=0}^{N-1}$ , where $ \underline{s}_k(n)= e^{j\omega_k t_n}$ . The sinusoidal basis set for $ \mathbb{C}^N$ consists of length $ N$ sampled complex sinusoids at frequencies $ \omega_k=2\pi k f_s/N$ , $ k=0,1,2,\ldots,N-1$ . Any scaling of these vectors in $ \mathbb{C}^N$ by complex scale factors could also be chosen as the sinusoidal basis (i.e., any nonzero amplitude and any phase will do). However, for simplicity, we will only use unit-amplitude, zero-phase complex sinusoids as the Fourier ``frequency-domain'' basis set. To summarize this paragraph, the time-domain samples of a signal are its coordinates relative to the natural basis for $ \mathbb{C}^N$ , while its spectral coefficients are the coordinates of the signal relative to the sinusoidal basis for $ \mathbb{C}^N$ .
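
To preview this change of coordinates numerically (a hedged sketch of mine; note that $ \omega_k t_n = 2\pi k n/N$ when $ t_n = n/f_s$ ), we can build the sinusoidal basis explicitly and check that projecting onto it reproduces numpy's FFT:

\begin{verbatim}
import numpy as np

N = 8
n = np.arange(N)

# Sinusoidal basis: row k is s_k(n) = exp(j*2*pi*k*n/N)
S = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N)

x = np.random.randn(N)   # coordinates in the natural basis

# Coordinates relative to the sinusoidal basis:
# X[k] = <x, s_k> = sum_n x[n] conj(s_k(n)) -- the DFT of x.
X = np.conj(S) @ x
assert np.allclose(X, np.fft.fft(x))

# Inverse change of coordinates: x = (1/N) sum_k X[k] s_k
assert np.allclose(S @ X / N, x)
\end{verbatim}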



Theorem: Any two bases of a vector space contain the same number of vectors.



Proof: Left as an exercise (or see [49]).



Definition: The number of vectors in a basis for a particular space is called the dimension of the space. If the dimension is $ N$ , the space is said to be an $ N$ -dimensional space, or $ N$ -space.

In this book, we will only consider finite-dimensional vector spaces in any detail. However, the discrete-time Fourier transform (DTFT) and the Fourier transform (FT) both require infinite-dimensional basis sets, because there are infinitely many points in both the time and frequency domains. (See Appendix B for details regarding the FT and DTFT.)

