

Signal Reconstruction from Projections

We now know how to project a signal onto other signals. The next step is to learn how to reconstruct a signal $ x\in{\bf C}^N$ from its projections onto $ N$ different vectors $ \sv_k\in{\bf C}^N$ , $ k=0,1,2,\ldots,N-1$ . This will give us the inverse DFT operation (or the inverse of whatever transform we are working with).

As a simple example, consider the projection of a signal $ x\in{\bf C}^N$ onto the rectilinear coordinate axes of $ {\bf C}^N$ . The coordinates of the projection onto the zeroth coordinate axis are simply $ (x_0,0,\ldots,0)$ . The projection along coordinate axis $ 1$ has coordinates $ (0,x_1,0,\ldots,0)$ , and so on. The original signal $ x$ is then clearly the vector sum of its projections onto the coordinate axes:

\begin{eqnarray*}
x &=& (x_0,\ldots,x_{N-1})\\
&=& (x_0,0,\ldots,0) + (0,x_1,0,\ldots,0) + \cdots
+ (0,\ldots,0,x_{N-1})
\end{eqnarray*}
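As a quick numerical check (not part of the text, and using NumPy as an assumed tool), the decomposition above can be verified directly: keep one coordinate of a made-up signal at a time, then sum the resulting projections.

```python
import numpy as np

# A hypothetical 4-sample signal; any x in C^N works the same way.
x = np.array([3.0 + 1.0j, -2.0, 0.5j, 4.0])
N = len(x)

# Projection onto the n-th coordinate axis keeps only coordinate n:
# (0, ..., 0, x_n, 0, ..., 0).
projections = [np.eye(N)[n] * x[n] for n in range(N)]

# The vector sum of the projections recovers x exactly.
x_reconstructed = np.sum(projections, axis=0)
assert np.allclose(x_reconstructed, x)
```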

To make sure the previous paragraph is understood, let's look at the details for the case $ N=2$ . We want to project an arbitrary two-sample signal $ x= (x_0,x_1)$ onto the coordinate axes in 2D. A coordinate axis can be generated by multiplying any nonzero vector by scalars. The horizontal axis can be represented by any vector of the form $ \underline{e}_0=(\alpha,0)$ , $ \alpha\neq0$ , while the vertical axis can be represented by any vector of the form $ \underline{e}_1=(0,\beta)$ , $ \beta\neq 0$ . For maximum simplicity, let's choose

\begin{eqnarray*}
\underline{e}_0 &\isdef & [1,0], \\
\underline{e}_1 &\isdef & [0,1].
\end{eqnarray*}

The projection of $ x$ onto $ \underline{e}_0$ is, by definition,

\begin{eqnarray*}
{\bf P}_{\underline{e}_0}(x) &\isdef & \frac{\left<x,\underline{e}_0\right>}{\Vert\underline{e}_0\Vert^2} \underline{e}_0\\ [5pt]
&=& \left<x,\underline{e}_0\right> \underline{e}_0
= \left<[x_0,x_1],[1,0]\right> \underline{e}_0
= (x_0 \cdot \overline{1} + x_1 \cdot \overline{0}) \underline{e}_0
= x_0 \underline{e}_0\\ [5pt]
&=& [x_0,0].
\end{eqnarray*}

Similarly, the projection of $ x$ onto $ \underline{e}_1$ is

\begin{eqnarray*}
{\bf P}_{\underline{e}_1}(x) &\isdef & \frac{\left<x,\underline{e}_1\right>}{\Vert\underline{e}_1\Vert^2} \underline{e}_1\\ [5pt]
&=& \left<x,\underline{e}_1\right> \underline{e}_1
= \left<[x_0,x_1],[0,1]\right> \underline{e}_1
= (x_0 \cdot \overline{0} + x_1 \cdot \overline{1}) \underline{e}_1
= x_1 \underline{e}_1\\ [5pt]
&=& [0,x_1].
\end{eqnarray*}

The reconstruction of $ x$ from its projections onto the coordinate axes is then the vector sum of the projections:

$\displaystyle x= {\bf P}_{\underline{e}_0}(x) + {\bf P}_{\underline{e}_1}(x) = x_0 \underline{e}_0 + x_1 \underline{e}_1
\isdef x_0 \cdot [1,0] + x_1 \cdot [0,1] = (x_0,x_1)
$
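The $ N=2$ derivation above can be sketched in code. This is a minimal check (the signal values are made up for illustration), with the projection formula $ {\bf P}_{s}(x) = \frac{\left<x,s\right>}{\Vert s\Vert^2} s$ written out directly:

```python
import numpy as np

def project(x, s):
    """Orthogonal projection of x onto s: (<x, s> / ||s||^2) s.
    np.vdot(s, x) computes sum_n x[n] * conj(s[n]), which matches
    the inner product <x, s> used in the text."""
    return (np.vdot(s, x) / np.vdot(s, s)) * s

# The N = 2 example, with an arbitrary (made-up) test signal.
x  = np.array([2.0, 3.0 - 1.0j])
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

p0 = project(x, e0)   # x_0 * e_0 = [2, 0]
p1 = project(x, e1)   # x_1 * e_1 = [0, 3-1j]

# Reconstruction: the vector sum of the projections recovers x.
assert np.allclose(p0 + p1, x)
```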

The projection of a vector onto its coordinate axes is in some sense trivial because the very meaning of the coordinates is that they are scalars $ x_n$ to be applied to the coordinate vectors $ \underline{e}_n$ in order to form an arbitrary vector $ x\in{\bf C}^N$ as a linear combination of the coordinate vectors:

$\displaystyle x\isdef x_0 \underline{e}_0 + x_1 \underline{e}_1 + \cdots + x_{N-1} \underline{e}_{N-1}
$

Note that the coordinate vectors are orthogonal. Since they are also unit length, $ \Vert\underline{e}_n\Vert=1$ , we say that the coordinate vectors $ \{\underline{e}_n\}_{n=0}^{N-1}$ are orthonormal.
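The same reconstruction-by-projection idea carries over to any orthogonal basis, which is where the inverse DFT comes from. As a sketch (using NumPy, with a random test signal; the basis vectors $ s_k[n] = e^{j 2\pi k n/N}$ are the DFT sinusoids, which are orthogonal with $ \Vert s_k\Vert^2 = N$ rather than orthonormal):

```python
import numpy as np

N = 8
n = np.arange(N)
# DFT basis sinusoids: S[k, n] = exp(j 2 pi k n / N).
S = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N)

rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Project x onto each basis vector: <x, s_k> / ||s_k||^2 = <x, s_k> / N.
coeffs = np.array([np.vdot(s, x) / np.vdot(s, s) for s in S])

# The vector sum of the projections reconstructs x (the inverse DFT).
x_rec = coeffs @ S
assert np.allclose(x_rec, x)

# The unnormalized projection coefficients are exactly the DFT of x.
assert np.allclose(coeffs * N, np.fft.fft(x))
```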





``Mathematics of the Discrete Fourier Transform (DFT), with Audio Applications --- Second Edition'', by Julius O. Smith III, W3K Publishing, 2007, ISBN 978-0-9745607-4-8.
Copyright © 2014-10-23 by Julius O. Smith III
Center for Computer Research in Music and Acoustics (CCRMA),   Stanford University