

Modified Discrete Cosine Transform (MDCT)

The MDCT is a linear, orthogonal, lapped transform based on the idea of time-domain aliasing cancellation (TDAC). It was first introduced in [3] and further developed in [4].

The MDCT is critically sampled: although successive blocks overlap by 50%, a sequence represented by MDCT coefficients occupies no more space than the original data. Consequently, a single block of IMDCT output does not reproduce the original block on which the MDCT was performed, but rather a time-aliased version of it. When successive blocks of inverse-transformed data are added (still with 50% overlap), the aliasing errors introduced by the transform cancel out; hence the name TDAC. Thanks to this overlapping, the MDCT is very useful for quantization: it effectively removes the otherwise easily audible blocking artifacts between transform blocks. The definition of the MDCT used here (a slight modification of [5]) is:

\begin{displaymath}
X(m) = \sum_{k=0}^{n-1}{f(k)x(k)\cos(\frac{\pi}{2n}(2k+1+\frac{n}{2})(2m+1))},
\ {\rm for}\ m = 0..\frac{n}{2}-1
\end{displaymath} (20)
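As a sanity check, Eq. (20) can be transcribed directly into code. The sketch below is plain Python with my own function name and calling convention; a production coder would use an FFT-based fast algorithm rather than this $O(n^2)$ double loop:

```python
import math

def mdct(x, f):
    """Direct evaluation of Eq. (20): n windowed input samples -> n/2 coefficients."""
    n = len(x)
    return [sum(f[k] * x[k]
                * math.cos(math.pi / (2 * n) * (2 * k + 1 + n // 2) * (2 * m + 1))
                for k in range(n))
            for m in range(n // 2)]
```

Note that the transform is linear in $x(k)$, and that $n$ input samples yield only $n/2$ coefficients, which is where the critical sampling comes from.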

and the IMDCT:
\begin{displaymath}
y(p) = f(p)\frac{4}{n}\sum_{m=0}^{\frac{n}{2}-1}{X(m)\cos(\frac{\pi}{2n}
(2p+1+\frac{n}{2})(2m+1))},\ {\rm for}\ p=0..n-1
\end{displaymath} (21)
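Eq. (21) transcribes just as directly. Again a sketch under the same assumptions (my own naming, slow $O(n^2)$ evaluation), mapping $n/2$ coefficients back to $n$ windowed output samples:

```python
import math

def imdct(X, f):
    """Direct evaluation of Eq. (21): n/2 coefficients -> n windowed output samples."""
    n = 2 * len(X)
    return [f[p] * (4.0 / n)
            * sum(X[m]
                  * math.cos(math.pi / (2 * n) * (2 * p + 1 + n // 2) * (2 * m + 1))
                  for m in range(n // 2))
            for p in range(n)]
```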

where $f(x)$ is a window with certain properties (see [5]). The sine window
\begin{displaymath}
f(x) = \sin(\pi \frac{x}{n})
\end{displaymath} (22)

has the required properties and is used in this coder. The MDCT in the coder is performed with a length of $n = 512$; with 50% overlap, each block thus consumes 256 new input samples.
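The TDAC property can be verified numerically. The sketch below (plain Python, with a small block length for speed) runs windowed MDCT/IMDCT blocks at 50% overlap and overlap-adds the results. One caveat: the sine window is usually written in the literature with a half-sample offset, $\sin(\pi(k+1/2)/n)$, and it is that offset form, assumed here, that makes the aliasing cancellation exact.

```python
import math

def mdct(x, f):
    # Direct evaluation of Eq. (20): n samples -> n/2 coefficients.
    n = len(x)
    return [sum(f[k] * x[k]
                * math.cos(math.pi / (2 * n) * (2 * k + 1 + n // 2) * (2 * m + 1))
                for k in range(n))
            for m in range(n // 2)]

def imdct(X, f):
    # Direct evaluation of Eq. (21): n/2 coefficients -> n samples.
    n = 2 * len(X)
    return [f[p] * (4.0 / n)
            * sum(X[m]
                  * math.cos(math.pi / (2 * n) * (2 * p + 1 + n // 2) * (2 * m + 1))
                  for m in range(n // 2))
            for p in range(n)]

n, hop = 16, 8                                        # block length n, 50% overlap
f = [math.sin(math.pi * (k + 0.5) / n) for k in range(n)]  # half-sample-offset sine window

x = [math.sin(0.3 * t) + 0.5 * math.cos(1.1 * t) for t in range(64)]
x_padded = [0.0] * hop + x + [0.0] * hop              # pad so each sample sees two blocks

y = [0.0] * len(x_padded)
for start in range(0, len(x_padded) - n + 1, hop):
    block = imdct(mdct(x_padded[start:start + n], f), f)
    for p in range(n):                                # overlap-add: aliasing cancels
        y[start + p] += block[p]

err = max(abs(a - b) for a, b in zip(x, y[hop:hop + len(x)]))
print(err)                                            # error at machine-precision level
```

Each individual block comes back time-aliased, yet the overlap-added sum matches the input to within rounding error, which is exactly the cancellation the prose above describes.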




``An Experimental High Fidelity Perceptual Audio Coder'', by Bosse Lincoln <bosse@ccrma.stanford.edu> (Final Project, Music 420, Winter '97-'98).
Copyright © 2006-01-03 by Bosse Lincoln <bosse@ccrma.stanford.edu>
Center for Computer Research in Music and Acoustics (CCRMA),   Stanford University