As a first step toward crafting a musical work featuring the spectral artifacts of MDCT-domain quantization, I decided to look at what the quantizer does to some very basic signals. I started with a simple chirp created in Audacity: a pure sine wave at a constant amplitude of 0.7 whose frequency ramps linearly from 20 Hz to 20 kHz over 3 minutes. Quantizing the MDCT values to 8 bits, at window lengths of every power of 2 from 1024 to 131072, produced no spectral artifacts that I could perceive, either by listening or by inspecting the spectrograms.
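For anyone who wants to reproduce the test signal outside Audacity, a chirp like this is easy to sketch directly. This is a minimal sketch under my own assumptions: the sample rate and the phase convention are mine, not necessarily what Audacity uses internally.

```python
import numpy as np

sr = 44100          # assumed sample rate
dur = 180.0         # 3 minutes
f0, f1 = 20.0, 20000.0
amp = 0.7

t = np.arange(int(sr * dur)) / sr
# For a linear sweep the instantaneous frequency is f(t) = f0 + (f1 - f0) * t / dur;
# the phase is the integral of 2*pi*f(t).
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * dur))
chirp = amp * np.sin(phase)
```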
Of course, it occurred to me that such slow frequency variation would not yield noticeable distortion except at correspondingly huge window sizes, so I created a more condensed chirp, 30 seconds long. As you can see from the plot below, the faster frequency variation produced some nice distortion. The windows, which overlap by 50%, smear the frequency components in time; as we'll see, this smearing is especially interesting when it shows up as pre-echo. Moreover, this is a nice example of the reverb-like effect that is easy to produce with this setup. Finally, one can see that a lower bit depth tends to concentrate the frequency components highlighted by the effect.
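To make the processing chain concrete, here is a rough sketch of how I understand it: window each frame, take the MDCT, uniformly quantize the coefficients, then invert and overlap-add with 50% overlap. The sine window, the mid-tread quantizer, and the normalization are my assumptions, and this is a direct (slow) transform rather than an optimized one.

```python
import numpy as np

def mdct(frame):
    """Direct MDCT: a frame of length 2N -> N coefficients."""
    two_n = len(frame)
    n_half = two_n // 2
    n = np.arange(two_n)
    k = np.arange(n_half)[:, None]
    return np.sum(frame * np.cos(np.pi / n_half * (n + 0.5 + n_half / 2) * (k + 0.5)), axis=1)

def imdct(coeffs):
    """Inverse MDCT: N coefficients -> 2N time-aliased samples."""
    n_half = len(coeffs)
    n = np.arange(2 * n_half)[:, None]
    k = np.arange(n_half)
    basis = np.cos(np.pi / n_half * (n + 0.5 + n_half / 2) * (k + 0.5))
    return np.sum(coeffs * basis, axis=1) * (2.0 / n_half)

def quantize(c, bits):
    """Uniform mid-tread quantization of the coefficients to the given bit depth."""
    peak = max(np.max(np.abs(c)), 1e-12)
    levels = 2.0 ** (bits - 1)
    return np.round(c / peak * levels) / levels * peak

def mdct_quantize(x, window_len=1024, bits=8):
    """Analysis window -> MDCT -> quantize -> IMDCT -> synthesis window -> overlap-add."""
    n_half = window_len // 2
    # The sine window satisfies the Princen-Bradley condition, so the
    # time-domain aliasing cancels between 50%-overlapped frames.
    win = np.sin(np.pi * (np.arange(window_len) + 0.5) / window_len)
    y = np.zeros(len(x))
    for start in range(0, len(x) - window_len + 1, n_half):
        frame = x[start:start + window_len] * win
        coeffs = quantize(mdct(frame), bits)
        y[start:start + window_len] += imdct(coeffs) * win
    return y
```

At a high bit depth this chain should nearly reconstruct the interior of the signal; dropping the bit depth is what introduces the artifacts discussed here.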
In the above figure, the middle plot is slightly marred by some amplitude clipping in the output signal. Output amplitude that ends up too large or too small, depending on the bit depth and window size, is something I will have to look at closely.
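A quick diagnostic for that is to measure how much of the output sits at or beyond full scale. This is only a minimal sketch; the full-scale value of 1.0 and the tolerance are my assumptions.

```python
import numpy as np

def clipped_fraction(y, full_scale=1.0, eps=1e-6):
    """Fraction of samples whose magnitude reaches full scale -- a crude clipping indicator."""
    return float(np.mean(np.abs(y) >= full_scale - eps))
```

Tabulating this fraction across bit depths and window sizes would show where the output overshoots.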
Next, I created some white noise; here the main thing to notice is the frequency shaping. The plot below shows a very significant distortion of the signal's spectral characteristics.
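One way to put a number on this kind of spectral distortion is a spectral-flatness measure: the ratio of the geometric to the arithmetic mean of the power spectrum. The metric and the uniform flavor of the noise are my choices for illustration, not details from the original experiment.

```python
import numpy as np

def spectral_flatness(x):
    """Geometric mean over arithmetic mean of the power spectrum:
    higher for noise-like signals, near 0 for tonal ones."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p[1:-1]  # drop the DC and Nyquist bins
    return float(np.exp(np.mean(np.log(p + 1e-12))) / np.mean(p))

rng = np.random.default_rng(0)
noise = 0.7 * (2.0 * rng.random(44100) - 1.0)  # one second of uniform white noise (assumed)
```

Comparing this measure before and after MDCT-domain quantization would quantify both the frequency shaping seen here and any flattening effects.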
Below, I have also included a spectrogram of the audio example I played in class. Of note is the spectral flattening that occurs, which I will also investigate further.