Here are some more thoughts I'm having as I compose my piece. I'm starting to explore medium- to long-time-scale gestures – that is, elements of the piece that arise from combining many different processed sound samples.
In this post, I'll discuss some decisions that have arisen as I begin constructing my piece. As I have mentioned, the main thrust of the piece is to showcase the different effects that can be achieved with uniform quantization in the MDCT domain.
As I began to craft a musical work featuring the spectral artifacts of MDCT-domain quantization, I decided to look at what the quantizer does to some very basic signals.
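To make the experiment concrete, here is a minimal sketch of the kind of processing chain involved: an MDCT analysis/synthesis pair (sine window, 50% overlap, satisfying the Princen–Bradley condition) with a uniform quantizer applied to the coefficients in between. This is my own illustrative implementation, not the exact code from the course or the piece; the function names and the frame length `N` are arbitrary choices.

```python
import numpy as np

def mdct(frame, N):
    # Forward MDCT: a 2N-sample frame yields N coefficients.
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ frame

def imdct(coeffs, N):
    # Inverse MDCT: N coefficients yield a 2N-sample frame (with time-domain
    # aliasing that cancels on overlap-add).
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return (2.0 / N) * (basis @ coeffs)

def quantize_mdct(x, N, delta):
    # Sine window satisfies the Princen-Bradley condition, so analysis
    # window + synthesis window + 50% overlap-add reconstructs exactly
    # when delta -> 0.
    win = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
    n_frames = len(x) // N - 1
    y = np.zeros(len(x))
    for i in range(n_frames):
        frame = x[i * N:(i + 2) * N] * win
        X = mdct(frame, N)
        Xq = delta * np.round(X / delta)  # uniform (midtread) quantizer
        y[i * N:(i + 2) * N] += imdct(Xq, N) * win
    return y
```

With a tiny step size `delta` the interior of the signal comes back essentially unchanged; cranking `delta` up coarsens the coefficients and produces exactly the kind of spectral distortion the piece is built around.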
As part of Music 220C at Stanford's CCRMA, I intend to explore the creative potential of frequency-domain quantization/dequantization. This follows on from, and was inspired by, the work I did this past winter in Marina Bosi's Perceptual Audio Coding course.
However, whereas the goal of that class was to teach students how to develop coders that achieve perceptual transparency at relatively low data rates, I approach the same or similar algorithms from the opposite perspective. I'm not concerned with reducing data rate at all; instead, I'm interested in generating as much distortion as possible, with the end goal of crafting an electroacoustic piece showcasing the various phenomena that arise.