MUSIC 424/EE: Signal Processing Techniques for Digital Audio Effects -- Digital signal processing methods for audio effects used in music mixing and mastering. Topics: dynamic range compression, reverberation and room impulse response measurement, equalization and filtering, panning and spatialization; digital emulation of analog processors and implementation of time-varying effects. Single-band and multiband compressors, limiters, noise gates, de-essers, convolutional and feedback delay network reverberators, parametric and linear-phase equalizers, wah-wah and envelope-following filters, flanging and phasing, distortion. Students develop effects algorithms of their own design. Prerequisites: MUSIC 320 or EE 102B, or equivalent; some familiarity with Matlab and C. 3-4 units, Spr (Abel, Berners)
Exposure to digital signal processing, including familiarity with the sampling theorem, digital filtering and the Fourier transform, at the level of Music 320 or EE 102B, is required. An understanding of digital signal processing at the level of Music 420 or EE 264 is helpful. Familiarity with the use of audio effects in mixing and mastering, such as is presented in Music 192, is also of benefit. Only a modest amount of Matlab or C programming experience is required for the homework and laboratory exercises.
David P. Berners is Chief Scientist of Universal Audio, Inc., a GRAMMY Award-winning hardware and software manufacturer for the professional audio market. At UA, Dr. Berners leads research and development in audio effects processing, including dynamic range compression, equalization, distortion and delay effects, specializing in the modeling of vintage analog equipment. He is also a consulting professor at CCRMA at Stanford University, where he teaches a graduate class in audio effects processing. Dr. Berners was previously with Aureal Semiconductor, where he developed pitch shifting, harmonizing and other audio signal processing algorithms, and he has held positions at the Lawrence Berkeley Laboratory, the NASA Jet Propulsion Laboratory and Allied Signal. He received his Ph.D. from Stanford University, his M.S. from Caltech and his S.B. from MIT, all in electrical engineering.
Jonathan S. Abel is a consulting professor at the Center for Computer Research in Music and Acoustics (CCRMA) in the Music Department at Stanford University, where his research interests include audio and music applications of signal and array processing, parameter estimation, and acoustics. From 1999 to 2007, Abel was a co-founder and chief technology officer of the GRAMMY Award-winning Universal Audio, Inc. He was a researcher at NASA/Ames Research Center, exploring topics in room acoustics and spatial hearing on a grant through the San Jose State University Foundation. Abel was also chief scientist of Crystal River Engineering, Inc., where he developed their positional audio technology, and a lecturer in the Department of Electrical Engineering at Yale University. As an industry consultant, Abel has worked with Apple, FDNY, LSI Logic, NRL, SAIC, Native Instruments and Sennheiser on projects in professional audio, GPS, medical imaging, passive sonar and fire department resource allocation. He holds Ph.D. and M.S. degrees from Stanford University, and an S.B. from MIT, all in electrical engineering. Abel is a Fellow of the Audio Engineering Society.