CCRMA Colloquium - The Phaser Filter: Its Properties and Musical Uses

Wed, 03/02/2011 - 5:15pm - 7:15pm
Brian Clark and Dana Massie from Audience, Inc. will be our guests, along with Max Mathews, in the CCRMA Colloquium Series on March 2, 2011 (Wednesday) at 5:15pm.


Two-pole "resonant" filters can be calculated on a digital computer in two different ways. The more usual biquad filter difference equation can be obtained from the physical model of a vibrating mass and spring. By contrast, the phaser filter model is a 2-dimensional rotating vector. The parameters of either filter are the frequency and damping (or Q) of the pole pair.
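As a rough sketch of these two formulations (not from the talk; the function names, the unity input gain, and the choice of taking the real part of the rotating vector as output are illustrative assumptions), one step of each filter might look like this in Python:

```python
import math
import cmath

def biquad_step(x, state, freq, r):
    # The usual biquad resonator difference equation:
    #   y[n] = x[n] + 2 r cos(w) y[n-1] - r^2 y[n-2]
    # freq is the pole angle w in radians/sample; r < 1 is the pole radius.
    y1, y2 = state
    y = x + 2.0 * r * math.cos(freq) * y1 - r * r * y2
    return y, (y, y1)

def phaser_step(x, z, freq, r):
    # The phaser filter: a two-dimensional rotating vector, kept as one
    # complex state z.  z[n] = r e^{jw} z[n-1] + x[n]; output its real part.
    z = r * cmath.exp(1j * freq) * z + x
    return z.real, z
```

Both recursions place a conjugate pole pair at radius r and angle ±w; the biquad stores two past real outputs, while the phaser stores a single complex state whose magnitude and angle directly encode the resonance's envelope and phase.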

If the parameters are constant while the filter is processing a signal, the two equations produce identical results. But if the parameters are changing functions of time, there is an important difference between the two filters. The biquad equation is parametrically unstable: if the frequency or damping of its pole pair changes while filtering a signal, the signal amplitude can change erratically or even blow up. By contrast, the phaser filter is unconditionally parametrically stable, and changes in its parameters will not produce a discontinuity in the signal. For this reason, phaser filters may be more useful in processing musical signals.
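A minimal numerical illustration of this parametric stability (the random frequency sweep and the amplitude bound are my own illustration, not from the talk): because the phaser's state update multiplies by r e^{jw}, whose magnitude is r < 1 for any frequency w, the state magnitude can never grow past max|x| / (1 - r), no matter how violently the frequency moves.

```python
import cmath
import random

def run_phaser(xs, freqs, r):
    # Phaser resonator whose center frequency may change on every sample.
    # Since |r * exp(1j*w)| = r < 1 for any w, the state obeys
    #   |z[n]| <= r |z[n-1]| + |x[n]|,
    # so it stays below max|x| / (1 - r) regardless of the frequency path.
    z = 0j
    out = []
    for x, w in zip(xs, freqs):
        z = r * cmath.exp(1j * w) * z + x
        out.append(z.real)
    return out

# Drive the filter with bounded noise while jumping the frequency at random.
rng = random.Random(0)
r = 0.999
xs = [rng.uniform(-1.0, 1.0) for _ in range(20000)]
ws = [rng.uniform(0.0, 3.14) for _ in range(20000)]
ys = run_phaser(xs, ws, r)
assert max(abs(y) for y in ys) < 1.0 / (1.0 - r)  # bound holds
```

Running the analogous experiment on the biquad recursion, whose time-varying coefficients do not preserve this norm, is where the erratic amplitude behavior described above appears.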

We demonstrate the instability of the biquad filter and the stability of the phaser filter, and show by a state-space analysis why they occur. The analysis also indicates that phaser filters are less sensitive than biquad filters to roundoff errors due to the finite word length of the computer.

For musical purposes, resonant filters are interesting only when banks containing many resonances are used together.  Today, computers are so fast that a modern laptop can compute more than 1000 phaser filters in real time.  We are only beginning to understand how to configure and tune the filter banks but already we have results we enjoy.

Resonances have always been important in music both in acoustic instruments and in concert halls. An intriguing possibility is using phaser filters to create a "virtual performance space" which can be dynamically retuned during a piece. 

A brief overview of hardware acceleration for phaser filters will also be given.


Brian Clark Bio
Brian Clark is a Senior DSP Software Engineer at Audience, Inc., a startup developing chips for audio enhancement on mobile phones based on models of the human auditory pathway.

His professional interests include DSP-based embedded systems design and optimizing DSP algorithms through hardware-software co-design.

Prior to working at Audience, Brian worked as a Software Engineer at TASCAM, Inc., developing DSP algorithms for digital audio products. Brian also worked on sampling synthesizers as a Software/Systems Design Engineer for E-mu Systems, Inc. As an Engineer at Alesis Inc., Brian developed hardware and software for dedicated audio DSP effects processors.

Brian graduated from Harvey Mudd College with a Bachelor's degree in Engineering. In his spare time, Brian enjoys collecting and learning to play different musical instruments, composing and recording music, and playing with Legos with his sons Ethan and Bradley.


Dana Massie Bio

Dana Massie is the Director of DSP Architecture at Audience, Inc., a startup developing chips for audio enhancement on mobile phones based on models of the human auditory pathway. His professional interests include DSP algorithm development and implementation, including VLSI architectures for DSP.

Dana has worked as manager of audio hardware at Apple Computer, as a DSP algorithm developer at Waves, Inc., and as a DSP algorithm developer at NeXT Computer, where he worked for Julius O. Smith.

Massie was Director of the Creative Technology Advanced Technology Center, where he led the development of the EMU10K1, a multimedia coprocessor for PCs that accelerated Environmental Audio Extensions, the 3D audio API that Massie evangelized at Creative. Before that, Massie was at E-mu Systems, a computer music instrument company that pioneered the dedicated sampler as a musical instrument.


Max Mathews Bio

I was born in Nebraska in 1926. After serving in World War II as a radar technician in the Navy, I studied electrical engineering at Caltech and MIT, where I received a Sc.D. degree in 1954. I worked at Bell Telephone Laboratories from 1955 to 1987 and in the Stanford University music department (CCRMA) from 1987 to 2005. I currently live in San Francisco and continue to do research at CCRMA. I studied violin performance until I finished high school and continue to enjoy playing as an amateur.

My job interests focused on applications of computers: at MIT, analogue computers to intercept and destroy attacking missiles, and at Bell Labs, digital computers to design speech coders.

In 1957, with the encouragement of my boss, John R. Pierce, I started writing the programs Music 1 through Music 5 to synthesize music on a digital computer. These are open-source programs. Music 5 (1967) was written in Fortran. Music 5, together with my book The Technology of Computer Music (MIT Press, 1969), made computer music synthesis widely accessible.

In 1968, F. R. Moore and I built "Groove," one of the first real-time performance synthesizers to involve a digital computer.

From 1987 on, much of my time was spent making new controllers for real-time synthesis. My efforts, together with Tom Oberheim and Bob Boie, culminated in the Radio-Baton, which tracks two batons as they move in three-dimensional space, allowing one performer to control a music performance with six independent variables.

In recent years I have been experimenting with "phaser filters" to synthesize more beautiful timbres. These filters add high-Q resonances to music. The frequencies and Q's of these resonances can be dynamically changed during a performance.

Open to the Public