Final Presentations Spring 2021

Organ Waveguide Synthesizer: A pipe dream to pull out all the stops

Champ Darabundit

The pipe organ presents an interesting problem for acoustic modeling, as each pipe requires a unique digital waveguide model. A single key can sound a multitude of different pipes, so the digital model must be highly efficient. The Faust programming language is well suited to this task, and I will present a Faust-driven digital acoustic model of a pipe organ.
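As an illustration of the digital-waveguide idea the abstract refers to, the sketch below implements a minimal Karplus-Strong-style waveguide in Python: a delay line whose length sets the pitch, with a lossy averaging filter at the reflection point. This is a generic teaching example, not the presenter's Faust organ model.

```python
import numpy as np

def waveguide_pipe(freq, sr=44100, dur=0.5, loss=0.995):
    """Minimal digital-waveguide sketch (Karplus-Strong style).
    Illustrative only; not the Faust pipe-organ model."""
    n = int(sr / freq)                      # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)      # noise-burst excitation
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        out[i] = line[i % n]
        # reflection point: average adjacent samples and apply loss
        line[i % n] = loss * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

tone = waveguide_pipe(440.0)                # half a second of A4
```

In Faust the same structure is a one-liner built from a feedback delay and a lowpass, which is part of why the language suits the many-pipes-per-key efficiency problem.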

Physical Model Based Harmonizer

Andrea Baldioceda

A C++ plugin/app made in JUCE that utilizes physical modeling of the vocal tract to create harmonies for an input voice signal. The physical model code is based on the Pink Trombone speech synthesis program by Neil Thapen.
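The project drives a physical vocal-tract model (Pink Trombone) to synthesize harmony voices; as a much simpler stand-in for the harmonizing step, the sketch below shifts an input signal's pitch by naive resampling. The function and its parameters are hypothetical illustrations, not the plugin's actual code.

```python
import numpy as np

def harmony_voice(x, semitones, sr=44100):
    """Naive pitch shift by resampling (shifts formants too).
    A stand-in illustration; the real project drives a
    Pink Trombone-style vocal-tract model instead."""
    ratio = 2.0 ** (semitones / 12.0)       # equal-tempered interval
    idx = np.arange(0, len(x), ratio)       # faster read positions
    return np.interp(idx, np.arange(len(x)), x)

t = np.arange(44100) / 44100
voice = np.sin(2 * np.pi * 220 * t)         # stand-in "voice": 220 Hz sine
third = harmony_voice(voice, 4)             # major third above (~277 Hz)
mix = voice[: len(third)] + third           # input plus harmony
```

A physical-model harmonizer avoids the "chipmunk" formant shift this naive approach produces, since the vocal-tract shape can be held fixed while the glottal pitch changes.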


VOXWORLD

Mike Mulshine

"VOXWORLD" is an accessible, aesthetically-motivated, all-in-one vocal effects plugin. It features a quirky UI that confidently leans away from conventional plugin interfaces by eschewing the use of sliders, buttons, textboxes, and other typical editing media. VOXWORLD supplies oft in-demand vocal effects like vocal-tailored EQ, delay lines, reverb, autotune, formant shifting, chorus fx, harmonizers and more, editable via an interface that feels more like a virtual landscape than a customizable toolkit.

Integrating JUCE, Faust, ChucK, Python, TouchDesigner

David Braun

I’ll summarize my projects that integrate JUCE, Faust, ChucK, Python, and TouchDesigner. These five tools admit 10 (4+3+2+1) pairwise combinations, and I’ll cover 7 of them. I’ll emphasize my Python framework, which makes Faust available to deep learning libraries. In future projects it could be used for intelligent music production, mastering, reverb matching, and more.
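The pairing count in the abstract is just the number of unordered pairs of five tools, C(5,2) = 4+3+2+1 = 10, which a quick check confirms:

```python
from itertools import combinations

tools = ["JUCE", "Faust", "ChucK", "Python", "TouchDesigner"]
pairs = list(combinations(tools, 2))    # all unordered pairings
print(len(pairs))                       # prints 10 = 4 + 3 + 2 + 1
```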


Joss Saltzman

An audio effects workflow using Python and JUCE+Faust that applies a customizable mapping from RGB-space color in visual media to automated transformations of scores and/or sound.
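One possible shape for such a color-to-sound mapping is sketched below; the specific channel-to-parameter assignments (red to gain, green to transposition, blue to filter cutoff) are hypothetical illustrations, not the project's actual mapping.

```python
def rgb_to_params(r, g, b):
    """Illustrative RGB -> audio-parameter mapping; the assignments
    here are hypothetical, not the project's actual scheme."""
    assert all(0 <= c <= 255 for c in (r, g, b))
    gain = r / 255.0                          # red   -> gain, 0.0..1.0
    transpose = round((g / 255.0) * 24 - 12)  # green -> -12..+12 semitones
    cutoff = 20.0 * (1000.0 ** (b / 255.0))   # blue  -> 20 Hz..20 kHz, log scale
    return gain, transpose, cutoff

params = rgb_to_params(255, 128, 0)           # pure orange pixel
```

The point of making the mapping a plain function is that it stays user-customizable: swapping in a different mapping changes the musical behavior without touching the Python/JUCE+Faust plumbing around it.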

Real-time deep-learning-based distortion emulation using a WaveNet-style network

Esteban Gomez

Distortion is an effect that modifies the timbre of a musical instrument recording by applying a nonlinear function to enrich its spectral content. The distinct nonlinearities of devices such as diodes, vacuum tubes, and transistors gave rise to many different distortion types. Because of its capacity for storing and processing signals in real time, the digital domain now predominates in music creation, and many effects rooted in analog signal processing have been modeled for use in it. Nevertheless, many musicians still prefer the timbral quality of analog effects over their digital counterparts, so faithfully modeling these effects in the digital domain has become a research topic. Deep learning methods' capacity to model nonlinear functions makes them suitable candidates for this task. This project explores the capabilities of a modified version of WaveNet, an architecture previously proposed for generating raw audio, when applied to distortion emulation.
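To make the "nonlinear function enriching spectral content" concrete, the sketch below applies the classic hand-designed memoryless waveshaper, a tanh soft clipper, and checks that new odd harmonics appear. This is the kind of fixed nonlinearity a WaveNet-style network would instead learn from data (with memory), not the project's trained model.

```python
import numpy as np

def soft_clip(x, drive=4.0):
    """Memoryless tanh waveshaper: a classic hand-designed distortion,
    shown as a baseline; a WaveNet-style model learns this mapping
    (with memory) from recordings instead."""
    return np.tanh(drive * x) / np.tanh(drive)

t = np.arange(1024) / 1024
clean = 0.8 * np.sin(2 * np.pi * 8 * t)     # pure tone at FFT bin 8
driven = soft_clip(clean)
spec = np.abs(np.fft.rfft(driven))          # odd harmonics appear at bins 24, 40, ...
```

Because tanh is an odd function, the added energy lands on odd harmonics only, which is why the spectrum check below compares the third harmonic (bin 24) against the nearly empty second (bin 16).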