
Music 320C: Software Projects in Music/Audio Signal Processing

My Ever-Evolving 320C Project Idea

I want to make a quirky, visually elaborate, "all-in-one," "one-stop" Vocal FX plugin for singer-songwriters, rappers, and other vocalists.


I want the UI to feature many bright colors, shapes, and aesthetically/socially meaningful artifacts, and to require unconventional (but fun and intuitive) means of interaction to control the Vocal FX chain. Ideally there will be no numbers, sliders, knobs, meters, or other traditional plugin UI elements in my plugin. This is inspired by my interest in developing unique new audiovisual interactive interfaces for music consumption/distribution, by being a singer-songwriter myself, and by often needing to work on vocal production. Check out Hairdresser and MP5 for some modal and aesthetic reference.


Week 1

In week 1, I thought a lot about what I wanted to make for this course. I was enrolled in CS 448Z: Physically-Based Animation and Sound with Doug James and imagined that I might make a JUCE plugin that visualized a few interesting physical models of sound and graphics. This felt like a great idea, but I soon learned that my physics and diff eq skills were a bit lacking. My lack of physics fundamentals and an already packed quarter schedule compelled me to drop 448Z (sorry, Professor James!) and redouble my efforts on 320C.


I still desired to make a JUCE plugin that combined audio and visuals. I often make web-based audiovisual interactive media that straddles the line between passive listening and active gaming (see Hairdresser and MP5). I also focus a lot on making my vocals sound good (read: hopefully decent) when I write/produce songs (see Sunny Day or Hairdresser, among other tunes). Because of my interest in vocal production, I developed a custom Vocal EQ and a few spectral delay effects in Python for my final project in 320B last quarter. Combining all of these efforts and aesthetic leanings, I decided it would be fun to make an "all-in-one" vocal effects plugin with a highly abstract, aesthetically poignant, graphical UI for 320C this quarter. In this plugin, the user will navigate and modify a virtual 2D or combined 2D/3D world to apply effects to their vocals (or, I suppose, other instruments, too). The dream is that changing colors, shapes, and interesting interactions define the changing state of the FX chain. Traditional numbers, sliders, dials, and meters are not part of this world. I could imagine the contour of a horizon roughly representing the shape of an EQ, the number and rate of bubbles floating through a sky corresponding to the sound of a variable delay with feedback, colors shifting formants, and more.

Week 2

I got reacquainted with the JUCE plugin development toolchain: updated JUCE, downloaded the new Projucer, built some plugins in XCode, reminded myself how to debug plugins in my DAW via XCode, and more.


I had recently become interested in developing a de-esser plugin, so I did some research on how de-essers work. This article laid out how a de-esser might function in a very intuitive way, explaining the algorithm in terms of traditional plugins and FX chains. This is the algorithm it outlines:

  1. Apply a parametric EQ (peak filter) to the input signal with a center frequency between 5 and 8 kHz and a very high Q (like 100+). This will produce a signal that peaks really aggressively on sibilant "s" sounds, since "s" frequency content is packed into the 5-8 kHz range.
  2. Compress the original input signal with the sibilant filtered signal as the sidechain input. Because the sidechain input peaks aggressively on "s" sounds, the compressor will compress the original input only during those "s" sounds. The envelope of the compressor needs to be really quick to cut out "s" sounds as soon as they happen, before they are perceptible. In experiments, I had to set the compressor's attack to approximately 0.15 ms and release to 15-20 ms to get smooth, immediately responsive compression. (A rough sketch of this gain stage follows the list.)
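
Here is a minimal, framework-agnostic sketch of that sidechain gain stage: the peak-filtered sibilance signal drives an envelope follower whose output ducks the dry vocal. The threshold, ratio, and time constants below are illustrative placeholders, not the exact settings from my plugin.

```cpp
#include <cmath>

// One-pole envelope follower + downward gain computer for the de-esser.
// The sidechain sample is the output of the narrow peak filter around 5-8 kHz.
struct DeEsserGain
{
    float sampleRate  = 44100.0f;
    float attackMs    = 0.15f;   // very fast attack so "s" sounds are caught immediately
    float releaseMs   = 15.0f;
    float thresholdDb = -30.0f;  // illustrative threshold
    float ratio       = 8.0f;    // heavy downward compression above threshold
    float envelope    = 0.0f;    // smoothed sidechain level (linear)

    float processSample (float drySample, float sidechainSample)
    {
        // Smooth the rectified sidechain with asymmetric attack/release coefficients.
        float level = std::fabs (sidechainSample);
        float tauMs = (level > envelope) ? attackMs : releaseMs;
        float coeff = std::exp (-1000.0f / (tauMs * sampleRate));
        envelope = coeff * envelope + (1.0f - coeff) * level;

        // Convert to dB and compute gain reduction above threshold.
        float levelDb = 20.0f * std::log10 (envelope + 1.0e-9f);
        float overDb  = levelDb - thresholdDb;
        float gainDb  = (overDb > 0.0f) ? -overDb * (1.0f - 1.0f / ratio) : 0.0f;

        // Duck the dry vocal only while the sibilance detector is hot.
        return drySample * std::pow (10.0f, gainDb / 20.0f);
    }
};
```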


I'm not trying to build a whole plugin UI completely from scratch at this stage. Looking around for code to work from, I was lucky enough to find an open-source dynamic range compressor JUCE plugin by Creative Technologies. I ported a state-variable filter implementation (based on STK) from a previous project (LEAF) into this plugin and was able to get the de-esser algorithm above to work, along with a couple of extra knobs and buttons in the UI.
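
For reference, a generic Chamberlin-style state-variable filter looks roughly like the sketch below; its bandpass output, tuned narrowly into the 5-8 kHz region, is what feeds the compressor's sidechain. This is the textbook form, not the actual LEAF/STK code.

```cpp
#include <cmath>

// Chamberlin state-variable filter. Its bandpass output, tuned with a high Q
// somewhere in the 5-8 kHz region, works as the sibilance detector that feeds
// the de-esser's sidechain.
struct StateVariableFilter
{
    float sampleRate = 44100.0f;
    float f = 0.0f, q = 0.0f;        // frequency and damping coefficients
    float low = 0.0f, band = 0.0f;   // filter state

    void setCutoffAndQ (float cutoffHz, float Q)
    {
        constexpr float pi = 3.14159265f;
        f = 2.0f * std::sin (pi * cutoffHz / sampleRate);
        q = 1.0f / Q;
    }

    float processBandpass (float input)
    {
        low  += f * band;
        float high = input - low - q * band;
        band += f * high;
        return band;
    }
};
```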


This side project into de-essing will be useful in developing a collection of vocal FX for my 320C final project plugin.

Week 3 + 4

I spent a lot of time developing and fine-tuning my DeEsser plugin.

I also spent some time implementing an autotune effect using the LEAF library's tRetune class. I implemented a tuning system based on a selectable root pitch and scale.
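
The tuning logic itself is simple to sketch without reproducing LEAF's tRetune API from memory: the detected pitch is converted to a fractional MIDI note, snapped to the nearest degree of the chosen scale relative to the root, and converted back to a target frequency for the pitch shifter. A minimal version, with placeholder names:

```cpp
#include <array>
#include <cmath>

// Snap a detected frequency to the nearest note of a scale built on a root pitch.
// The pitch shifter (e.g. LEAF's tRetune) is then asked to shift by the ratio
// targetHz / detectedHz. The scale is given as semitone offsets from the root.
float snapToScale (float detectedHz, int rootMidi, const std::array<int, 7>& scaleSemitones)
{
    // Convert frequency to a (fractional) MIDI note number.
    float midi = 69.0f + 12.0f * std::log2 (detectedHz / 440.0f);

    // Find the scale degree (over all octaves) closest to the detected note.
    float bestMidi = midi;
    float bestDist = 1.0e9f;
    for (int octave = -2; octave <= 8; ++octave)
        for (int step : scaleSemitones)
        {
            float candidate = float (rootMidi + step + 12 * octave);
            float dist = std::fabs (candidate - midi);
            if (dist < bestDist) { bestDist = dist; bestMidi = candidate; }
        }

    // Back to Hz: this is the target pitch for the retuner.
    return 440.0f * std::pow (2.0f, (bestMidi - 69.0f) / 12.0f);
}

// Example: snap to C major (root = 60), major-scale offsets {0, 2, 4, 5, 7, 9, 11}:
// float targetHz = snapToScale (detectedHz, 60, {0, 2, 4, 5, 7, 9, 11});
```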

I also started working through some JUCE animation tutorials.

Week 5 + 6

I decided to roll with the analogy of a landscape for my vocal FX plugin.

So far I've implemented a couple features of this vocal FX landscape that I am (for now) calling "voxworld":

  1. A basic EQ "horizon" in which a landscape's horizon can be drawn and redrawn to set the peaks of a constant-Q equalizer with 20 bands (so far). I exported the DSP source for mth_octave_filterbank_demo(2) to C++ and ported the code into a JUCE plugin setup to interface with the LEAF library. (A rough sketch of the horizon-to-band-gain mapping follows this list.)
  2. Dots/"clouds" that appear, fade away, then reappear (continuously) when you click in the "sky." The more clouds, the higher the feedback gain in a delay line ("echo") effect.
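
Here is a minimal sketch of how a drawn horizon can drive the filter bank's band gains, assuming the horizon is stored as one y value per x pixel of the component; the function and parameter names are placeholders, not the actual voxworld code.

```cpp
#include <algorithm>
#include <vector>

// Map a drawn horizon (one screen-space y value per x pixel) to per-band gains
// for the 20-band constant-Q filter bank. A higher horizon means more boost
// for the band sampled at that horizontal position.
std::vector<float> horizonToBandGainsDb (const std::vector<float>& horizonY,
                                         int numBands, float viewHeight, float rangeDb)
{
    std::vector<float> gainsDb (numBands, 0.0f);
    if (horizonY.empty())
        return gainsDb;

    for (int band = 0; band < numBands; ++band)
    {
        // Sample the horizon at this band's horizontal position.
        float xNorm = (band + 0.5f) / float (numBands);
        int   x     = std::min (int (xNorm * horizonY.size()), int (horizonY.size()) - 1);

        // Screen y grows downward, so invert: top of the view = +rangeDb of boost.
        float yNorm = 1.0f - std::clamp (horizonY[x] / viewHeight, 0.0f, 1.0f);
        gainsDb[band] = (yNorm * 2.0f - 1.0f) * rangeDb;
    }
    return gainsDb;
}
```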

Next I need to think about ways to remove the clouds and integrate a variety of other effects. I want to incorporate autotune, choruses, harmonizers, basic reverbs and delays, formant shifters, etc. It will be a challenge to design intuitive ways of toggling these effects on and off, as well as changing their relative mixes and other internal parameters, without the use of traditional UI elements (sliders, knobs, numbers). I will do my best to deliver fun, easy-to-use/understand, visually compelling, and powerful control paradigms in the UI, but will likely also provide a super-editor mode that allows users to fine-tune certain effect details.

I have already noticed some performance issues when I add lots of dots. Currently I'm not using the GPU to draw, so my next step in this journey is to understand how to use OpenGL in a JUCE plugin and learn how to program some shaders... fun fun!
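
The usual first step in JUCE is to attach an OpenGLContext to the editor component so its paint() calls are rendered through the GPU; custom shaders come later via an OpenGLRenderer. A minimal sketch, where the class name is a placeholder and JuceHeader.h is the Projucer-generated header:

```cpp
#include <JuceHeader.h>

// Attaching an OpenGLContext makes JUCE render this component (and its
// children) through OpenGL instead of the software renderer, which helps
// when paint() is drawing hundreds of animated "clouds".
class VoxworldEditor : public juce::Component
{
public:
    VoxworldEditor()
    {
        openGLContext.setContinuousRepainting (true); // keep the clouds animating
        openGLContext.attachTo (*this);
        setSize (600, 400);
    }

    ~VoxworldEditor() override
    {
        openGLContext.detach(); // detach before the component is destroyed
    }

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::skyblue); // placeholder sky
    }

private:
    juce::OpenGLContext openGLContext;
};
```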

Week 7

I have been working on a variety of vocal effects using LEAF. These include:

  1. Autotune
  2. Harmonizer
  3. Formant Shifter