From CCRMA Wiki
Revision as of 21:30, 29 May 2021 by Braun (Talk | contribs)


Final Project Proposal:

I'm a fan of the Serum synthesizer, but I've run into issues using it with my DawDreamer. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.
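To make the goal concrete, here is a minimal sketch of the kind of routing API I have in mind. Every name here is hypothetical and illustrative; it is not an actual Serum or DawDreamer interface. The point is that the routings themselves, not just their amounts, are editable data.

```python
# Hypothetical sketch: a modulation matrix whose routings (not just the
# per-entry amounts) can be edited from Python. All names are illustrative.

class ModMatrix:
    """Modulation routings stored as editable (source, destination) pairs."""

    def __init__(self):
        self._routes = {}  # (source, destination) -> modulation amount

    def connect(self, source: str, destination: str, amount: float = 0.0):
        """Add a routing -- the part a baked preset won't let you change."""
        self._routes[(source, destination)] = amount

    def disconnect(self, source: str, destination: str):
        """Remove a routing entirely."""
        self._routes.pop((source, destination), None)

    def set_amount(self, source: str, destination: str, amount: float):
        """Change only the amount -- the part presets already allow."""
        if (source, destination) not in self._routes:
            raise KeyError("no such routing; call connect() first")
        self._routes[(source, destination)] = amount

    def routes(self):
        return dict(self._routes)


matrix = ModMatrix()
matrix.connect("env1", "osc2_pan", amount=0.5)   # decide the routing itself
matrix.set_amount("env1", "osc2_pan", 0.75)      # tweak the amount
matrix.disconnect("env1", "osc2_pan")            # impossible with a baked preset
```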

The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.

The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.
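As a toy illustration of "access to all intermediate signals" (not an existing API), a render function could return every modulator stream at audio rate alongside the main output, rather than only the final mix:

```python
import math

def render_with_taps(num_samples: int, sample_rate: float = 48000.0):
    """Illustrative only: render a tone while also returning audio-rate
    streams of every modulator, so visuals can be driven by the LFO and
    envelope directly rather than re-deriving them from the mix."""
    lfo, env, out = [], [], []
    for n in range(num_samples):
        t = n / sample_rate
        lfo_v = math.sin(2 * math.pi * 5.0 * t)   # 5 Hz LFO
        env_v = math.exp(-3.0 * t)                # simple decaying envelope
        sample = env_v * math.sin(2 * math.pi * 440.0 * t) * (0.5 + 0.5 * lfo_v)
        lfo.append(lfo_v)
        env.append(env_v)
        out.append(sample)
    # Every intermediate signal is exposed, not just the stereo output.
    return {"lfo1": lfo, "env1": env, "main": out}

signals = render_with_taps(256)
```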

To summarize, I plan on making a wavetable synthesizer with many features:

  • 2D Wavetable oscillators
  • The 2D wavetables can be set via Python or some C++ API.
  • Polyphony
  • Various filters and FX
  • API for modular routing (modular synthesis)
  • LFOs and Envelopes whose outputs are accessible
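The core of the first two features is a 2D wavetable read: interpolate within a frame by phase, and between adjacent frames by a "position" parameter. The sketch below shows that bilinear lookup in plain Python; it is a simplification for illustration, not the planned C++ implementation.

```python
import math

def make_table(num_frames=4, frame_size=64):
    """Build a tiny 2D wavetable morphing from a sine to a brighter wave.
    Illustrative stand-in for a table set via Python or a C++ API."""
    frames = []
    for f in range(num_frames):
        mix = f / (num_frames - 1)
        frame = [
            (1 - mix) * math.sin(2 * math.pi * i / frame_size)
            + mix * math.sin(2 * math.pi * 3 * i / frame_size)
            for i in range(frame_size)
        ]
        frames.append(frame)
    return frames

def wavetable_lookup(frames, phase: float, position: float) -> float:
    """Bilinear read: linear interpolation within a frame (phase in [0, 1))
    and between adjacent frames (position in [0, 1])."""
    frame_size = len(frames[0])
    fpos = position * (len(frames) - 1)
    f0 = int(fpos)
    f1 = min(f0 + 1, len(frames) - 1)
    ffrac = fpos - f0

    x = (phase % 1.0) * frame_size
    i0 = int(x)
    i1 = (i0 + 1) % frame_size   # wrap around the frame
    xfrac = x - i0

    def read(frame):
        return frame[i0] + xfrac * (frame[i1] - frame[i0])

    return (1 - ffrac) * read(frames[f0]) + ffrac * read(frames[f1])

table = make_table()
sample = wavetable_lookup(table, phase=0.25, position=0.5)
```

A real oscillator would advance `phase` by `frequency / sample_rate` each sample and would need anti-aliasing (e.g. band-limited frames per octave), which this sketch omits.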

As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.
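For the sampler's envelopes, the basic building block is an ADSR evaluated over time. A minimal linear sketch (assuming attack, decay, and release are all positive; not the actual JUCE code) might look like:

```python
def adsr_level(t, attack, decay, sustain):
    """Envelope level during the held (pre-release) part of the note."""
    if t < attack:
        return t / attack
    if t < attack + decay:
        return 1.0 + (t - attack) / decay * (sustain - 1.0)
    return sustain

def adsr_value(t, attack, decay, sustain, release, note_off):
    """Linear ADSR: t and all times in seconds, sustain a level in [0, 1].
    After note_off, release from whatever level the envelope had reached.
    The same curve could drive volume, panning, or a filter cutoff."""
    if t < note_off:
        return adsr_level(t, attack, decay, sustain)
    level = adsr_level(note_off, attack, decay, sustain)
    return max(0.0, level * (1.0 - (t - note_off) / release))

# Attack 0.1 s, decay 0.1 s, sustain 0.5, release 0.2 s; note off at 1.0 s.
halfway_up = adsr_value(0.05, 0.1, 0.1, 0.5, 0.2, 1.0)  # mid-attack
```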

I need to look at the inner workings of surge-python because it might have a good example of setting modular routing via Python.

Weeks 1-3:

  • I learned Faust.
  • I got FaucK working on Windows. A pull request for the chugins repository is here.
  • I made a basic IDE for Faust inside TouchDesigner: TD-Faust. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same APIUI as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the MidiUI. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.

Week 4:

  • I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged:
  • I looked at the source code of Vital, which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.
  • I browsed some repositories of Samplers such as CTAG-JUCE-Sampler and JUCE_simple_sampler.
  • Ultimately I decided to clone JUCE's official SamplerPluginDemo. I encountered an issue with not hearing audio, but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (MIDI Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing whether I can add more modular features, like routing an ADSR to a filter cutoff.
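The playback scheme in SamplerPluginDemo boils down to stepping through the source buffer at a rate derived from the MIDI note and linearly interpolating between source samples. This Python sketch is a simplification of that idea, not the demo's actual C++:

```python
def repitch(sample, midi_note, root_note=60):
    """Play back `sample` at a speed set by the MIDI note, with linear
    interpolation between neighboring source samples. Notes above the
    root read faster (higher pitch, shorter output), and vice versa."""
    ratio = 2.0 ** ((midi_note - root_note) / 12.0)  # semitones -> speed
    out = []
    pos = 0.0
    while pos < len(sample) - 1:
        i = int(pos)
        frac = pos - i
        out.append(sample[i] + frac * (sample[i + 1] - sample[i]))
        pos += ratio
    return out

octave_up = repitch([0.0, 1.0, 0.0, -1.0, 0.0], midi_note=72)  # plays 2x speed
```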

Week 5:

  • I made a public repo for Sampler.
  • I added Sampler as a submodule to DawDreamer. Now I can use Python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to a WAV file.

Week 6:

  • I made a Chugin for the Sampler. The pull request is here: For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built-in ADSR envelopes for volume and filter cutoff.
  • I added a Faust Processor to DawDreamer. This is a great way to apply EQ and multiband sidechain compression in DawDreamer, which makes it a much better tool for researching automatic music mastering.
  • I studied the source code of Surge to understand how it does modular synthesis, but abandoned it in favor of Vital. Some of the most interesting, important, and impressively written files in Vital might be processor_router.cpp and processor.cpp. For processor_router.cpp, I've linked to the section that figures out whether a requested modular connection will lead to a feedback loop and, if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/removing modular routings. Another high-level overview is that synth_plugin.cpp calls processAudio, which gets an engine in synth_base.cpp to call process in processor_router.cpp.

Week 7:

  • I learned how to use Rubberband for time-stretching and pitch-shifting. I made a WarpBuf Chugin that uses Rubberband for these features. It also parses Ableton asd files, which contain warp markers. I'd like to add Rubberband and this Ableton warp-marker parsing feature to DawDreamer. That would allow me to use Python to align tracks of different tempos, which would be great for generating datasets for music transcription and source separation.
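The essential job of warp markers is mapping between beat positions and seconds. Each marker pairs a time in seconds with a beat position, and the tempo is constant between markers, so the mapping is piecewise-linear. This is a simplified sketch of that conversion (the real asd format also carries an end tempo and loop settings):

```python
def beat_to_seconds(markers, beat):
    """Map a beat position to seconds using Ableton-style warp markers.
    `markers` is a list of (seconds, beats) pairs; between two markers
    the tempo is constant, so we interpolate linearly."""
    markers = sorted(markers, key=lambda m: m[1])
    for (s0, b0), (s1, b1) in zip(markers, markers[1:]):
        if b0 <= beat <= b1:
            frac = (beat - b0) / (b1 - b0)
            return s0 + frac * (s1 - s0)
    raise ValueError("beat outside the warped region")

# A recording that drifts: beat 0 at 0.0 s, beat 4 at 2.1 s, beat 8 at 4.0 s.
markers = [(0.0, 0.0), (2.1, 4.0), (4.0, 8.0)]
sec = beat_to_seconds(markers, 6.0)  # halfway through the second segment
```

Given this mapping, a time-stretcher like Rubberband supplies the varying stretch ratio needed to play the clip back at a fixed target tempo.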

Week 8:

  • I published a repo for parsing Ableton warp files: AbletonParsing.
  • I added Rubberband to a branch of DawDreamer. It's working well. I can control the ordinary clip settings such as loop start, loop end, loop on/off, start marker, end marker. I can also give the clip a "clip start" parameter in beats relative to the entire audio render. Soon I'll add a "clip end" parameter.

Week 9:

  • I worked on adding a function to add multiple instances of a clip in the Rubberband branch of DawDreamer. It's not quite working yet.

Week 10:

  • Preparing presentation.

New abstract:

Integrating JUCE, Faust, ChucK, Python, TouchDesigner

I'll summarize my projects which integrate JUCE, Faust, ChucK, Python, and TouchDesigner. That's 10 (4+3+2+1) combinations, and I'll cover 7 of them. I'll emphasize my Python framework which sets up Faust for deep learning frameworks. In future projects it could be used for intelligent music production, mastering, reverb matching, and more.

Anti-Alias techniques

Modular synthesis