Braun:320C

From CCRMA Wiki

Revision as of 15:19, 9 May 2021

Final Project Proposal:

I'm a fan of the Serum synthesizer, but I've run into issues using it with my DawDreamer. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.
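To make the routing requirement concrete, here is a minimal sketch of the kind of Python-facing interface I have in mind. Every name here (ModMatrix, connect, and so on) is hypothetical, not Serum's or DawDreamer's actual API:

```python
# Hypothetical routing API: a modulation matrix whose entries (not just
# their amounts) can be created and destroyed from Python. All class and
# method names are invented for illustration.

class ModMatrix:
    def __init__(self):
        # Each connection: (source, destination) -> amount in [-1, 1].
        self._connections = {}

    def connect(self, source, destination, amount=0.0):
        """Create a new routing, e.g. connect('env1', 'osc2_pan', 0.5)."""
        self._connections[(source, destination)] = amount

    def disconnect(self, source, destination):
        """Remove a routing entirely (the step Serum presets don't allow)."""
        self._connections.pop((source, destination), None)

    def set_amount(self, source, destination, amount):
        # Adjusting the amount of an existing entry is the only step
        # currently possible with Serum + DawDreamer.
        self._connections[(source, destination)] = amount

    def connections(self):
        return dict(self._connections)

matrix = ModMatrix()
matrix.connect('env1', 'osc2_pan', 0.5)   # route Envelope 1 -> Osc 2 panning
matrix.set_amount('env1', 'osc2_pan', 0.8)
```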

The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.

The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.

To summarize, I plan on making a wavetable synthesizer with many features:

  • 2D Wavetable oscillators
  • The 2D wavetables can be set via Python or some C++ API.
  • Polyphony
  • Various filters and FX
  • API for modular routing (modular synthesis)
  • LFOs and Envelopes whose outputs are accessible
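As a sketch of the first two features, here is a toy 2D wavetable reader in NumPy: the frames are provided as a plain array (so they could come from Python), and a sample is read by bilinearly interpolating across phase and morph position. This is my own illustration, not any particular synth's implementation:

```python
import numpy as np

def wavetable_2d(frames, phase, position):
    """Read one sample from a 2D wavetable with bilinear interpolation.

    frames:   (num_frames, frame_len) array of single-cycle waveforms
    phase:    position within the cycle, in [0, 1)
    position: morph position across frames, in [0, 1]
    """
    num_frames, frame_len = frames.shape
    # Fractional indices along both axes.
    x = phase * frame_len
    y = position * (num_frames - 1)
    x0, y0 = int(x) % frame_len, int(y)
    x1, y1 = (x0 + 1) % frame_len, min(y0 + 1, num_frames - 1)
    fx, fy = x - int(x), y - y0
    # Bilinear blend of the four neighboring table entries.
    top = frames[y0, x0] * (1 - fx) + frames[y0, x1] * fx
    bot = frames[y1, x0] * (1 - fx) + frames[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def render(frames, freq, position, sr=44100, n=1024):
    """Render n samples at a fixed frequency and morph position."""
    phases = (np.arange(n) * freq / sr) % 1.0
    return np.array([wavetable_2d(frames, p, position) for p in phases])

# Example: morph between a sine frame and a square-ish frame.
t = np.linspace(0, 1, 2048, endpoint=False)
frames = np.stack([np.sin(2 * np.pi * t), np.sign(np.sin(2 * np.pi * t))])
out = render(frames, freq=220.0, position=0.25)
```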

As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want some of the basic sampler features, such as ADSR envelopes to control volume, panning, filter cutoffs, etc.

I need to look at the inner workings of surge-python because it might have a good example of setting modular routing via Python.

Weeks 1-3:

  • I learned Faust.
  • I got FaucK (https://ccrma.stanford.edu/~rmichon/fauck/) working on Windows. A pull request for the chugins repository is here: https://github.com/ccrma/chugins/pull/49
  • I made a basic IDE for Faust inside TouchDesigner: TD-Faust (https://github.com/DBraun/TD-Faust/). The basic workflow is the same as FaucK's. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same APIUI (https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h) as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the MidiUI (https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h). This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.

Week 4:

  • I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR, which got merged: https://github.com/grame-cncm/faust/pull/576
  • I looked at the source code of Vital (https://github.com/mtytel/vital/), which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.
  • I browsed some sampler repositories such as CTAG-JUCE-Sampler (https://github.com/NiklasWan/CTAG-JUCE-Sampler) and JUCE_simple_sampler (https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271).
  • Ultimately I decided to clone JUCE's official SamplerPluginDemo (https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h). I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (MIDI Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.
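The pitch mechanism in that demo boils down to reading the source sample at a speed of 2^((note - root)/12), with linear interpolation between source samples. A rough NumPy paraphrase of the technique (not JUCE's actual code):

```python
import numpy as np

def repitch(sample, midi_note, root_note=60):
    """Play back `sample` at a speed set by the MIDI note, with linear
    interpolation between source samples. No envelopes, no filters:
    the same idea as JUCE's SamplerPluginDemo, sketched in Python."""
    ratio = 2.0 ** ((midi_note - root_note) / 12.0)  # playback speed
    n_out = int(len(sample) / ratio)
    # Fractional read positions into the source sample.
    pos = np.arange(n_out) * ratio
    i = pos.astype(int)
    frac = pos - i
    i1 = np.minimum(i + 1, len(sample) - 1)
    return sample[i] * (1 - frac) + sample[i1] * frac

# One octave up (note 72 vs. root 60) doubles the speed, halving the length.
src = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
up = repitch(src, 72)
```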

Week 5:

  • I made a public repo for Sampler (https://github.com/DBraun/Sampler).
  • I added Sampler as a submodule to DawDreamer (https://github.com/DBraun/DawDreamer/). Now I can use Python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to a WAV file.
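For reference, the volume ADSR those parameters control is just a piecewise-linear gain curve applied to the rendered sample. A plain-NumPy sketch of the idea (an illustration, not Sampler's exact implementation; it assumes attack + decay fit within the note-on span):

```python
import numpy as np

def adsr(n_on, n_release, attack, decay, sustain, release, sr=44100):
    """Piecewise-linear ADSR gain curve. attack/decay/release are in
    seconds, sustain is a level in [0, 1]; n_on samples of note-on are
    followed by the release tail (truncated to n_release samples)."""
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    env = np.ones(n_on)
    env[:a] = np.linspace(0.0, 1.0, a, endpoint=False)       # attack ramp
    env[a:a + d] = np.linspace(1.0, sustain, d, endpoint=False)  # decay
    env[a + d:] = sustain                                     # sustain hold
    tail = np.linspace(sustain, 0.0, min(r, n_release))       # release
    return np.concatenate([env, tail])

sr = 44100
gain = adsr(n_on=sr, n_release=sr // 4, attack=0.01,
            decay=0.1, sustain=0.7, release=0.25, sr=sr)
# Shape one second of a 220 Hz tone (plus its release tail) with the curve.
sig = np.sin(2 * np.pi * 220 * np.arange(len(gain)) / sr) * gain
```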

Week 6:

  • I made a Chugin for the Sampler. The pull request is here: https://github.com/ccrma/chugins/pull/50. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built-in ADSRs for volume and filter cutoff.
  • I added a Faust Processor to DawDreamer (https://github.com/DBraun/DawDreamer/). This is a great way to do EQ and multiband sidechain compression in DawDreamer, which makes it a much better tool for researching automatic music mastering.
  • I studied the source code of Surge (https://github.com/surge-synthesizer/surge/) to understand how it does modular synthesis but abandoned it in favor of Vital (https://github.com/mtytel/vital/). Some of the most interesting, important, and impressively written files in Vital might be processor_router.cpp (https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191) and processor.cpp (https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp). For processor_router.cpp, I've linked to the section that figures out whether a requested modular connection will lead to a feedback loop and, if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/removing modular routings. As a high-level overview, synth_plugin.cpp (https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170) calls processAudio, which gets an engine in synth_base.cpp (https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584) to call process (https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92) in processor_router.cpp.
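A toy Python version of the feedback-detection idea (my reconstruction of the concept; the names and data structures are not Vital's):

```python
# Before adding a modulation connection, check whether it would close a
# cycle in the processing graph; if so, route it through a "feedback"
# entry (conceptually, a node that delays its input by one block) instead
# of connecting directly.

class Router:
    def __init__(self):
        self.edges = {}       # node -> set of downstream nodes
        self.feedback = []    # connections that must go through a delay

    def _reaches(self, start, target):
        """Depth-first search: can `target` be reached from `start`?"""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.edges.get(node, ()))
        return False

    def connect(self, source, dest):
        if self._reaches(dest, source):
            # source -> dest would close a loop: break it with feedback.
            self.feedback.append((source, dest))
        else:
            self.edges.setdefault(source, set()).add(dest)

r = Router()
r.connect('env1', 'filter_cutoff')   # ordinary forward connection
r.connect('filter_cutoff', 'lfo1')
r.connect('lfo1', 'env1')            # would close a loop -> feedback entry
```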

Anti-Alias techniques

https://forum.juce.com/t/antialiasing-a-synth/44527
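One widely used technique for anti-aliasing classic oscillator shapes is PolyBLEP, which patches the samples around each waveform discontinuity with a polynomial approximation of a band-limited step. A rough NumPy sketch for a sawtooth (my own illustration, not taken from the thread):

```python
import numpy as np

def saw_polyblep(freq, sr=44100, n=4096):
    """Naive sawtooth with a PolyBLEP correction smoothing each
    discontinuity. A sketch of the technique, not production code."""
    dt = freq / sr                       # phase increment per sample
    phase = (np.arange(n) * dt) % 1.0
    naive = 2.0 * phase - 1.0            # naive (aliasing) sawtooth

    # Polynomial band-limited step: only samples within one phase
    # increment of the wrap at phase 0/1 are corrected.
    blep = np.zeros(n)
    lo = phase < dt                      # just after the wrap
    hi = phase > 1.0 - dt                # just before the wrap
    t_lo = phase[lo] / dt
    t_hi = (phase[hi] - 1.0) / dt
    blep[lo] = 2.0 * t_lo - t_lo * t_lo - 1.0
    blep[hi] = t_hi * t_hi + 2.0 * t_hi + 1.0
    return naive - blep

out = saw_polyblep(1000.0)
```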