Braun:320C (CCRMA Wiki), revised 2021-05-30 by Braun

Final Project Proposal:

I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters, such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset: the modulation matrix can't change, only the amount of modulation for each entry in the table. I want an API with which I can define the routing.
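As a sketch of the kind of routing API I have in mind (every name here is invented for illustration; neither Serum nor DawDreamer exposes anything like this today):

```python
# Hypothetical modulation-routing API: both the routes themselves and
# their amounts are settable, unlike a baked-in preset mod matrix.
class ModMatrix:
    def __init__(self):
        self.routes = {}  # (source, destination) -> amount

    def connect(self, source, destination, amount=1.0):
        """Create a modulation route, or update an existing one's amount."""
        self.routes[(source, destination)] = amount

    def disconnect(self, source, destination):
        self.routes.pop((source, destination), None)

    def amount(self, source, destination):
        return self.routes.get((source, destination), 0.0)

matrix = ModMatrix()
matrix.connect("env1", "osc2_pan", amount=0.5)  # the routing itself...
matrix.connect("env1", "osc2_pan", amount=0.8)  # ...and its amount both change
matrix.disconnect("env1", "osc2_pan")
```

The point is that `connect`/`disconnect` are first-class operations, not just a table of fixed rows whose amounts can be edited.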

The second major issue is that I can't load a wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.
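For contrast, here's the kind of object I'd want to hand to a synth from Python: a 2D wavetable built with NumPy. The (frames, samples) layout is my assumption about what a wavetable-setting API would accept:

```python
import numpy as np

# A 2D wavetable: `num_frames` single-cycle waveforms of `frame_size`
# samples, morphing from a sine to a sawtooth across the frame axis.
def make_wavetable(num_frames=64, frame_size=2048):
    phase = np.linspace(0.0, 1.0, frame_size, endpoint=False)
    sine = np.sin(2 * np.pi * phase)
    saw = 2.0 * phase - 1.0
    frames = [(1 - t) * sine + t * saw
              for t in np.linspace(0.0, 1.0, num_frames)]
    return np.stack(frames)  # shape: (num_frames, frame_size)

table = make_wavetable()
```

An oscillator would then scan one axis at the note's frequency while a modulator sweeps the other (the "2D" position).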

The third issue relates not to sound design but to using the sound signals for visual design. I'd like access to all of the intermediate signals in a modular synthesis setup: if there are envelopes and LFOs, I want an audio-rate stream of each of them in addition to the final stereo signal. I intend to use these signals later for real-time audio-reactive visual design.
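A minimal sketch of that tapping idea: each modulator renders blocks as usual, and a tap keeps the full audio-rate stream so it can be read back for visuals (all names here are mine, not any existing API):

```python
import math

# Wrap any block-rendering source so its full output history is retained.
class Tap:
    def __init__(self, render_fn):
        self.render_fn = render_fn
        self.history = []  # full audio-rate stream, block by block

    def render(self, num_samples):
        block = self.render_fn(num_samples)
        self.history.extend(block)
        return block

def make_lfo(freq, sr=44100):
    state = {"phase": 0.0}
    def render(n):
        out = []
        for _ in range(n):
            out.append(math.sin(2 * math.pi * state["phase"]))
            state["phase"] = (state["phase"] + freq / sr) % 1.0
        return out
    return render

lfo_tap = Tap(make_lfo(2.0))
lfo_tap.render(512)  # the block feeds the synth graph...
lfo_tap.render(512)  # ...while the tap accumulates the same stream
```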

To summarize, I plan to make a wavetable synthesizer with many features:
* 2D wavetable oscillators
* 2D wavetables that can be set via Python or a C++ API
* Polyphony
* Various filters and FX
* An API for modular routing (modular synthesis)
* LFOs and envelopes whose outputs are accessible

As a spinoff project, once I figure out some of the wavetable work, it would also be great to have a basic "sampler" instrument like the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++, and then have some of the basic sampler features, such as ADSR envelopes to control volume, panning, filter cutoff, etc.
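The ADSR behavior I want from such a sampler can be sketched per-sample; this is a minimal linear version (real samplers typically use exponential segments):

```python
def adsr(attack, decay, sustain, release, gate_len, sr=44100):
    """Render a linear ADSR envelope as a list of floats.

    attack/decay/release are in seconds, sustain is a level in [0, 1],
    and gate_len is how long (seconds) the note is held before release.
    """
    a, d, r = int(attack * sr), int(decay * sr), int(release * sr)
    g = int(gate_len * sr)
    env = []
    for n in range(g):
        if n < a:                     # rise linearly to 1.0
            env.append(n / max(a, 1))
        elif n < a + d:               # fall linearly to the sustain level
            t = (n - a) / max(d, 1)
            env.append(1.0 + t * (sustain - 1.0))
        else:                         # hold sustain until note-off
            env.append(sustain)
    level = env[-1] if env else 0.0
    for n in range(r):                # release from the current level
        env.append(level * (1.0 - n / max(r, 1)))
    return env
```

The same envelope stream could drive volume, panning, or a filter cutoff; only the destination differs.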

I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.

'''Weeks 1-3:'''
* I learned Faust.
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK's. An additional feature is that it can generate a UI of TouchDesigner widgets from the Faust code you write, using the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]; that path lacks the UI generator, but it is polyphonic with hardware MIDI.

'''Week 4:'''
* I got somewhat distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR, which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work; for example, how an ADSR can be routed to affect a filter cutoff.
* I browsed some sampler repositories, such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. The demo plays back a sample at different speeds based on the MIDI note, using MPE (MIDI Polyphonic Expression); samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code to see whether I can add more modular features, like routing an ADSR to a filter cutoff.
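The demo's pitch mechanism (play the sample at a note-dependent speed, with linear interpolation between neighboring samples) boils down to the following sketch; the `root_note` default is my own choice, not the demo's:

```python
# Speed ratio for a MIDI note relative to the sample's original pitch:
# one octave (12 semitones) doubles the playback speed.
def note_ratio(note, root_note=60):
    return 2.0 ** ((note - root_note) / 12.0)

# Resample by stepping through the source at `ratio` samples per output
# sample, linearly interpolating between the two nearest input samples.
def resample_linear(samples, ratio):
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

A ratio above 1.0 pitches the sample up (and shortens it); below 1.0 pitches it down. Rubberband-style time-stretching, by contrast, changes duration without changing pitch.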

'''Week 5:'''
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].
* I added Sampler as a submodule of [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I can use Python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to a WAV file.

'''Week 6:'''
* I made a Chugin for the Sampler. The pull request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built-in ADSRs for volume and filter cutoff.
* I added a Faust processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to apply EQs and multiband sidechain compression in DawDreamer, making it a much better tool for researching automatic music mastering.
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how it does modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital are [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out whether a requested modular connection will lead to a feedback loop and, if so, inserts a "feedback" node. Note how the "reorder" method plays a role in adding/removing modular routings. As a high-level overview: [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.
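That feedback check is essentially graph cycle detection: a new route from source to dest closes a loop exactly when dest can already reach source. A compact Python analogue (Vital inserts a one-block-delay "feedback" node in that case; this sketch just flags it):

```python
from collections import defaultdict

# Minimal analogue of Vital's routing graph with a pre-connect cycle check.
class Router:
    def __init__(self):
        self.edges = defaultdict(set)  # node -> set of downstream nodes

    def reaches(self, start, target):
        """Depth-first search: can `start` reach `target` along edges?"""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(self.edges[node])
        return False

    def connect(self, source, dest):
        """Add a route; return True if it closes a feedback loop."""
        is_feedback = self.reaches(dest, source)
        self.edges[source].add(dest)
        return is_feedback

r = Router()
r.connect("lfo1", "filter_cutoff")
r.connect("filter_cutoff", "out")
```

With no cycles, the graph stays a DAG and a topological order (what "reorder" maintains) gives a valid per-block processing order.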

'''Week 7:'''
* I learned how to use [https://github.com/breakfastquay/rubberband/ Rubberband] for time-stretching and pitch-shifting. I made a [https://github.com/DBraun/chugins/tree/feature/Rubberband/WarpBuf WarpBuf] Chugin that uses Rubberband for these features. It also parses Ableton .asd files, which contain warp markers. I'd like to add Rubberband and this Ableton warp-marker parsing to DawDreamer. That would let me use Python to align tracks of different tempos, which would be great for generating datasets for music transcription and source separation.
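Warp markers pair a beat position with a position in seconds, so mapping beats to audio time is piecewise-linear interpolation between markers (the marker layout below is my simplification of what an .asd file encodes):

```python
import bisect

# Map a beat position to seconds by interpolating between warp markers,
# each a (beats, seconds) pair sorted by beats. Beyond the last marker
# (or before the first) we extrapolate at the nearest segment's tempo.
def beat_to_seconds(markers, beat):
    beats = [m[0] for m in markers]
    i = bisect.bisect_right(beats, beat) - 1
    i = max(0, min(i, len(markers) - 2))   # clamp to a valid segment
    (b0, s0), (b1, s1) = markers[i], markers[i + 1]
    return s0 + (beat - b0) * (s1 - s0) / (b1 - b0)

# Two markers: beat 0 at 0 s, beat 4 at 2 s, i.e. the clip plays at 120 BPM.
markers = [(0.0, 0.0), (4.0, 2.0)]
```

Aligning two tracks then means evaluating each track's own marker map at the same beat grid and time-stretching each segment accordingly.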

'''Week 8:'''
* I published a repo for parsing Ableton warp files: [https://github.com/DBraun/AbletonParsing AbletonParsing].
* I added Rubberband to a branch of DawDreamer. It's working well. I can control the ordinary clip settings such as loop start, loop end, loop on/off, start marker, and end marker. I can also give the clip a "clip start" parameter in beats relative to the entire audio render. Soon I'll add a "clip end" parameter.
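Those clip settings amount to a mapping from the global render position to a position inside the source clip; a simplified model (parameter names are mine, not DawDreamer's):

```python
# Given a position (in beats) relative to the whole render, compute the
# position inside the clip's source material, honoring the clip start,
# start marker, and loop region.
def clip_position(global_beat, clip_start, start_marker,
                  loop_on, loop_start, loop_end):
    if global_beat < clip_start:
        return None  # the clip hasn't started yet
    pos = start_marker + (global_beat - clip_start)
    if loop_on and pos >= loop_end:
        loop_len = loop_end - loop_start
        pos = loop_start + (pos - loop_start) % loop_len
    return pos
```

A "clip end" parameter would add one more early-out, returning None once `global_beat` passes it.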

'''Week 9:'''
* I worked on a function to add multiple instances of a clip in the Rubberband branch of DawDreamer. It's not quite working yet.

'''Week 10:'''
* Preparing the presentation.

New abstract:
== Integrating JUCE, Faust, ChucK, Python, TouchDesigner ==

I'll summarize my projects integrating JUCE, Faust, ChucK, Python, and TouchDesigner. That's 10 (4+3+2+1) pairwise combinations, and I'll cover 7 of them. I'll emphasize my Python framework that sets up Faust for deep learning frameworks. In future projects it could be used for intelligent music production, mastering, reverb matching, and more.

== Anti-Alias techniques ==
* https://forum.juce.com/t/antialiasing-a-synth/44527

== Modular synthesis ==
* https://en.wikipedia.org/wiki/Reactive_programming#Cyclic_dependencies
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].<br />
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some repositories of Samplers such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (Midi Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.<br />
<br />
'''Week 5:'''<br />
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].<br />
* I added Sampler as a submodule to [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I use python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to wavfile.<br />
<br />
'''Week 6:'''<br />
* I made a Chugin for the Sampler. The pull-request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built in ADSR for volume and filter cutoff.<br />
* I added a Faust Processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to EQs and multiband sidechain compression in DawDreamer. This makes it a much better tool for researching automatic music mastering.<br />
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how they do modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital might be [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out if a requested modular connection will lead to a feedback loop, and if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/remove modular routings. Another high-level overview is that [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.<br />
<br />
'''Week 7:'''<br />
* I learned how to use [https://github.com/breakfastquay/rubberband/ Rubberband] for time-stretching and pitch-stretching. I made a [https://github.com/DBraun/chugins/tree/feature/Rubberband/WarpBuf WarpBuf] Chugin that uses Rubberband for these features. It also parses Ableton asd files that contain warp markers. I'd like to add Rubberband and this Ableton warp marker parsing feature to DawDreamer. This would allow me to use Python to align various tracks of different tempos. This would be great for generating datasets for music transcription and source separation.<br />
<br />
'''Week 8:'''<br />
* I published a repo for parsing Ableton warp files: [https://github.com/DBraun/AbletonParsing AbletonParsing].<br />
* I added Rubberband to a branch of DawDreamer. It's working well. I can control the ordinary clip settings such as loop start, loop end, loop on/off, start marker, end marker. I can also give the clip a "clip start" parameter in beats relative to the entire audio render. Soon I'll add a "clip end" parameter.<br />
<br />
== Anti-Alias techniques ==<br />
* https://forum.juce.com/t/antialiasing-a-synth/44527<br />
<br />
== Modular synthesis ==<br />
* https://en.wikipedia.org/wiki/Reactive_programming#Cyclic_dependencies</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Braun:320C&diff=23135Braun:320C2021-05-24T05:21:56Z<p>Braun: </p>
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].<br />
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some repositories of Samplers such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (Midi Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.<br />
<br />
'''Week 5:'''<br />
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].<br />
* I added Sampler as a submodule to [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I use python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to wavfile.<br />
<br />
'''Week 6:'''<br />
* I made a Chugin for the Sampler. The pull-request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built in ADSR for volume and filter cutoff.<br />
* I added a Faust Processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to EQs and multiband sidechain compression in DawDreamer. This makes it a much better tool for researching automatic music mastering.<br />
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how they do modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital might be [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out if a requested modular connection will lead to a feedback loop, and if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/remove modular routings. Another high-level overview is that [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.<br />
<br />
'''Week 7:'''<br />
* I learned how to use [https://github.com/breakfastquay/rubberband/ Rubberband] for time-stretching and pitch-stretching. I made a [https://github.com/DBraun/chugins/tree/feature/Rubberband/WarpBuf WarpBuf] Chugin that uses Rubberband for these features. It also parses Ableton asd files that contain warp markers. I'd like to add Rubberband and this Ableton warp marker parsing feature to DawDreamer. This would allow me to use Python to align various tracks of different tempos. This would be great for generating datasets for music transcription and source separation.<br />
<br />
'''Week 8:'''<br />
* I published a repo for parsing Ableton warp files: [https://github.com/DBraun/AbletonParsing AbletonParsing].<br />
* I added Rubberband to a branch of DawDreamer. It's working well. I can control the ordinary clip settings such as loop start, loop end, loop on/off, start marker, end marker. I can also give the clip a "clip start" parameter in beats relative to the entire audio render. Soon I'll add a "clip end" parameter.<br />
<br />
== Anti-Alias techniques ==<br />
https://forum.juce.com/t/antialiasing-a-synth/44527</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Braun:320C&diff=23134Braun:320C2021-05-19T23:14:49Z<p>Braun: </p>
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].<br />
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some repositories of Samplers such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (Midi Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.<br />
<br />
'''Week 5:'''<br />
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].<br />
* I added Sampler as a submodule to [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I use python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to wavfile.<br />
<br />
'''Week 6:'''<br />
* I made a Chugin for the Sampler. The pull-request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built in ADSR for volume and filter cutoff.<br />
* I added a Faust Processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to EQs and multiband sidechain compression in DawDreamer. This makes it a much better tool for researching automatic music mastering.<br />
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how they do modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital might be [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out if a requested modular connection will lead to a feedback loop, and if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/remove modular routings. Another high-level overview is that [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.<br />
<br />
'''Week 7:'''<br />
* I learned how to use [https://github.com/breakfastquay/rubberband/ Rubberband] for time-stretching and pitch-stretching. I made a [https://github.com/DBraun/chugins/tree/feature/Rubberband/WarpBuf WarpBuf] Chugin that uses Rubberband for these features. It also parses Ableton asd files that contain warp markers. I'd like to add Rubberband and this Ableton warp marker parsing feature to DawDreamer. This would allow me to use Python to align various tracks of different tempos. This would be great for generating datasets for music transcription and source separation.<br />
<br />
'''Week 8:'''<br />
* I published a repo for parsing Ableton warp files: [https://github.com/DBraun/AbletonParsing AbletonParsing].<br />
<br />
== Anti-Alias techniques ==<br />
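One technique that comes up often for band-limiting classic oscillator shapes is PolyBLEP: generate the naive waveform, then apply a two-sample polynomial correction around each discontinuity. A minimal sketch of a PolyBLEP sawtooth (my own illustration, not code from the thread below):

```python
# Sketch: naive sawtooth with a PolyBLEP correction at the phase wrap.

def poly_blep(t, dt):
    """Polynomial band-limited step residual near a discontinuity.
    t is the normalized phase in [0, 1); dt is the phase increment."""
    if t < dt:                    # just after the discontinuity
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:              # just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def saw(freq, sr, n):
    dt = freq / sr                # phase increment per sample
    phase, out = 0.0, []
    for _ in range(n):
        naive = 2.0 * phase - 1.0           # naive sawtooth in [-1, 1)
        out.append(naive - poly_blep(phase, dt))
        phase = (phase + dt) % 1.0
    return out

samples = saw(440.0, 44100.0, 64)
```

The correction only touches the two samples nearest each wrap, so it's cheap compared to oversampling or wavetable approaches, at the cost of being an approximation.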
https://forum.juce.com/t/antialiasing-a-synth/44527</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Braun:320C&diff=23129Braun:320C2021-05-18T22:40:35Z<p>Braun: </p>
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].<br />
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some repositories of Samplers such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (Midi Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.<br />
<br />
'''Week 5:'''<br />
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].<br />
* I added Sampler as a submodule to [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I use python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to wavfile.<br />
<br />
'''Week 6:'''<br />
* I made a Chugin for the Sampler. The pull-request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built in ADSR for volume and filter cutoff.<br />
* I added a Faust Processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to EQs and multiband sidechain compression in DawDreamer. This makes it a much better tool for researching automatic music mastering.<br />
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how they do modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital might be [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out if a requested modular connection will lead to a feedback loop, and if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/remove modular routings. Another high-level overview is that [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.<br />
<br />
'''Week 7:'''<br />
* I learned how to use [https://github.com/breakfastquay/rubberband/ Rubberband] for time-stretching and pitch-stretching. I made a [https://github.com/DBraun/chugins/tree/feature/Rubberband/WarpBuf WarpBuf] Chugin that uses Rubberband for these features. It also parses Ableton asd files that contain warp markers. I'd like to add Rubberband and this Ableton warp marker parsing feature to DawDreamer. This would allow me to use Python to align various tracks of different tempos. This would be great for generating datasets for music transcription and source separation.<br />
<br />
'''Week 8:'''<br />
* I added Ableton warp file parsing to [https://github.com/bmcfee/pyrubberband/pull/26 pyrubberband].<br />
== Anti-Alias techniques ==<br />
https://forum.juce.com/t/antialiasing-a-synth/44527</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Braun:320C&diff=23128Braun:320C2021-05-18T17:15:00Z<p>Braun: </p>
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].<br />
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some repositories of Samplers such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (Midi Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.<br />
<br />
'''Week 5:'''<br />
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].<br />
* I added Sampler as a submodule to [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I use python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to wavfile.<br />
<br />
'''Week 6:'''<br />
* I made a Chugin for the Sampler. The pull-request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built in ADSR for volume and filter cutoff.<br />
* I added a Faust Processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to EQs and multiband sidechain compression in DawDreamer. This makes it a much better tool for researching automatic music mastering.<br />
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how they do modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital might be [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out if a requested modular connection will lead to a feedback loop, and if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/remove modular routings. Another high-level overview is that [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.<br />
<br />
'''Week 7:'''<br />
* I learned how to use [https://github.com/breakfastquay/rubberband/ Rubberband] for time-stretching and pitch-stretching. I made a [https://github.com/DBraun/chugins/tree/feature/Rubberband/WarpBuf WarpBuf] Chugin that uses Rubberband for these features. It also parses Ableton asd files that contain warp markers. I'd like to add Rubberband and this Ableton warp marker parsing feature to DawDreamer. This would allow me to use Python to align various tracks of different tempos. This would be great for generating datasets for music transcription and source separation.<br />
<br />
== Anti-Alias techniques ==<br />
https://forum.juce.com/t/antialiasing-a-synth/44527</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Braun:320C&diff=23127Braun:320C2021-05-18T17:14:10Z<p>Braun: </p>
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].<br />
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some repositories of Samplers such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (Midi Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.<br />
<br />
'''Week 5:'''<br />
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].<br />
* I added Sampler as a submodule to [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I use python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to wavfile.<br />
<br />
'''Week 6:'''<br />
* I made a Chugin for the Sampler. The pull-request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built in ADSR for volume and filter cutoff.<br />
* I added a Faust Processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to EQs and multiband sidechain compression in DawDreamer. This makes it a much better tool for researching automatic music mastering.<br />
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how they do modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital might be [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out if a requested modular connection will lead to a feedback loop, and if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/remove modular routings. Another high-level overview is that [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.<br />
<br />
'''Week 7:'''<br />
I learned how to use [https://github.com/breakfastquay/rubberband/ Rubberband] for time-stretching and pitch-stretching. I made a [https://github.com/DBraun/chugins/tree/feature/Rubberband/WarpBuf WarpBuf] Chugin that uses Rubberband for these features. It also parses Ableton asd files that contain warp markers. I'd like to add Rubberband and this Ableton warp marker parsing feature to DawDreamer. This would allow me to use Python to align various tracks of different tempos. This would be great for generating datasets for music transcription and source separation.<br />
<br />
== Anti-Alias techniques ==<br />
https://forum.juce.com/t/antialiasing-a-synth/44527</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Braun:320C&diff=23098Braun:320C2021-05-09T22:19:02Z<p>Braun: </p>
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [https://ccrma.stanford.edu/~rmichon/fauck/ FaucK] working on Windows. A pull request for the chugins repository is [https://github.com/ccrma/chugins/pull/49 here].<br />
* I made a basic IDE for Faust inside TouchDesigner: [https://github.com/DBraun/TD-Faust/ TD-Faust]. The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h APIUI] as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h MidiUI]. This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got kind of distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR which got merged: [https://github.com/grame-cncm/faust/pull/576 https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [https://github.com/mtytel/vital/ Vital], which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some repositories of Samplers such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio (https://github.com/juce-framework/JUCE/issues/893) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note. It uses MPE (Midi Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing if I can add more modular features, like routing an ADSR to a filter cutoff.<br />
<br />
'''Week 5:'''<br />
* I made a public repo for [https://github.com/DBraun/Sampler Sampler].<br />
* I added Sampler as a submodule to [https://github.com/DBraun/DawDreamer/ DawDreamer]. Now I use python to load a sample, play it with MIDI data, adjust ADSR parameters, and render to wavfile.<br />
<br />
'''Week 6:'''<br />
* I made a Chugin for the Sampler. The pull-request is here: [https://github.com/ccrma/chugins/pull/50 https://github.com/ccrma/chugins/pull/50]. For people familiar with ChucK, it's like a better SndBuf that supports polyphony and has built in ADSR for volume and filter cutoff.<br />
* I added a Faust Processor to [https://github.com/DBraun/DawDreamer/ DawDreamer]. This is a great way to EQs and multiband sidechain compression in DawDreamer. This makes it a much better tool for researching automatic music mastering.<br />
* I studied the source code of [https://github.com/surge-synthesizer/surge/ Surge] to understand how they do modular synthesis but abandoned it in favor of [https://github.com/mtytel/vital/ Vital]. Some of the most interesting, important, and impressively written files in Vital might be [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L179-L191 processor_router.cpp] and [https://github.com/mtytel/vital/blob/main/src/synthesis/framework/processor.cpp processor.cpp]. For processor_router.cpp, I've linked to the section that figures out if a requested modular connection will lead to a feedback loop, and if so, inserts a "feedback" node. Look at how the "reorder" method plays a role in adding/remove modular routings. Another high-level overview is that [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/plugin/synth_plugin.cpp#L170 synth_plugin.cpp] calls processAudio, which gets an engine in [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/common/synth_base.cpp#L584 synth_base.cpp] to call [https://github.com/mtytel/vital/blob/c0694a193777fc97853a598f86378bea625a6d81/src/synthesis/framework/processor_router.cpp#L66-L92 process] in processor_router.cpp.<br />
<br />
== Anti-Alias techniques ==<br />
https://forum.juce.com/t/antialiasing-a-synth/44527</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Braun:320C&diff=23005Braun:320C2021-04-25T19:04:43Z<p>Braun: </p>
<hr />
<div>Final Project Proposal:<br />
<br />
I'm a fan of the [https://xferrecords.com/products/serum Serum] synthesizer, but I've run into issues using it with my [https://github.com/DBraun/DawDreamer/ DawDreamer]. For example, with Serum and DawDreamer, I can load presets and change various parameters such as the amount of modulation from Envelope 1 to Oscillator 2's panning, but I can't change the routing itself. The routing is baked into the preset. In other words, the modulation matrix can't change, only the amount of modulation for each entry in the table. I want some kind of API in which I can decide the routing.<br />
<br />
The second major issue is that I can't load a Wavetable from Python. The wavetable is also baked into the preset in some non-modifiable way.<br />
<br />
The third issue is not related to sound design but instead to using the sound signals for visual design. I'd like to have access to all of the intermediate signals in a modular synthesis setup. So if there are envelopes and LFOs, I'd want an audio-rate stream of all of them in addition to the final stereo signal. I intend to use these signals some time later for real-time audio-reactive visual design.<br />
<br />
To summarize, I plan on making a wavetable synthesizer with many features:<br />
* 2D Wavetable oscillators<br />
* The 2D wavetables can be set via Python or some C++ API.<br />
* Polyphony<br />
* Various filters and FX<br />
* API for modular routing (modular synthesis)<br />
* LFOs and Envelopes whose outputs are accessible<br />
<br />
As a spinoff project, once I figure out some of the wavetable stuff, it would also be great to have a basic "sampler" instrument such as the one in Ableton Live or Kontakt. I want to be able to provide samples via Python or C++. I'd then want to have some of the basic Sampler features such as ADSR envelopes to control volume, panning, filter cutoffs etc.<br />
<br />
I need to look at the inner workings of [https://github.com/surge-synthesizer/surge-python/ surge-python] because it might have a good example of setting modular routing via Python.<br />
<br />
'''Weeks 1-3:'''<br />
* I learned Faust.<br />
* I got [FaucK](https://ccrma.stanford.edu/~rmichon/fauck/) working on Windows. A pull request for the chugins repository is [here](https://github.com/ccrma/chugins/pull/49).<br />
* I made a basic IDE for Faust inside TouchDesigner: [TD-Faust](https://github.com/DBraun/TD-Faust/). The basic workflow is the same as FaucK. An additional cool feature is that it can generate a UI of TouchDesigner widgets based on the Faust code you write. This uses the same [APIUI](https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/APIUI.h) as FaucK. An alternative workflow uses the polyphonic DSP factory classes and the [MidiUI](https://github.com/grame-cncm/faust/blob/master-dev/architecture/faust/gui/MidiUI.h). This doesn't have the same UI generator feature, but at least it's polyphonic with hardware MIDI.<br />
<br />
'''Week 4:'''<br />
* I got somewhat distracted and put a lot of time into fixing faust2juce for Windows. Here's the PR, which got merged: [https://github.com/grame-cncm/faust/pull/576]<br />
* I looked at the source code of [Vital](https://github.com/mtytel/vital/), which does spectral deformations of wavetables. I'm very interested in understanding how its modular aspects work, for example, how an ADSR can be routed to affect a filter cutoff.<br />
* I browsed some sampler repositories such as [https://github.com/NiklasWan/CTAG-JUCE-Sampler CTAG-JUCE-Sampler] and [https://github.com/vincentchoqueuse/JUCE_simple_sampler/blob/master/Source/CustomSampler.cpp#L271 JUCE_simple_sampler].<br />
* Ultimately I decided to clone JUCE's official [https://github.com/juce-framework/JUCE/blob/master/examples/Plugins/SamplerPluginDemo.h SamplerPluginDemo]. I encountered an issue with not hearing audio ([https://github.com/juce-framework/JUCE/issues/893 juce-framework/JUCE#893]) but partially resolved it. This project plays back a sample at different speeds based on the MIDI note, using MPE (MIDI Polyphonic Expression). The samples are linearly interpolated for different pitches. There are no amplitude envelopes or filters. Now I'm studying the code and seeing whether I can add more modular features, like routing an ADSR to a filter cutoff.<br />
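That routing idea can be prototyped outside JUCE first. Here is a Python sketch (parameter names and the ADSR shape are my own assumptions) in which an envelope's output is recomputed per sample and drives a one-pole lowpass cutoff, which is the modulation-matrix idea in miniature:<br />

```python
import math

def adsr(t, a=0.01, d=0.1, s=0.7, note_len=0.5, r=0.2):
    """Linear ADSR value at time t (seconds). Hypothetical helper."""
    if t < a:
        return t / a
    if t < a + d:
        return 1.0 - (1.0 - s) * (t - a) / d
    if t < note_len:
        return s
    if t < note_len + r:
        return s * (1.0 - (t - note_len) / r)
    return 0.0

def filter_with_env_cutoff(signal, sr=48000.0, base_hz=200.0,
                           env_amount_hz=4000.0):
    """One-pole lowpass whose cutoff is recomputed every sample as
    base + amount * envelope: the envelope output drives a filter
    parameter, exactly the kind of routing a mod matrix expresses."""
    y = 0.0
    out = []
    for n, x in enumerate(signal):
        cutoff = base_hz + env_amount_hz * adsr(n / sr)
        # Standard one-pole smoothing coefficient for this cutoff.
        g = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)
        y += g * (x - y)
        out.append(y)
    return out

step = [1.0] * 2048
filtered = filter_with_env_cutoff(step)
```

Generalizing this into "any source modulates any destination" is then a matter of storing (source, destination, amount) entries and summing contributions per sample.<br />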
<br />
<br />
== Anti-Alias techniques ==<br />
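A standard answer for wavetable oscillators is mip-mapping: precompute progressively bandlimited copies of each table and, per note, pick the copy whose highest partial stays below Nyquist. A Python sketch using an additively built sawtooth (function names are my own):<br />

```python
import math

def bandlimited_saw_table(num_harmonics, size=2048):
    """Additive sawtooth containing only the first num_harmonics partials,
    so playing it below a known fundamental cannot alias."""
    out = []
    for n in range(size):
        phase = 2.0 * math.pi * n / size
        s = sum(math.sin(k * phase) / k
                for k in range(1, num_harmonics + 1))
        out.append(s * 2.0 / math.pi)
    return out

def build_mipmaps(sr=48000.0, size=2048):
    """One table per octave, keyed by the highest fundamental frequency
    at which that table stays below Nyquist."""
    tables = {}
    f = 20.0
    while f < sr / 2.0:
        tables[f] = bandlimited_saw_table(int(sr / 2.0 / f), size)
        f *= 2.0
    return tables

def pick_table(tables, freq_hz):
    """Most detailed table still alias-safe for freq_hz (falls back to
    the dullest table for notes above the highest key)."""
    usable = [f for f in tables if f >= freq_hz]
    return tables[min(usable)] if usable else tables[max(tables)]

tables = build_mipmaps(size=256)   # small size to keep the sketch fast
table_for_a4 = pick_table(tables, 440.0)
```

Oversampling the oscillator and decimating is the other common option; mip-mapping trades memory for per-sample cost instead.<br />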
https://forum.juce.com/t/antialiasing-a-synth/44527</div>Braunhttps://ccrma.stanford.edu/mediawiki/index.php?title=Colloquium&diff=22539Colloquium2020-09-16T16:26:24Z<p>Braun: /* Autumn Quarter (2021) */</p>
<hr />
<div>@5:30pm in the Classroom on Wednesdays!<br />
<br />
The CCRMA Colloquium is a weekly gathering of CCRMA students, faculty, staff, and guests. It is an opportunity for members of the CCRMA community and invited speakers to share the work that they are doing in the fields of Computer Music, Audio Signal Processing and Music Information Retrieval, Psychoacoustics, and related fields. The colloquium typically happens every Wednesday during the school year from 5:30 - 7:00pm and meets in the CCRMA Classroom, Knoll 217, unless otherwise noted. <br />
<br />
The colloquium team for 2019-2020 is:<br /><br />
Camille Noufi - cnoufi@ccrma.stanford.edu <br /><br />
Barbara Nerness - bnerness@ccrma.stanford.edu <br /><br />
Kunwoo Kim - kunwoo@ccrma.stanford.edu <br /><br />
Mike Mulshine - mrmulshine@ccrma.stanford.edu <br /><br />
<br /><br />
<br />
*Note: the colloquium will not be held every Wednesday this year (20-21); please keep an eye on the notification e-mails for the dates.<br />
<br />
<br />
= Autumn Quarter (2021)=<br />
<span style="color:red">'''In-person colloquia will not be held for the 2020 Autumn Quarter. All events will be held remotely.'''</span><br />
<br />
9/16 New Student Introductions<br />
** Speaker 1: Lloyd May<br />
** Speaker 2: Andrew Zhu<br />
** Speaker 3: Kathleen Yuan<br />
** Speaker 4: Marise van Zyl<br />
** Speaker 5: Hannah Choi<br />
** Speaker 6: Joss Saltzman<br />
** Speaker 7: Champ Darabundit<br />
** Speaker 8: Clara Allison<br />
** Speaker 9: David Braun<br />
** Speaker 10:<br />
** Speaker 11:<br />
** Speaker 12:<br />
** Speaker 13:<br />
** Speaker 14:<br />
** Speaker 15:<br />
<br />
9/23 Faculty/Staff Introductions<br />
** Speaker 1: Jonathan B. (flexible - can be later or next week - but prefer on early side)<br />
** Speaker 2: Ge Wang (also flex)<br />
** Speaker 3: <br />
** Speaker 4:<br />
** Speaker 5: Eleanor Selfridge-Field<br />
** Speaker 6: Craig Stuart Sapp<br />
** Speaker 7:<br />
** Speaker 8:<br />
** Speaker 9:<br />
** Speaker 10:<br />
** Speaker 11:<br />
** Speaker 12:<br />
** Speaker 13:<br />
** Speaker 14:<br />
** Speaker 15:<br />
<br />
9/30 Faculty/Staff Introductions<br />
** Speaker 1:<br />
** Speaker 2:<br />
** Speaker 3: Marina Bosi<br />
** Speaker 4: Nando (aka Fernando Lopez-Lezcano)<br />
** Speaker 5:<br />
** Speaker 6:<br />
** Speaker 7:<br />
** Speaker 8:<br />
** Speaker 9:<br />
** Speaker 10:<br />
** Speaker 11:<br />
** Speaker 12:<br />
** Speaker 13:<br />
** Speaker 14:<br />
** Speaker 15:<br />
<br />
= Winter Quarter (2021)=<br />
<br />
* '''1/13:'''<br />
* '''1/20:'''<br />
* '''1/27:'''<br />
* '''2/03:'''<br />
* '''2/10:'''<br />
* '''2/17:'''<br />
* '''2/24:'''<br />
* '''3/03:'''<br />
* '''3/10:'''<br />
* '''3/17:'''<br />
<br />
= Fall Quarter (2019)=<br />
<br />
* '''9/25: New Student Presentations''' (Week 1)<br />
** Speaker 1: Jeremy Raven<br />
** Speaker 2: Brendan Larkin<br />
** Speaker 3: Raul Altosaar<br />
** Speaker 4: Jan Stoltenberg<br />
** Speaker 5: Vivian Chen<br />
** Speaker 6: Ty Sadlier<br />
** Speaker 7: Kunwoo Kim<br />
** Speaker 8: Andrea Baldioceda<br />
** Speaker 9: Varsha Sankar<br />
** Speaker 10: Mike Mulshine<br />
<br />
<br />
* '''10/2: Faculty Introductions''' (Week 2)<br />
** Speaker 1: Patricia Alessandrini <br />
** Speaker 2: Eleanor Selfridge Field<br />
** Speaker 3: Craig Stuart Sapp<br />
** Speaker 4: JRB<br />
** Speaker 5: Takako<br />
** Speaker 6: Ge <br />
** Speaker 7: Jarek <br />
** Speaker 8: Blair Kaneshiro <br />
** Speaker 9: Matt Wright<br />
** Speaker 10: Fernando Lopez-Lezcano<br />
** Speaker 11: Anne Hege <br />
** Speaker 12: Julius Smith<br />
** Speaker 13: Elena Georgieva<br />
** Speaker 14: Marina Bosi <br />
** Speaker 15: Hongchan Choi<br />
<br />
<br />
* '''10/9: ''' (Week 3): YOM KIPPUR - no colloquium<br />
<br />
<br />
* '''10/16: Rapid-Fire Talks''' (Week 4)<br />
** Speaker 1: Jack<br />
** Speaker 2: Jason<br />
** Speaker 3: Ge<br />
** Speaker 4: Noah<br />
** Speaker 5: Elliot<br />
** Speaker 6: Barbara<br />
** Speaker 7: Orchi<br />
** Speaker 8: Matt (the "after" of my Modulations instrument, hopefully this time with MIDI working)<br />
** Speaker 9: CCRMA composting<br />
** Speaker 10: Jatin<br />
** Speaker 11: Mark<br />
** Speaker 12: Elena<br />
** Speaker 13: Carlos<br />
<br />
<br />
* '''10/23: [http://www.arj.no/ Alexander Jensenius]''' (Week 5) <br />
<br />
* '''10/30: No Colloquium''' (Week 6) <br />
<br />
* '''11/6: [http://www.annehege.com/ Anne Hege]''' (Week 7)<br />
<br />
* '''11/13: CCRMA Town Hall''' (Week 8)<br />
<br />
* '''11/20: [https://www.donlewismusic.com/ Don Lewis]''' (Week 9)<br />
<br />
* '''Thanksgiving week''' <br />
<br />
* '''12/4: [https://ccrma.stanford.edu/groups/vr/ VR Lab Day]''' (Week 11)</div>Braun