Strange Convo
Revision as of 10:01, 18 December 2018

Introduction

Strange Convo is a Max4Live audio effect that uses pseudo-real-time convolution with a sampling paradigm to dynamically filter an audio signal. It allows an Ableton Live user to sample stereo audio into the device's buffer, which is then convolved with the incoming audio. The convolved audio sounds similar to a resonator: the frequencies common to the sample and the real-time input are amplified, and the uncommon frequencies are attenuated. Unlike a resonator, however, the resonant frequencies change with the buffer sample's timbral qualities over time. Strange Convo is particularly good at smearing two harmonic sources into ambient washes, or at applying tonal/chordal qualities to percussive sounds. By encouraging experimentation with different sample sources, Strange Convo pushes users toward creative new applications of convolution cross-synthesis.

Strange Convo video documentation on Vimeo

Strange Convo Max4Live patch download

Phase 1: Researching

I spent weeks 1–3 of the quarter researching potential project paths. I brainstormed several ideas, including an interactive synthesizer tutorial website made with WebAudio, an audio effect plugin that emulates the sound of a laptop mic run through Skype (a modulation, filter, and distortion multieffect), and some sort of unique application of convolution. For each of these potential routes, I looked into the surrounding literature and libraries online. For the WebAudio project, I spent several days experimenting with simple WebAudio features until I found a library called Tone.js that makes synth and effect creation significantly easier and faster than raw WebAudio. Through my concurrent research into convolution techniques, I realized that I could easily record an impulse played back through Skype (and/or recorded with a laptop mic) to get a static snapshot of the filtering artifacts (though not of time-based irregularities). Capturing an IR through Skype could help me further uncover the artifacts' timbres and hopefully emulate the sound through other means.
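
The "static snapshot" idea amounts to ordinary convolution with the captured impulse. A minimal NumPy/SciPy sketch of the principle (the 32-tap moving average below is a hypothetical stand-in for the recorded Skype/laptop-mic impulse, used only so the example is self-contained):

```python
import numpy as np
from scipy.signal import fftconvolve

rate = 44100
# Hypothetical stand-in for a recorded impulse: a 32-tap moving
# average, i.e. a crude lowpass. In practice this array would hold
# the impulse actually recorded through Skype or the laptop mic.
ir = np.ones(32) / 32

# Two-tone test signal: one partial well below the IR's first
# spectral null (~1.4 kHz here) and one well above it.
t = np.arange(rate) / rate
dry = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 8000 * t)

# Convolving with the IR imposes its (static) frequency response.
wet = fftconvolve(dry, ir)

def level(x, freq):
    """Magnitude of x's spectrum at the bin nearest freq (Hz)."""
    spec = np.abs(np.fft.rfft(x))
    return spec[int(round(freq * len(x) / rate))]
```

Here the 8 kHz partial of `wet` comes out strongly attenuated while the 200 Hz partial passes nearly untouched: the snapshot captures the filtering, but any time-varying Skype behavior (dropouts, adaptive processing) is lost, exactly the limitation noted above.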

While further researching impulse response capture and convolution reverb techniques, I found several websites that recommended using non-impulse-response sources in convolution reverbs for creative results. Most convolution reverbs use a file browser or a drag-and-drop paradigm for loading new impulses. It occurred to me that if a convolution effect did not prioritize standard impulse responses, the method of loading or capturing the "impulse" could be completely different. It was at this point that I knew I would create a convolution effect with an interface built around sampling. Furthermore, the idea of convolving two real-time audio signals seemed like it would produce incredibly unusual results. I set out to research different applications of audio convolution that could inform the design of my effect.
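
The intuition behind those unusual results: convolution in the time domain is multiplication in the frequency domain, so only frequencies present in both sources come through at full strength. A small sketch with two synthetic tones (hypothetical stand-ins for two real musical sources):

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate
# Hypothetical stand-ins for two musical sources: both contain a
# 440 Hz partial, and each has one partial the other lacks.
a = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)
b = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 550 * t)

# Linear convolution via the FFT: multiply the two spectra, then
# transform back. Only the shared 440 Hz partial survives strongly;
# the partials unique to one source are sharply attenuated.
n = len(a) + len(b) - 1
out = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

def mag(x, freq):
    """Magnitude of x's spectrum at the bin nearest freq (Hz)."""
    spec = np.abs(np.fft.rfft(x))
    return spec[int(round(freq * len(x) / rate))]
```

This spectral-intersection behavior is exactly the resonator-like quality described in the introduction: common frequencies are amplified, uncommon ones attenuated.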

I decided early on that I wanted to use Max4Live to build this effect. Luckily, I found a set of Max externals developed for a variety of convolution-related tasks and distributed under a modified BSD license: the HISSTools Impulse Response Toolbox (http://eprints.hud.ac.uk/id/eprint/14897/). After inspecting all of its externals in Max, I found that the multiconvolve~ object was most relevant to my design. This object combines "time domain convolution for the early portion of an IR with more efficient FFT-based partitioned convolution for the latter parts of the IR." Because of the multiconvolve~ object's design, it cannot convolve two real-time signals and needs the "IR" to be loaded into a Max buffer~ object. With this in mind, I began focusing exclusively on designing the sampling paradigm.
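
The quoted description boils down to partitioned convolution: chop the IR into blocks, convolve the input with each block, and overlap-add the results at their respective delays. A simplified uniform-partition sketch in Python (multiconvolve~ additionally runs the earliest portion as direct time-domain convolution to keep latency low; this sketch ignores that detail and is not the HISSTools implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def partitioned_convolve(x, ir, block=64):
    """Uniformly partitioned convolution: split the IR into fixed-size
    blocks, convolve the input with each block separately, and
    overlap-add the results, each delayed by its block's offset."""
    out = np.zeros(len(x) + len(ir) - 1)
    for i in range(0, len(ir), block):
        part = ir[i:i + block]
        seg = fftconvolve(x, part)   # convolution with one partition
        out[i:i + len(seg)] += seg   # delayed by the partition offset
    return out
```

Because convolution is linear, summing the delayed per-partition results reproduces the full convolution exactly; the payoff in a real-time engine is that each small FFT partition can be processed as audio arrives instead of waiting for the whole IR.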

Phase 2: Designing and Prototyping

Phase 3: Building and Tuning

Reflections / Future Plans