Idea / Premise
Sound Explorer is an environment for exploring and shaping sounds in real time.
As I looked at the waterfall plots generated by sndpeek and by previous assignments in this class, I was fascinated by how much information about the audio being played could be taken in at a glance. Correlating the audio being heard with the visual display gave me a better understanding of the time and frequency properties of audio. I believe that understanding would deepen if the user could interact with the waveforms, modify them, and hear and see the results.
Sound Explorer will allow the user to interactively construct and shape sets of waveforms. The results will be displayed on the waterfall display and played back in real time. The ways in which the waveform can be shaped are:
- Frequency domain:
- Generate a harmonic series starting at a given frequency
- Control the amount of inharmonicity (i.e. how much the partials deviate from integer multiples of the base frequency).
- Generate white noise
- Draw and apply a spectral envelope
- Time domain:
- Draw and apply a time-domain envelope
In addition, for most of the above shaping methods, there will be a way to control which part of the waveform they apply to. For example, it will be possible to apply a time-domain envelope to a subset of the spectrum.
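As a concrete illustration of the frequency-domain shaping described above, here is a minimal sketch of harmonic-series generation with an inharmonicity control. The function names and the stretched-partial formula are illustrative assumptions, not the program's actual code: with inharmonicity 0 the partials are exact integer multiples of the base frequency, and larger values stretch the upper partials.

```cpp
#include <cmath>
#include <vector>

// Build a harmonic series whose partials may deviate from exact integer
// multiples of the base frequency (simple stretched-partial model).
std::vector<double> harmonicSeries(double baseHz, int numPartials,
                                   double inharmonicity)
{
    std::vector<double> freqs;
    for (int n = 1; n <= numPartials; ++n)
        freqs.push_back(baseHz * n * std::sqrt(1.0 + inharmonicity * n * n));
    return freqs;
}

// Sum the partials into a time-domain buffer (equal amplitudes for brevity).
std::vector<double> renderAdditive(const std::vector<double>& freqs,
                                   int numSamples, double sampleRate)
{
    const double kTwoPi = 6.283185307179586;
    std::vector<double> out(numSamples, 0.0);
    for (double f : freqs)
        for (int i = 0; i < numSamples; ++i)
            out[i] += std::sin(kTwoPi * f * i / sampleRate) / freqs.size();
    return out;
}
```

In a real-time setting the rendered buffer would be handed to the RtAudio callback rather than computed all at once.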
- The interface is made out of three elements: graphical display, keyboard input, and mouse input.
- Graphical display
- The screen is divided into three parts
- Waterfall display: a waterfall plot of the audio currently being played
- Edit window: this is where the user manipulates the waveform. At any time, this window is either in "additive mode" or "envelope mode", and each of those modes can be in the frequency or time domain.
- Apply-to window: in this window, the user highlights which portion of the audio to apply the edit to. The window can be in the time or frequency domain.
- Mouse input: used to draw envelopes and to select ranges (in the apply-to window).
- Keyboard input: used to control modes and various parameters.
- The program will use OpenGL for graphics, RtAudio for audio, and FFT routines from ChucK.
- I will attempt to construct the program using the model / view / controller design pattern. The model, for example, will contain the current (and next) set of waveforms, the current envelope values, and the range and domain(s) to which the envelopes are applied.
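The model described above could be sketched as a plain data structure that the views (waterfall, edit window) read and the controller mutates. This is an assumed layout, not a committed design; every name is a placeholder.

```cpp
#include <vector>

enum class Domain { Time, Frequency };

struct Envelope {
    Domain domain;
    std::vector<double> values;   // sampled envelope curve
};

struct ApplyRange {
    Domain domain;
    double lo, hi;                // seconds or Hz, depending on domain
};

struct Model {
    std::vector<double> current;  // waveform now playing
    std::vector<double> next;     // waveform being edited
    Envelope envelope;
    ApplyRange range;

    // Commit the edited waveform so audio and graphics pick it up.
    void apply() { current = next; }
};
```

Keeping both the current and next waveforms in the model matches the two-step edit/apply interaction described below, while leaving room to collapse them once editing becomes fully immediate.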
- Real-time interaction
- The end goal is to have the user's interactions reflected in audio and graphics immediately. Initially, however, there may be two steps involved: 1) edit the waveform, 2) apply the changes and hear/see them.
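One common way to realize the edit/apply split safely in real time is double buffering: the user edits a back buffer while the audio callback keeps reading the front buffer, and "apply" swaps them with an atomic index so the callback never sees a half-edited waveform. This is a hedged sketch of that idea, not the planned implementation.

```cpp
#include <atomic>
#include <vector>

struct DoubleBuffer {
    std::vector<double> buf[2];
    std::atomic<int> front{0};

    // Buffer the user is editing (not visible to the audio callback).
    std::vector<double>& editBuffer() { return buf[1 - front.load()]; }

    // Buffer the audio callback reads from.
    const std::vector<double>& playBuffer() const { return buf[front.load()]; }

    // Atomically publish the edited buffer.
    void apply() { front.store(1 - front.load()); }
};
```

If editing later becomes fully immediate, the same structure still helps: each per-sample edit can be committed as a tiny apply step.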
The software will be tested by letting a user try it out and evaluate:
- flexibility / expressiveness
- sound quality
- sound-annoyingness level
- Waterfall window; "Time domain edit" window; audio rendering.
- Apply-to window
- Frequency domain processing:
- "Frequency-domain edit" window
- More sophisticated DSP: overlap-add
- Add harmonic series