
Revision as of 16:08, 11 November 2009

Sound Explorer

Idea / Premise

Sound Explorer is an environment for exploring and shaping sounds in real time.

Motivation

As I looked at the waterfall plots generated by sndpeek and previous assignments in this class, I was fascinated by how much information about the audio being played could be taken in at one glance. Correlating the audio being heard with the visual display gave me a better understanding of the time and frequency properties of audio. I believe that understanding would deepen if the user were able to interact with the waveforms, modify them, and hear and see the results.

Product Description

Sound Explorer will allow the user to interactively construct and shape sets of waveforms. The results will be displayed on the waterfall display and played back in real time. The ways in which the waveform can be shaped are:

  • Frequency domain:
    • Generate a harmonic series starting at a given frequency
    • Control the amount of non-harmonicity (i.e. how much the partials deviate from multiples of the base frequency)
    • Generate white noise
    • Draw and apply a spectral envelope
  • Time domain:
    • Draw and apply a time-domain envelope

In addition, for most of the above shaping methods, there will be a way to control which part of the waveform they apply to. For example, it will be possible to apply a time-domain envelope to a subset of the spectrum.
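To make the harmonic-series shaping concrete, here is a minimal sketch (not the project's actual code) of generating partial frequencies with a non-harmonicity control. The name `harmonicSeries` and the particular stretching rule (partial k sits at k·f0·(1 + amount·(k−1))) are assumptions for illustration; any rule that pushes partials off exact integer multiples would do.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build the partial frequencies of a harmonic series on base frequency f0.
// nonHarmonicity = 0 gives exact integer multiples; larger values stretch
// each higher partial progressively further from its harmonic position.
std::vector<double> harmonicSeries(double f0, int numPartials,
                                   double nonHarmonicity)
{
    std::vector<double> freqs;
    freqs.reserve(numPartials);
    for (int k = 1; k <= numPartials; ++k)
        freqs.push_back(k * f0 * (1.0 + nonHarmonicity * (k - 1)));
    return freqs;
}
```

With `nonHarmonicity = 0`, a 100 Hz base yields partials at 100, 200, 300, ... Hz; a small positive value sharpens the upper partials slightly, which is audible as a bell-like detuning.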


Design

  • Interface:
    The interface is made up of three elements: graphical display, keyboard input, and mouse input.
    • Graphical display
      The screen is divided into three parts:
      • Waterfall display: a waterfall plot of the audio currently being played
      • Edit window: this is where the user manipulates the waveform. At any time, this window is either in "additive mode" or "envelope mode". Each of those modes can apply to the frequency or time domain.
      • Apply-to window: in this window, the user highlights which portion of the audio to apply the edit to. The window can be in the time or frequency domain.
    • Mouse input: used to draw envelopes and select ranges (in the apply-to window)
    • Keyboard input: used to control modes and various parameters
  • Software
    • The program will use OpenGL for graphics, RtAudio for audio, and FFT routines from ChucK.
    • I will attempt to construct the program using the model / view / controller design pattern. The model, for example, will contain the current (and next) set of waveforms, the current envelope values, and the range and domain(s) to which the envelope(s) is applied.
  • Real-time interaction
    • The end goal is to have the user's interactions reflected in audio and graphics immediately. Initially, however, there may be two steps involved: 1) edit the waveform, 2) apply the changes and hear/see them.

Testing

The software will be tested by letting a user try it out and evaluate:

  • flexibility / expressiveness
  • sound quality
  • sound-annoyingness level

Team

Roy Fejgin

Milestones

  • 11/16/09:
    • Waterfall window; "Time domain edit" window; audio rendering
  • 11/23/09:
    • Apply-to window
  • 12/07/09:
    • Frequency-domain processing:
      • "Frequency-domain edit" window
      • More sophisticated DSP: overlap-add
      • Add harmonic series
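The overlap-add step in the last milestone can be sketched in a few lines. This is a simplified illustration, not the planned implementation: it stands in a flat per-frame gain for the real FFT-based spectral processing, uses a Hann window at 50% overlap (which sums to a constant in the fully overlapped region), and sums the windowed frames back into the output. The function name and parameters are assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Overlap-add skeleton: split the input into hop-spaced frames, "process"
// each frame (here just a gain, where the FFT work would go), apply a Hann
// window, and sum the overlapping windowed frames into the output.
std::vector<double> overlapAdd(const std::vector<double>& x,
                               std::size_t frameSize, double gain)
{
    const double PI = 3.14159265358979323846;
    std::size_t hop = frameSize / 2;               // 50% overlap
    std::vector<double> y(x.size(), 0.0);
    for (std::size_t start = 0; start + frameSize <= x.size(); start += hop) {
        for (std::size_t n = 0; n < frameSize; ++n) {
            // Periodic Hann window; at 50% overlap these sum to 1.
            double w = 0.5 - 0.5 * std::cos(2.0 * PI * n / frameSize);
            y[start + n] += gain * w * x[start + n]; // window, process, add
        }
    }
    return y;
}
```

Because adjacent windows cross-fade into each other, per-frame spectral edits splice together without the clicks that hard frame boundaries would cause; only the first and last half-frames lack full overlap.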