
Pictone - Vision Based Music System

Music 256A Fall 2009 Final Project. Shiwei Song (shiweis)

Introduction

Pictone.png

Modern computers have enabled novel ways for people to interact with and generate sound. My project is to build an instrument where the inputs are pictures drawn with pen and paper. The system takes a video feed of simple shapes and lines drawn on paper and converts them into notes of different instruments. The audience sees the performer's drawings and a visualization of the notes, and hears the corresponding sonic results.

My initial inspiration came from the Birds on the Wires video. The creator converted a picture of birds on wires to musical notes and produced a short piece of music that is both simple and beautiful. I felt that creating sound through drawings has the following advantages:

  • Most people can write or draw simple shapes. This system invites anyone to try and make some music/noise (and have fun!).
  • The pictorial music language is novel to the audience, so they will be constantly surprised or left wondering what the next output will be.
  • The characters/pictures in the music language can themselves have some sort of meaning (perhaps when combined), so both visual and sonic messages are conveyed.
  • "Live coding" by drawing is fairly unconventional and this itself may be an interesting experiment to perform or observe.
  • Drawing has a lot of freedom that cannot be mapped to traditional hardware devices. The performer can further influence the system by changing the way images are captured, such as by rotating the paper. Combined, these give the performer a lot of room for interaction with the instrument.

I hope that my project will serve as the basis for a system that will eventually demonstrate the advantages listed above.

System

Although the majority of the work for this project was on creating the software, the project can be viewed as having two disjoint parts: the picture language and the software system.

Language

The first component of the project is to figure out what kinds of things the user can draw and what kinds of results they might produce. Because of time constraints, the language supported so far is fairly simple and limited. The language consists of the following symbols.

Language.png

Although the interpretation of the language is flexible within the system, below is a description of the current mappings.

  • Triangle: plucked string note.
  • Square: saxophone note.
  • Slant: clarinet note.
  • Star: short pause/silence.
  • Pin/Lollipop: blowhole note.
  • Arbitrary Curve: this is a modifier symbol that does nothing by itself. When combined with other symbols, it modifies the pitch of the previous note depending on its curvature. It's as if the previous note is riding a roller coaster.

The orientation in which the above shapes are drawn is irrelevant. Their y-positions control the pitch of the notes.
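
For illustration only, here is a minimal sketch of how a detected shape's type and vertical position could be turned into a note. The Note struct, the frequency range, and the exact mapping direction are assumptions, not the project's actual code:

  // Hypothetical sketch of the shape-to-note mapping described above (C++).
  enum ShapeType { TRIANGLE, SQUARE, SLANT, STAR, PIN, CURVE };

  struct Note {
      bool   rest;        // true for the star (pause) symbol
      int    instrument;  // which STK instrument the shape selects
      double frequency;   // pitch derived from the shape's vertical position
  };

  // 'y' is the center of the shape's bounding box in image coordinates
  // (y grows downward); 'imageHeight' is the captured frame height.
  Note shapeToNote(ShapeType type, double y, double imageHeight) {
      Note n;
      n.rest = (type == STAR);
      n.instrument = (int)type;                   // e.g. TRIANGLE -> plucked string
      double t = 1.0 - y / imageHeight;           // 0 at the bottom, 1 at the top
      n.frequency = 220.0 + t * (880.0 - 220.0);  // illustrative linear range
      return n;
  }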

Software System

The software is designed to be extensible with swappable parts. It includes the vision system for capturing and recognizing drawings, the synthesis system that parses detected shapes and generates sound, and the main routine for tying everything together and displaying the visualization. When idle, the software simply captures camera input and displays the video feed. When the user presses the capture key, the software goes through the following steps:

  • The camera stores the picture and passes it to the vision system for recognition.
  • The vision system passes the results back to the main routine.
  • The picture with detection results is displayed. The results are also passed to the synthesis system for parsing and storage.
  • The synthesis system converts the vision results into internal music commands and starts playing them in a loop.
  • Meanwhile, the main routine syncs with the synthesis system and displays visualizations based on what is currently playing.

The user can repeat the above process. The hope is that one can modify or upgrade one of these systems without breaking the "loop."
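
To make the flow concrete, here is a rough sketch of the capture-key path and the per-frame sync. The class and function names (Camera, Detector, Synth, showDetections, drawNoteParticles) are placeholders for illustration, not the project's actual interfaces:

  #include <opencv2/opencv.hpp>
  #include <vector>

  // Placeholder types standing in for the real camera, vision, and synthesis classes.
  struct Result {                       // one detected shape, as described in the Vision section
      int shapeType;
      cv::Rect box;
      std::vector<cv::Point> curvePoints;
  };
  struct Camera   { cv::Mat grabFrame(); };
  struct Detector { std::vector<Result> detect(const cv::Mat& image); };
  struct Synth    { void parse(const std::vector<Result>& results);
                    void startLoop();
                    int  currentShape() const; };
  void showDetections(const cv::Mat& image, const std::vector<Result>& results);
  void drawNoteParticles(int shapeIndex);

  // Steps 1-4: grab the frame, run detection, display the results, hand them to the synth.
  void onCaptureKey(Camera& camera, Detector& detector, Synth& synth) {
      cv::Mat picture = camera.grabFrame();
      std::vector<Result> results = detector.detect(picture);
      showDetections(picture, results);
      synth.parse(results);
      synth.startLoop();
  }

  // Step 5: every display frame, ask the synth which shape is playing and draw particles there.
  void onDisplayFrame(Synth& synth) {
      drawNoteParticles(synth.currentShape());
  }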

The software uses the following libraries/technologies:

  • OpenGL for display and rendering visualizations.
  • OpenCV for vision/symbol detection.
  • STK for instrument sound synthesis.
  • RtAudio for producing sound output.
  • SOIL for loading PNG textures.

Each component of the software is described below.

Vision

The vision system is powered by OpenCV and converts a still image of the user's drawing into detected shapes. It takes an image as input and returns a vector of results, each consisting of: shape type, bounding rectangle, and (for curves) points along the contour. The system consists of the 'Detector' class and its subclasses. Each subclass can implement its own method for detection as long as it still returns the results in the same format.

I tried several detection methods and ended up using Hu moments of contours for detection. When the detector is initialized, templates of the shapes to detect are loaded and their Hu moments are calculated. To detect shapes in a new image, the input image is first converted to a binary edge image using the Canny edge detector. From this, contours (shapes) are detected and stored in a hierarchical tree structure. The bounding boxes of the contours are extracted and stored in the result vector. Shape types are identified by picking the template whose Hu moments are closest to each contour's. If a shape is labeled as a curve, its path is extracted and also stored in the results vector. Finally, this vector is returned.
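
A minimal sketch of that pipeline using OpenCV's C++ API follows. The thresholds, the distance measure between Hu moments, and the function shape are illustrative assumptions rather than the project's exact code:

  #include <opencv2/opencv.hpp>
  #include <cmath>
  #include <vector>

  // Compute the seven Hu moments of one contour.
  static std::vector<double> huOf(const std::vector<cv::Point>& contour) {
      double hu[7];
      cv::HuMoments(cv::moments(contour), hu);
      return std::vector<double>(hu, hu + 7);
  }

  // Find contours in the input image and label each with the index of the closest
  // template; templateHu holds the Hu moments precomputed at detector initialization.
  void detectShapes(const cv::Mat& input, const std::vector<std::vector<double> >& templateHu) {
      cv::Mat gray, edges;
      cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
      cv::Canny(gray, edges, 50.0, 150.0);                 // binary edge image

      std::vector<std::vector<cv::Point> > contours;
      std::vector<cv::Vec4i> hierarchy;                    // hierarchical tree of contours
      cv::findContours(edges, contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);

      for (size_t i = 0; i < contours.size(); ++i) {
          cv::Rect box = cv::boundingRect(contours[i]);    // bounding rectangle for the result
          std::vector<double> hu = huOf(contours[i]);
          size_t best = 0;
          double bestDist = 1e30;
          for (size_t t = 0; t < templateHu.size(); ++t) { // pick the closest template
              double d = 0.0;
              for (int k = 0; k < 7; ++k) d += std::fabs(hu[k] - templateHu[t][k]);
              if (d < bestDist) { bestDist = d; best = t; }
          }
          // ...store {best, box, and the contour points if the shape is a curve}
          // in the result vector returned to the main routine.
      }
  }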

Synthesis

The synthesis system uses STK to generate sounds for the different instruments. It takes the results from the vision system as input and parses them into internal music commands. The system consists of the 'Synth' class and its subclasses. Each subclass can implement its own parsing and mapping to STK.

The input results are first sorted by their x-positions (from left to right). Then, for each shape starting from the left, the corresponding commands are pushed onto the command queue. A command contains information about the instrument state (e.g. clarinet_on), frequency, and duration (until the next command). For each sample, the synthesizer issues the STK calls and 'tick' corresponding to the current command in the command queue.
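
The per-sample logic might look roughly like the sketch below. It assumes a recent STK build (stk namespace, headers under stk/) and uses only a clarinet voice; the Command fields mirror the description above, but the function itself is illustrative, not the project's code:

  #include <stk/Clarinet.h>   // header path depends on how STK is installed
  #include <vector>

  // One entry in the command queue: instrument state, frequency, and how many
  // samples the command lasts (i.e. until the next command takes over).
  struct Command {
      bool   clarinetOn;
      double frequency;
      long   durationSamples;
  };

  // Called once per sample from the audio callback. Walks the command queue in a
  // loop, switching the instrument state at command boundaries and ticking STK.
  double synthTick(std::vector<Command>& queue, size_t& current, long& elapsed,
                   stk::Clarinet& clarinet) {
      if (queue.empty()) return 0.0;
      Command& cmd = queue[current];
      if (elapsed == 0) {                              // first sample of this command
          if (cmd.clarinetOn) clarinet.noteOn(cmd.frequency, 0.8);
          else                clarinet.noteOff(0.5);
      }
      if (++elapsed >= cmd.durationSamples) {          // advance to the next command,
          elapsed = 0;                                 // wrapping around so playback loops
          current = (current + 1) % queue.size();
      }
      return clarinet.tick();
  }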

Main Routine

The main routine is responsible for passing information between the camera, vision, and synthesis systems. It also deals with keyboard input and displaying the visualization, both of which go through OpenGL.

For each frame captured, it converts the frame from the OpenCV image representation to an OpenGL texture and displays it on the screen. When the user captures an image, it passes the image to the detector and synthesizer. While sound is playing, it asks the synthesizer which shape the current sound corresponds to. From this, it generates a simple visualization consisting of music note particles flying out of the shape that is being played.
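
The OpenCV-to-OpenGL handoff can be done by uploading the frame's pixel data into a texture each frame. A minimal sketch (texture creation, quad drawing, and the exact GL header path are omitted or assumed):

  #include <opencv2/opencv.hpp>
  #include <GL/gl.h>            // <OpenGL/gl.h> on Mac OS X

  // Upload an OpenCV BGR frame into an existing OpenGL texture so the video
  // feed (or the captured picture) can be drawn as a textured quad.
  void uploadFrame(GLuint textureId, const cv::Mat& frame) {
      glBindTexture(GL_TEXTURE_2D, textureId);
      glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       // cv::Mat rows may not be 4-byte aligned
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                   frame.cols, frame.rows, 0,
                   GL_BGR, GL_UNSIGNED_BYTE, frame.data);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  }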

The simple particle system is implemented as a vector of particle positions, colors, and transparencies. A new particle is spawned every several frames, and every frame each particle's position and transparency are updated. A particle's color and initial position are set when it is first displayed. The implementation is somewhat similar to the waterfall plot implementation in assignment 2.
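
A sketch of that update step is below. The spawn rate, drift, fade rate, and color choice are placeholders; only the overall structure (a vector of position/color/transparency entries updated once per frame) follows the description above:

  #include <cstdlib>
  #include <vector>

  // Minimal particle state: position, color (fixed at spawn), and transparency.
  struct Particle {
      float x, y;
      float r, g, b;
      float alpha;
  };

  // Called once per rendered frame: spawn a new particle every few frames at the
  // currently playing shape, then drift existing particles and fade them out.
  void updateParticles(std::vector<Particle>& particles, int frameCount,
                       float shapeX, float shapeY) {
      if (frameCount % 5 == 0) {
          Particle p = { shapeX, shapeY,
                         std::rand() / (float)RAND_MAX, 1.0f, 1.0f,   // illustrative color
                         1.0f };                                      // fully opaque at spawn
          particles.push_back(p);
      }
      for (size_t i = 0; i < particles.size(); ++i) {
          particles[i].y += 2.0f;        // float away from the shape
          particles[i].alpha -= 0.01f;   // fade out over time
      }
  }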

Experience

The user interacts with the system through pen, paper, and a webcam. When the software is launched, the user sees the webcam feed along with a green box indicating the detection area.

Shot1.png

The user starts off by drawing an arrangement of shapes as described in the Language section above. The shapes can be arranged freely on the page. After this is done, the user simply holds the paper up to the webcam.

Shot2.png

When the user presses the spacebar, the current frame is captured and fed into the system for detection and synthesis. Once that is complete, the system starts playing the corresponding sound and displays a simple visualization consisting of music note particles floating out of the shape being sonified.

Shot3.png

The user can press the spacebar again to go back to the video feed and repeat the process.

Besides picking the arrangement of shapes to draw, the user can further explore the system by scanning the drawing in different ways. Since the orientation of the shapes doesn't matter, holding the paper upside down makes the system produce a reversed version of the original sound. By holding the paper higher or lower relative to the camera, the user can increase or decrease the overall pitch of the sound. By tilting the paper at different angles, the relative pitch of the notes will change (and in some cases their ordering will change as well).

Milestones & Future Work

The milestones for this project were:

  • 11/16/2009: Basic system setup with simple detector and synthesizer with no visualization.
  • 11/30/2009: Detector complete. Complex synthesizer supporting all shapes in the language. Integration with OpenGL and trivial visualization.
  • 12/10/2009: System tweaks, refactoring, simple particle visualization.

This project is just the beginning of a system that can have many extensions. Some of the possible future work includes:

  • More complex language with loops/track/speed control, combined symbol behavior, more complex arrangements, colors.
  • More advanced synthesizer supporting pitch correction, semi-auto music generation, more instruments, polyphony, tracks.
  • Experimenting with different visualizations, such as making it into a game, an animated story, or an interactive visualization.

Results

While testing and building the system, I found a few things that did not occur to me before:

  • It is actually very difficult to make something sound good without some sort of automated music generation.
  • In practice, when "live coding", it is tedious to draw complex shapes or switch pens (to get different colors), so the language should be fairly simple and painless to draw.

I also found some interesting usage cases, mentioned in the Experience section above, including:

  • Flipping the paper upside down to play arrangement backwards.
  • Moving the paper up or down to increase or decrease the pitch.
  • Tilting the paper to change the notes and ordering of the arrangement.

For example, the drawing below will produce this recording (https://ccrma.stanford.edu/~shiweis/256a/project/20091210%20014355.m4a), while flipping it will produce this one (https://ccrma.stanford.edu/~shiweis/256a/project/20091210%20014502.m4a).

Demo1.png

Similarly, the drawing below will produce this recording (https://ccrma.stanford.edu/~shiweis/256a/project/20091210%20014624.m4a), tilting it 45 degrees to the left will produce this one (https://ccrma.stanford.edu/~shiweis/256a/project/20091210%20014725.m4a), and tilting it 45 degrees to the right will produce this one (https://ccrma.stanford.edu/~shiweis/256a/project/20091210%20014806.m4a).

Demo2.png


Source

The source can be downloaded at http://github.com/InsipidPoint/pictone

Note: because of last-minute edits and tweaks, the current code is not very "clean."