Revision as of 19:05, 13 October 2009

Lab 4: Accelerometers, Audio Filters, and (optionally) Multitouch
Due on Wednesday, October 21st at 5PM

For this lab you need an iPod Touch (loaners are available) or an iPhone running TouchOSC, and Max/MSP or Pd on a computer.

Get Connected and Get Oriented

iPod Touches, like many newer portable electronic devices, contain a 3-axis accelerometer, which lets designers sense both the orientation of the device with respect to gravity and physical gestures made with the device.

For this lab, instead of writing our own iPod applications (the subject of an entire course), we will use an iPod app called TouchOSC to send accelerometer data from the iPod to Max or Pd, where we will process the data and make sound. TouchOSC is installed on the iPods available for use in this lab.

(If you prefer to use your own iPod or iPhone, you are welcome to use one of the other apps which perform similar functions. Here is a review of some options: http://heuristicmusic.com/blog/?p=124.)

Get the iPod talking to your computer via Open Sound Control

  • Make sure your computer and iPod are on the same network.
    • You may need to log your iPod into CCRMA Guestnet. To do this, open safari and try to access a new web page. If you can, you are logged in. If you can't you will be asked to login.
  • Find out the name or IP address of your computer.
  • On the iPod start TouchOSC and press the small 'i' to get to preferences. Select 'Network' and set Host to the name of your computer (e.g. 'cmn37.stanford.edu' or 'mylaptop.local'.)
  • Set the outgoing port to 8000.
  • Open accel_osc.pd (INSERT LINK AND MAX PATCH HERE), and make sure that accelerometer messages from TouchOSC are being received in Pd/Max.
  • Use print objects to examine the incoming OSC messages.
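To see what those print objects are showing you, it helps to know what an OSC message looks like on the wire: a null-padded address string, a null-padded type-tag string, then big-endian arguments. Here is a minimal stdlib-only Python sketch of decoding one such message; the `/accxyz` address and three-float payload are an assumption about TouchOSC's accelerometer messages, so check the actual addresses with print.

```python
import struct

def parse_osc_message(packet):
    """Decode a simple OSC message: padded address string,
    padded type-tag string, then big-endian arguments."""
    def read_padded_string(data, i):
        end = data.index(b"\x00", i)
        s = data[i:end].decode("ascii")
        i = end + 1
        i += (-i) % 4          # OSC strings are padded to 4-byte boundaries
        return s, i

    address, i = read_padded_string(packet, 0)
    typetags, i = read_padded_string(packet, i)
    args = []
    for tag in typetags.lstrip(","):
        if tag == "f":                          # 32-bit big-endian float
            args.append(struct.unpack_from(">f", packet, i)[0])
            i += 4
        elif tag == "i":                        # 32-bit big-endian int
            args.append(struct.unpack_from(">i", packet, i)[0])
            i += 4
    return address, args

# An accelerometer-style message (address and values assumed for illustration)
packet = (b"/accxyz\x00"                       # address, padded
          + b",fff\x00\x00\x00\x00"            # type tags, padded
          + struct.pack(">fff", 0.0, 0.5, -1.0))
print(parse_osc_message(packet))               # → ('/accxyz', [0.0, 0.5, -1.0])
```

In practice Pd's OSC objects (or Max's) do this unpacking for you; the sketch just shows why the messages arrive as an address plus a list of floats.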

Get Oriented

  • Look at the acceleration values and graphs as you move the iPod around.
  • What are the units that acceleration is reported in?
  • Figure out the direction and orientation of each (x,y,z) accelerometer axis. How do you do this?
  • Draw a picture of the x,y, and z axes and their orientation as they relate to the iPod. (For lab submission you can include this picture or describe verbally what you discover.)

Naive Gesture Detection and Thresholding

Here's a way to make a simple gesture detector. One obvious difference between fast jerky movements and slow gradual movements is sudden jumps in the acceleration values. As discussed in lecture, jerk is the derivative of acceleration.

Since our accelerometer data is discrete in time (i.e., we get one value every some number of milliseconds), we can approximate the derivative by taking the difference between successive values. (Technically, this is a "one-zero highpass filter.") You can use the included delta abstraction, which simply returns the difference between successive input values.

Start with accel+osc, connect a delta object to one or more acceleration values, pick a threshold that corresponds to a satisfying level of jerkiness, and use 'threshold' in Pd (or 'mapping/threshold' if 'threshold' doesn't work) or 'past' in Max to make a sound when you exceed the threshold. If you like, you can give the user additional control of the sound based on the direction and/or magnitude of the jerk.

Congratulations, you have now written a jerk detector.
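The delta-plus-threshold idea above can be sketched in plain Python. The threshold of 0.8 and the sample values are arbitrary illustrations, not values from the patch:

```python
class Delta:
    """One-zero highpass: returns the difference between successive inputs."""
    def __init__(self):
        self.prev = None
    def __call__(self, x):
        d = 0.0 if self.prev is None else x - self.prev
        self.prev = x
        return d

def jerk_detector(samples, threshold=0.8):
    """Yield (index, jerk) whenever |delta| exceeds the threshold."""
    delta = Delta()
    for n, x in enumerate(samples):
        d = delta(x)
        if abs(d) > threshold:
            yield n, d          # here you would trigger a sound

# Slow tilting (small deltas) followed by a sudden jerk at sample 5
accel_x = [0.00, 0.05, 0.10, 0.12, 0.15, 1.20, 0.30]
print(list(jerk_detector(accel_x)))   # fires at samples 5 and 6 only
```

Note that the detector fires on both the spike and the return from it, with opposite signs; that sign is what you could use to react differently to the direction of the jerk.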

Audio Filtering

The purpose of this part of the lab is to get a sense for the effect of different kinds of filters, and to start thinking about (audio) signals as being comprised of frequency components. Don't worry, we'll come back to accelerometers later.

This section does not use the iPod. You may want to quit TouchOSC to save battery life.

Open the Pd patch
audio-filters/filter-demo

This patch allows you to select one of four input sources (white noise, a sine wave, a pair of sine waves, or a collection of oud samples) and pass the sound through one of seven possible filters:

  • No filtering
  • High pass filtering with Pd's (one-pole) hip~ object
  • High pass filtering with a "cascade" of four hip~ objects
  • Low pass filtering with Pd's (one-pole) lop~ object
  • Low pass filtering with a cascade of four lop~ objects
  • Band pass filtering with Pd's bp~ object
  • Band pass filtering with a cascade of Pd's bp~ objects

(EDIT THESE FOR MAXMSP!!!)

Play with this patch to get a feel for the effect of different kinds of filters on different input sounds.

Start with the white noise source. (Be very careful with the output gain! White noise is extremely loud per unit of amplitude!) This is the best input for hearing the differences between different kinds of filters because it contains all frequencies. (It's called "white" noise by analogy to white light, which contains all frequencies, i.e., all colors of light.) Turn the master volume and/or your headphones way down, then select input source zero (white noise) and filter type zero (unfiltered). Beautiful, huh?

Now step through the other six filter types, playing with the parameters of each. Sweep the high-pass cutoff frequency. Sweep the cascaded high pass cutoff frequency and note that the four filters have "four times as much" effect on the sound as the single hip~ object. Ditto for the low pass objects. For the band pass, start with the default Q factor of 1 and sweep the center frequency. Then make the Q factor small and sweep the frequency again. Then make the Q factor large and sweep the frequency again. Now you know what these filters do.
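If you want a numerical feel for what a one-pole filter does, here is a rough Python sketch. It is a textbook one-pole smoother with a complementary highpass (input minus the lowpassed signal), under the assumption that this captures the spirit of lop~ and hip~; it is not a clone of Pd's exact recipes:

```python
import math

class OnePoleLowpass:
    """y[n] = y[n-1] + a * (x[n] - y[n-1]), with a set from the cutoff.
    A textbook one-pole smoother, not an exact clone of Pd's lop~."""
    def __init__(self, cutoff_hz, sample_rate):
        self.a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        self.y = 0.0
    def __call__(self, x):
        self.y += self.a * (x - self.y)
        return self.y

fs = 44100.0
lop = OnePoleLowpass(1000.0, fs)

# Feed a constant (0 Hz) input: the lowpass lets it through...
for _ in range(2000):
    low = lop(1.0)
high = 1.0 - low   # ...while the complementary highpass removes it

print(round(low, 3), round(high, 3))   # DC survives the lowpass, dies in the highpass
```

Cascading four of these, as the filter-demo patch does, just applies the same recipe four times in a row, which is why the rolloff is steeper.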

Repeat all of the above on the single sine wave. Note that no matter what filtering you do, all you change is the gain (and phase) of the sine wave. (Geek moment: the reason is that all of these filters are "linear and time-invariant.") This is very important: filters don't add anything; they just change the balance of what's already there. Note that lowpass filtering reduces the volume of high-frequency sine waves but has less effect on the volume of low-frequency sine waves, etc.

Now try this on a pair of sine waves spaced pretty widely apart in frequency (for example, 100 Hz and 2000 Hz). Hear how the different filters affect the relative volumes of the two sine waves.

Finally, play some of the oud samples (via the same QWERTY keyboard triggering mechanism) through various filters. Experiment with transposition and how it interacts with filtering. In particular, transpose the samples down by a large amount and see how highpass cuts all the sound (as with a low-frequency sine wave), while lowpass emphasizes the "bassiness" of the sound.

Filtering Acceleration Data to Distinguish Tilt from Sudden Motion

  • Relaunch TouchOSC and open the guppy patch.
  • Turn on OSC in the patch, then turn on audio, and then, holding the iPod in a neutral position, hit the calibrate button and wait a few seconds. You may have to do the calibration a few times until the tilt values are in the range [-1,1].
  • Now move the iPod around and note that the tilt appears pretty much exclusively in the "tilt" outputs, and that the sudden motion appears pretty much exclusively in the "sudden motion" outputs. Amazing! How do they do that?

The answer is filtering. Caveat: although we believe filtering is the best way to solve this gesture discrimination problem, this particular implementation is somewhat of a hack. The reason is that all of Pd/Max's filtering tools work only on audio signals, so the guppy patch (in particular, the accel-xover subpatch) converts the incoming OSC messages into audio signals, smooths them out, then lowpasses and highpasses them (at 5 and 20 Hertz, respectively) to differentiate tilt (the low-frequency component) from sudden movements (which have lots of high-frequency components).

The moral of the story is that control signals have frequency components too, just like audio signals, and you've got a lot more power if you can think about them in the "frequency domain", just like it's powerful to think about audio signals in the frequency domain. Great. Now go read Julius Smith's books.

  • Experiment with different cutoff frequencies for the crossover.
  • Examine briefly the stillness detector in the lower right corner. How does this work?
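The crossover idea can also be sketched numerically. This uses the same generic one-pole smoother idea, with the 5 and 20 Hz cutoffs from the patch; the control rate, input values, and exact filter recipe are illustrative assumptions, not the patch's actual DSP:

```python
import math

def one_pole_coeff(cutoff_hz, rate_hz):
    """Smoothing coefficient for a one-pole lowpass at the given cutoff."""
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / rate_hz)

def split_tilt_and_jerk(samples, rate_hz, lo_hz=5.0, hi_hz=20.0):
    """Slow smoother (below lo_hz) -> tilt; whatever the faster hi_hz
    smoother hasn't caught up with -> sudden motion."""
    a_lo = one_pole_coeff(lo_hz, rate_hz)
    a_hi = one_pole_coeff(hi_hz, rate_hz)
    y_lo = y_hi = samples[0]
    tilt, sudden = [], []
    for x in samples:
        sudden.append(x - y_hi)        # highpassed residue: fast changes only
        y_hi += a_hi * (x - y_hi)
        y_lo += a_lo * (x - y_lo)
        tilt.append(y_lo)              # lowpassed signal: slow orientation
    return tilt, sudden

# 100 Hz control rate: a steady 0.3 g tilt with one sharp spike in the middle
rate = 100.0
samples = [0.3] * 50 + [1.5] + [0.3] * 50
tilt, sudden = split_tilt_and_jerk(samples, rate)
print(round(tilt[-1], 2), round(max(sudden), 2))
```

The steady tilt ends up almost entirely in the tilt output, while the spike shows up almost entirely in the sudden-motion output, which is the behavior you just observed in guppy.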

Make Some (musically-expressive, gesture-controlled) Noise!

Put it all together. Create an interaction in which sound is controlled by physical gesture in some way that you find interesting. You can begin by conjoining guppy and filter-demo if you like, but you are welcome to use any method for analyzing accelerometer data or creating sound.

Think about the relationship you want to enable between movement and sound. Are the qualities of movement reflected in the qualities of the sound? Is this important to you?

Make sure to use appropriate mappings from measured quantities to sound parameters. For example if you are controlling the frequency of an oscillator from left/right tilt, you may want to first calculate the angle of tilt from acceleration, and then map logarithmically to frequency.
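The tilt-to-frequency mapping suggested above might look like this sketch. The frequency range and the assumption that the calibrated axis reads roughly -1 to 1 g at full tilt are illustrative choices:

```python
import math

def tilt_to_frequency(accel_g, f_min=110.0, f_max=880.0):
    """Map one calibrated acceleration axis (about -1..1 g at full tilt)
    to oscillator frequency, logarithmically in pitch."""
    g = max(-1.0, min(1.0, accel_g))         # clamp sensor noise
    angle = math.asin(g)                     # tilt angle in radians
    t = (angle / (math.pi / 2) + 1.0) / 2.0  # normalize to 0..1
    return f_min * (f_max / f_min) ** t      # equal tilt -> equal pitch ratio

print(round(tilt_to_frequency(-1.0)))  # flat on one side -> 110 Hz
print(round(tilt_to_frequency(0.0)))   # level -> geometric mean, 311 Hz
print(round(tilt_to_frequency(1.0)))   # flat on the other side -> 880 Hz
```

Because the mapping is exponential in the normalized tilt, equal changes in tilt produce equal musical intervals, which usually feels much more natural than a linear frequency mapping.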

Some possibilities you may want to explore:

  • Invent a specific gesture, and then figure out how to detect it.
  • Include in your interaction the use of the sliders, buttons, or multi-touch 2D sliders in TouchOSC. You will need to figure out what OSC messages are being sent.
  • Is there a way to get velocity or position from acceleration?

Please demo your result to the instructors, and in your lab writeup describe how you approached this (open-ended) design problem and what techniques you used to implement it.


(This lab was written by Luke Dahl on 10/13/09. Huge portions were imported from Michael Gurevich's accelerometer lab from 2007 and before.)