250a Accelerometer Lab
From CCRMA Wiki
Lab 4: Accelerometers and Audio Filters
Due on Wednesday, October 20th at 5PM
Set up for lab
Download this software to your BeagleBoard and untar it: lab4.tgz. You can do this by typing 'wget https://ccrma.stanford.edu/courses/250a/labs/lab4.tgz' at the command prompt, and untarring it with 'tar zxf lab4.tgz'. (If you're having issues with your BeagleBoard, do this on your laptop or on a CCRMA computer.)
Flash your Arduino with Standard Firmata.
Set up your accelerometer so that the X, Y, and Z outputs go into A0, A1, and A2.
How to recognize a jerk
Here's a way to make a simple gesture detector. One obvious difference between fast jerky movements and slow gradual movements is sudden jumps in the acceleration values. As discussed in lecture, jerk is the derivative of acceleration.
Since our accelerometer data is discrete in time (i.e., we get one value every few milliseconds), we can approximate differentiation by taking the difference between successive values. (Technically, this is a "one-zero highpass filter.") Look at the included delta abstraction, which simply outputs the difference between successive input values.
After taking the difference you can detect when the difference is greater than some threshold.
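The difference-plus-threshold idea can be sketched in ordinary code. Below is a minimal Python illustration (not part of the lab's Pd patches; the function names and threshold value are made up) of differencing an acceleration stream and flagging samples where the jump exceeds a threshold:

```python
# Hypothetical Python sketch of the Pd 'delta' + 'threshold' combination.

def delta(values):
    """Yield differences between successive samples -- a one-zero highpass."""
    prev = None
    for v in values:
        if prev is not None:
            yield v - prev
        prev = v

def detect_jerks(accel_samples, threshold=0.3):
    """Return indices where the acceleration difference exceeds the threshold."""
    return [i for i, d in enumerate(delta(accel_samples)) if abs(d) > threshold]

# Slow, gradual motion produces small differences; a jerk produces a big one:
samples = [0.0, 0.05, 0.1, 0.15, 0.9, 0.95, 1.0]
print(detect_jerks(samples))  # → [3] (the jump 0.15 -> 0.9 trips the detector)
```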
- Start with accel.pd.
- In pd you can use 'delta' to find the difference and the object 'threshold' (or 'mapping/threshold' if it doesn't recognize 'threshold').
- Have your patch make a sound when the threshold has been surpassed.
- You can give the user additional control of the sound based on the direction and/or the magnitude of the jerk, if you like.
Congratulations, you have now written a jerk detector.
- Modify the jerk detector to account for the fact that our accelerometer is a 3.3V device.
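As a hint for the 3.3V issue: on a 5V Arduino the 10-bit ADC spans 0–1023 over 0–5V, so a 3.3V accelerometer rests near 1.65V rather than mid-scale. This Python sketch shows one way to recenter and rescale; the voltages and scaling here are assumptions, so check your own board and how Firmata reports values:

```python
# Illustrative rescaling for a 3.3 V accelerometer read by a 5 V, 10-bit ADC.
# Assumed setup; your reference voltage and Firmata scaling may differ.

ADC_MAX = 1023       # 10-bit ADC full scale
VREF = 5.0           # ADC reference voltage (volts)
SUPPLY = 3.3         # accelerometer supply voltage (volts)

def counts_to_normalized(raw):
    """Map raw ADC counts to [-1, 1], with 0 at the accelerometer's
    resting midpoint of SUPPLY/2 volts."""
    volts = raw * VREF / ADC_MAX
    return (volts - SUPPLY / 2) / (SUPPLY / 2)

print(round(counts_to_normalized(338), 2))  # ~0.0 at rest (about 1.65 V)
```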
Audio Filters
The purpose of this part of the lab is to get a sense for the effect of different kinds of filters, and to start thinking about (audio) signals as being composed of frequency components. Don't worry, we'll come back to accelerometers later.
Open the Pd patch audio-filters/filter-demo.
This patch allows you to select one of four input sources (white noise, a sine wave, a pair of sine waves, or a collection of oud samples) and pass the sound through one of six possible filter settings:
- No filtering
- High pass filtering
- High pass filtering with a "cascade" of four hip~ objects
- Low pass filtering
- Low pass filtering with a cascade of four lop~ objects
- Band pass filtering
Play with this patch to get a feeling of the effect of different kinds of filters on different input sounds.
Start with the white noise source. (Be very careful with the output gain! White noise is extremely loud per unit of amplitude!) This is the best input for hearing the differences between different kinds of filters because it contains all frequencies. (It's called "white" noise by analogy to white light, which contains all frequencies, i.e., all colors of light.) Turn the master volume and/or your headphones way down, then select input source zero (white noise) and filter type zero (unfiltered). Beautiful, huh?
Now step through the other five filter types, playing with the parameters of each. Sweep the high-pass cutoff frequency. Sweep the cascaded high pass cutoff frequency and note that the four filters have "four times as much" effect on the sound as the single hip~ object. Ditto for the low pass objects. For the band pass, start with the default Q factor of 1 and sweep the center frequency. Then make the Q factor small and sweep the frequency again. Then make the Q factor large and sweep the frequency again. Now you know what these filters do.
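To see why the cascade has "four times as much" effect, it helps to model it. Here is a rough Python sketch of a one-pole lowpass in the spirit of lop~, and a four-stage cascade of it; the coefficient formula is a common approximation, not Pd's exact implementation:

```python
import math

# Rough model of a one-pole lowpass (spirit of Pd's lop~), and a cascade
# of several identical stages, like four lop~ objects patched in series.

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100.0):
    """y[n] = y[n-1] + a * (x[n] - y[n-1]), with a set by the cutoff."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

def cascade_lowpass(signal, cutoff_hz, stages=4, sample_rate=44100.0):
    """Run the same one-pole lowpass 'stages' times in series."""
    for _ in range(stages):
        signal = one_pole_lowpass(signal, cutoff_hz, sample_rate)
    return signal
```

Feeding a 5 kHz sine through a 500 Hz lowpass, the single stage attenuates it to roughly a tenth of its amplitude, while the four-stage cascade attenuates it by that factor four times over (i.e., four times as much in dB).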
Repeat all of the above on the single sine wave. Note that no matter what filtering you do, all you change is the gain (and phase) of the sine wave. (Geek moment: this is because all of these filters are "linear and time-invariant.") This is very important: filters don't add anything; they just change the balance of what's already there. Note that lowpass filtering reduces the volume of high-frequency sine waves but has less effect on the volume of low-frequency sine waves, etc.
Now try this on a pair of sine waves spaced pretty widely apart in frequency (for example, 100 Hz and 2000 Hz). Hear how the different filters affect the relative volumes of the two sine waves.
Finally, play (some of) the oud sample(s) through various filters.
Filtering Acceleration Data to Distinguish Tilt from Sudden Motion
- Open the guppy.pd patch.
- Turn on the input messages in the patch, then turn on audio. Then, holding the accelerometer in a neutral position, hit the calibrate button and wait a few seconds. You may have to calibrate a few times until the tilt values are in the range [-1, 1].
- Now move the accelerometer around and note that the tilt appears pretty much exclusively in the "tilt" outputs, and that the sudden motion appears pretty much exclusively in the "sudden motion" outputs. Amazing! How do they do that?
The answer is filtering. Caveat: although we believe filtering is the best way to solve this gesture-discrimination problem, this particular implementation is something of a hack. The reason is that all of Pd's filtering tools work only on audio signals, so the guppy patch (in particular, the accel-xover subpatch) converts the incoming data into audio signals, smooths them out, then lowpasses and highpasses them (at 5 and 20 Hz, respectively) to separate tilt (the low-frequency component) from sudden movements (which have lots of high-frequency components).
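The accel-xover idea can be sketched outside Pd as well. This Python version, built from the description above (the one-pole filter form, the control rate, and the exact cutoffs are assumptions), splits one accelerometer axis into a slow "tilt" stream and a fast "sudden motion" stream:

```python
import math

# Sketch of a tilt / sudden-motion crossover: lowpass at ~5 Hz for tilt,
# highpass at ~20 Hz for sudden motion. Hypothetical Python stand-in for
# the accel-xover subpatch, not its actual implementation.

def one_pole_coeff(cutoff_hz, rate_hz):
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / rate_hz)

def crossover(samples, rate_hz=100.0, tilt_cutoff=5.0, motion_cutoff=20.0):
    """Split one accelerometer axis into (tilt, sudden_motion) streams."""
    a_lo = one_pole_coeff(tilt_cutoff, rate_hz)
    a_hi = one_pole_coeff(motion_cutoff, rate_hz)
    lo = hi = 0.0
    tilt, motion = [], []
    for x in samples:
        lo += a_lo * (x - lo)      # slow-moving average -> tilt
        hi += a_hi * (x - hi)
        tilt.append(lo)
        motion.append(x - hi)      # input minus its smoothed copy -> highpass
    return tilt, motion
```

A sustained change of orientation (a step in the input) ends up entirely in the tilt stream once things settle, while the sudden-motion stream spikes at the moment of the step and then decays back to zero.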
The moral of the story is that control signals have frequency components too, just like audio signals, and you've got a lot more power if you can think about them in the "frequency domain," just as it's powerful to think about audio signals in the frequency domain. Great. Now go read Julius Smith's books: Introduction to Digital Filters, Mathematics of the DFT, Physical Audio Signal Processing, and Spectral Audio Signal Processing.
- Experiment with different cutoff frequencies for the crossover.
- Examine briefly the stillness detector in the lower right corner. How does this work?
Make Some (musically-expressive, gesture-controlled) Music!
Put it all together. Create an interaction in which sound is controlled by physical gesture in some way that you find interesting. You can begin by conjoining guppy and filter-demo if you like, but you are welcome to use any method for analyzing accelerometer data or creating sound.
Think about the relationship you want to enable between movement and sound. Are the qualities of movement reflected in the qualities of the sound? Is this important to you?
Make sure to use appropriate mappings from measured quantities to sound parameters. For example, if you are controlling the frequency of an oscillator from left/right tilt, you may want to first compute the angle of tilt from the acceleration values, and then map it logarithmically to frequency.
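The tilt-to-frequency mapping suggested above might look like this in Python; the axis choice and the 110–880 Hz range are illustrative assumptions, not requirements:

```python
import math

# Example mapping: tilt angle from two acceleration components, then a
# logarithmic map to frequency so equal tilt steps give equal musical
# intervals. Range and axes are made up for illustration.

def tilt_angle(ax, az):
    """Left/right tilt in radians from two acceleration components
    (at rest, gravity supplies the reference vector)."""
    return math.atan2(ax, az)

def angle_to_freq(angle, lo_hz=110.0, hi_hz=880.0):
    """Map angle in [-pi/2, pi/2] to frequency on a logarithmic scale."""
    t = (angle + math.pi / 2) / math.pi          # normalize to [0, 1]
    return lo_hz * (hi_hz / lo_hz) ** t

print(round(angle_to_freq(0.0), 1))  # level -> geometric mean, ~311.1 Hz
```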
Some possibilities you may want to explore:
- Invent a specific gesture, and then figure out how to detect it.
- Incorporate the sliders, buttons, or other Arduino inputs into your interaction. You will need to figure out how to set up these inputs and receive their values from the Arduino.
- Is there a way to get velocity or position from acceleration?
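On the velocity question, here is a sketch of one answer and its main pitfall: naive integration of acceleration drifts badly, because any constant bias (e.g., imperfect removal of gravity) accumulates without bound. A "leaky" integrator is one common workaround; the parameter values below are arbitrary:

```python
# Estimating velocity from acceleration by numerical integration.
# Illustrative sketch only; real position tracking from a consumer
# accelerometer is much harder than this.

def integrate(samples, dt, leak=0.995):
    """Leaky integration: v[n] = leak * v[n-1] + a[n] * dt.
    With leak < 1, accumulated error is slowly forgotten, so a small
    constant bias produces a bounded offset instead of endless drift."""
    v, out = 0.0, []
    for a in samples:
        v = leak * v + a * dt
        out.append(v)
    return out
```

With a small constant bias as input, the plain integral (leak = 1.0) grows forever, while the leaky version settles at a bounded value; the same trick applies again if you integrate velocity to estimate position.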
Please demo your result to the instructors, and in your lab writeup describe how you approached this (open-ended) design problem and what techniques you used to implement it.
(This lab was written by Luke Dahl on 10/13/09, and modified by Wendy Ju on 10/13/10. Huge portions were imported from Michael Gurevich's accelerometer lab from 2007 and before.)