The goal of the Khatzotzra’ash instrument design is to let a trumpet player use their expressive technique to drive virtual instruments in a live setting. This is done by sampling the buzz of the mouthpiece and using it to drive physical models of those instruments, thus reintroducing the essential “noise” disregarded by contemporary MIDI controllers for sound synthesis.
Ink Drop is an audio/visual controller made from two containers of water. Using live camera feeds, visual designs made from ink on the surface inform a musical composition, while audio is analyzed in real time and informs visual effects. The final product is presented through speakers and a projection. Ink Drop is a controller which utilizes the slower continuous information in a visual composition and attempts to present a more unified audio/visual experience as both are synthesized by a single artist.
The original question this project aimed to answer was “How can a band most dynamically play drums without a drummer?” It has since expanded into a general MIDI controller for your feet, equipped with 4 force sensors and 2 sliders. Each of these sensors can be mapped to any parameter in Ableton Live to control any effect, instrument, or sound, adding dynamics and other effects with your feet while playing guitar, keys, or any other instrument.
It’s strange that we often enter automation data into our music production programs manually. We take forever to get everything feeling just right, and we also miss good possibilities while in this tedious “data entry” mindset. Existing MIDI controllers are one solution; however, most still rely on buttons, knobs, and sliders to control continuous parameters, interactions that often lack the physical feedback and feel of a real musical instrument. The Elastic Fantastic uses stretchy, tension-based interaction to control electronic instruments in real time. By constricting movement of the wrists and hands, it purposefully limits the body’s freedom of motion to force focus onto subtle movements between the two forearms. The lit-up visual feedback engages and also clarifies behavior for both the player and an audience.
The Contrenot (pronounced “con-truh-no”) is a musical interface designed to loosely emulate the mechanics of a bowed upright bass. Its inspiration comes from other electronic musical interfaces such as the Ondes Martenot, the Gametrak (as used by SLOrk), and the Otamatone. The Contrenot has a form factor similar to that of an electric upright bass. The neck consists of a high-resolution linear softpot and an FSR for monophonic pitch and aftertouch detection. The body houses a pull-string sensor, created from a gutted tape-measure spring, a high-resolution incremental rotary encoder, and custom 3D-printed parts. The pull-string sensor detects velocity and motion very precisely, allowing very nuanced control similar to bowing a bass. All sensors and components are powered by an Arduino Uno. Data is sent over USB serial to a Linux computer, where it is processed and mapped to sound parameters in Sporth, a stack-based audio synthesis language. For more information, visit pbat.ch/proj/contrenot
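The neck-to-pitch stage described above can be sketched in a few lines. This is an illustrative Python sketch, not the actual Sporth patch; the sensor ranges and the exponential (fretless) pitch map are assumptions.

```python
# Hypothetical sketch of the Contrenot's neck mapping: a linear softpot
# reading becomes a continuous frequency, and an FSR reading becomes an
# aftertouch amplitude. ADC range and frequency bounds are assumed values.

def softpot_to_freq(reading, lo_hz=41.2, hi_hz=164.8, adc_max=1023):
    """Map a softpot reading (0..adc_max) to a frequency.

    The neck is fretless, so we interpolate exponentially, i.e. linearly
    in pitch, between the open and top-of-neck frequencies.
    """
    t = max(0, min(reading, adc_max)) / adc_max
    return lo_hz * (hi_hz / lo_hz) ** t

def fsr_to_amp(reading, adc_max=1023):
    """Map aftertouch pressure to a 0..1 amplitude."""
    return max(0, min(reading, adc_max)) / adc_max
```

A serial line such as `512,300` would then decode into one frequency (roughly midway up the neck, in pitch) and one amplitude per update.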
Chris Lortie, Bradford Mills, Nick Virzi
The Cyber-Armonica is a crank-operated rotating cylinder carrying 5 soft potentiometers, each covering a different band of sound. These soft pots are woven diagonally around a plastic cylinder in barbershop-pole fashion, so that one may play the instrument across the horizontal plane as one might play a keyboard, while the sounds themselves slowly change as the instrument rotates vertically.
This project seeks to translate dance motion into expressive music through the meaningful sonification of data gathered from body-mounted sensors during live performance. Sensors include three 9-degree-of-freedom IMUs, one on the head and one on each wrist, and two muscle sensors, one on each leg. Challenges have included transmitting data from all of the sensors to a computer running Max/MSP through an Arduino-compatible board using WiFi, and employing that data in the generation of compelling musical sound.
A. Ronneburg, G. Spellman, D. Mei, A. Kim
The goal of our project is to create a portable, intuitive vocoder that singers can use to enhance their performances in a way that both looks and feels good, and does not detract or distract an audience from the performance. The biggest challenge in creating the vocoder gloves was making the electronics as unobtrusive as possible. It would have been easiest to use wires and connectors to create the sensor circuits, but instead we opted for conductive thread and conductive ribbon.
Walker Davis & Joe Ferriso
The Sonic Paintbrush seeks to convert color into sound using an RGB color sensor and a bend sensor embedded in the brush bristles. The handle of the brush also has an FSR that controls a kick-drum tempo. The serial data transmitted from the sensors through the Arduino is unpacked and organized to trigger composed notes within Max/MSP. The challenge is to make a compelling visual color composition translate into a comparably satisfying sound composition.
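One plausible color-to-note mapping, sketched in Python rather than the project's actual Max/MSP patch: convert the RGB reading to a hue and quantize the hue onto a pentatonic scale. The scale choice, base note, and step count are assumptions for illustration.

```python
# Hypothetical sketch of a color-to-note mapping for an RGB sensor.
import colorsys

PENTATONIC = [0, 2, 4, 7, 9]  # major pentatonic scale degrees, in semitones

def rgb_to_midi_note(r, g, b, base_note=60):
    """Map an 8-bit RGB reading to a MIDI note number.

    Hue sweeps through two octaves of the pentatonic scale, so nearby
    colors land on nearby notes.
    """
    hue, _, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    step = int(hue * len(PENTATONIC) * 2) % (len(PENTATONIC) * 2)
    octave, degree = divmod(step, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]
```

For example, pure red (hue 0) lands on the base note, while green and blue land progressively higher in the scale.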
Strings is an instrument that uses the natural motion of plucking and pulling on rubber strings to control sound in a way which is musically and haptically fulfilling. What this instrument brings that keys, knobs and faders lack is a sense of physicality which results from the elastic force of the rubber strings.
Jack Seibert and Ashwin Agarwal
In our interactive sound art piece, the listener is surrounded by a big structure and chaotic, free-falling balls. Over time, the listener becomes more of a creator as they learn to harness the machine. Our piece confronts the parts of life that seem unnerving or out of your control.
We’re excited to see the unique responses and interactions that each person brings. We learned a lot from this project, from the woodworking to the programming to the artistic visioning, and we hope you enjoy it.
The Musical Chair
Gabriel Barajas and Andreas Garcia
Our goal is to create an instrument/musical interface that allows for a significant amount of body interaction, while still having functionality similar to more traditional instruments. The chair is a melody-oriented instrument with keyboard-like controls, but with sound effects that can easily be manipulated by certain physical movements. It was most difficult to adapt intuitive sounds to each of the various movements possible in a chair. It was also a significant challenge to design our device so that it could be removable and adaptable to different chairs.
Ambisonic Mixing Bowl
Nick Gang, Wisam Reid
This project seeks to provide spatial audio artists and engineers with a tactile interface for real-time ambisonic panning. The spherical shape of the dome mimics the possible source locations surrounding the listener. The sound source symbols inform the user of the state of the system, and allow for immediate changes in three dimensions with one gesture.
Sonia YH Chen
The goal of this project is to make an instrument a vocalist could use if, one day on stage, they cannot use their voice. In this instrument, the pitch, the timbre, and the vowel are controlled by different gestures and poses, which loosely correspond to what a vocalist usually does on stage. In addition, a sound-responsive lighting microphone stand was designed to light up the performance. The most difficult design challenge of the project is making a good mapping logic between the gesture, the sound, and the light. The human voice-like sound generation process is also challenging.
Alison Rush, David Grunzweig, and Trijeet Mukhopadhyay
The Granuleggs is a new music controller for granular synthesis which allows a musician to explore the textural potential of their samples in a unique and intuitive way, with a focus on creating large textures instead of distinct notes. Each controller is egg-shaped, designed to fit the curve of your palm as you gyrate the eggs and tease your fingers into finding the perfect soundscape.
Inter-String Time Delay Zither
The Inter-String Time Delay Zither is a plucked string instrument that changes its sound based on how fast you pluck it. Its strings are arranged into note pairs (groups of two strings that are tuned to the same note), and using a system of pickups under two bridges, it detects the difference in pluck time for the left and right strings of each note pair. The strings’ vibrations are picked up with piezos and routed through some audio processing in Max/MSP, where the inter-string time delay is used to drive audio effects like beating and distortion. The interaction of manipulating sound via the speed at which you pluck each note thus affords an additional level of control beyond those present in a traditional zither.
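The core measurement above can be sketched simply: given the sample indices at which the two piezos of a note pair crossed an onset threshold, derive the delay and map it to an effect amount. This is an illustrative Python sketch, not the zither's Max/MSP patch; the 50 ms full-scale range is an assumption.

```python
# Hypothetical sketch of the inter-string time delay measurement.

def inter_string_delay_ms(left_onset, right_onset, sample_rate=44100):
    """Delay between the two plucks of a note pair, in milliseconds,
    given the onset sample index detected under each bridge."""
    return abs(left_onset - right_onset) / sample_rate * 1000.0

def delay_to_effect_amount(delay_ms, full_scale_ms=50.0):
    """Map the delay onto 0..1: near-simultaneous plucks give subtle
    processing, slower double-plucks drive beating/distortion harder."""
    return min(delay_ms, full_scale_ms) / full_scale_ms

# Onsets 1323 samples apart at 44.1 kHz are 30 ms apart:
d = inter_string_delay_ms(1000, 2323)
amount = delay_to_effect_amount(d)
```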
Griffin Stoller, Ned Danyliw, Brian Bolze
We are taking the familiar skeleton of a traditional MIDI keyboard and completely rethinking how sound is generated from a key press. Instead of simply starting and stopping a note (noteOn and noteOff in MIDI), our keyboard provides a continuous position and velocity reading of each key, which opens up a whole new spectrum of live sound shaping. The key physical interaction we are trying to capture is the tiny, minute movements of the musician’s fingers on the keys themselves, without having to turn any knobs or sliders on a panel or screen. Rather than implementing traditional synthesis techniques such as additive or subtractive synthesis, the sounds used in Continüa Key are derived from semi-modular wavetable synthesis. This means that waveshapes are blended and changed continuously in sync with the motion and actions of the performer.
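The blending idea can be illustrated with a minimal sketch, assuming a single continuous key-depth value crossfading between two wavetables. This is not the Continüa Key's actual engine, just the general wavetable-crossfade technique it describes.

```python
# Minimal wavetable-blend sketch: depth 0.0 plays table A, depth 1.0 plays
# table B, and intermediate key positions crossfade linearly between them.
import math

N = 256
saw = [2 * i / N - 1 for i in range(N)]                    # table A: sawtooth
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]   # table B: sine

def blended_sample(phase, depth):
    """Read one sample at phase (0..1) from the depth-weighted blend
    of the two tables."""
    i = int(phase * N) % N
    return (1.0 - depth) * saw[i] + depth * sine[i]
```

As the key travels, `depth` tracks its continuous position, so the waveshape morphs smoothly instead of switching at noteOn.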
Music in Motion (MIM)
Max Farr, Tyler Sadlier
“Music in Motion” is an interactive procedural music generator that transforms color and motion into shifting sound compositions. Performers hold and move colored balloons in front of a webcam and a white backdrop. A Max/Jitter computer vision patch processes and sends the webcam’s visual information into multiple Max4Live sequencer and instrument patches. Different balloon colors are tied to procedural “instruments”: a given balloon’s color determines its timbre and the “sequence bank” it utilizes. The motion and position of the balloon control the filters, effects, and sequence parameters of the instrument. The instrument patches are designed to play in conjunction with one another, and so MIM yields the most interesting (and fun) results when multiple users participate at the same time.
Charlie Sdraulig and Sam Alexander
A single suspended cymbal, augmented with sensors attached via magnets. The sensors include a piezo contact microphone routed through an audio interface and an accelerometer sending data via an Arduino. The sensors trigger sample playback and manipulation within a Max/MSP patch.
Coastal Distillation
Ethan Geller, Matt Horton
Perküssalz is an instrument that uses the voice to control the timbre of a drum set and vice versa. Using piezo disks attached to the drums and two vocal microphones, we are able to create a variety of sounds and methods of input, including simulated strings tuned by the vocal input and struck by the drums. Other methods of input include various amplitude gates on the vocal inputs, triggered by the drum playing, as well as a direct combination of the drum and vocal signals. This leads to a wonderful, collaborative instrument with a great many expressive opportunities.
A remote tide sensor collects the ocean’s undulations, from the smallest waves to seasonal tides. Uploaded continuously, the data distills days of wave information into subtle musical information, live. Driven by an inexpensive DIY tide gauge and connected locally via cellular data networking, the scale of the ocean can be leveraged in a variety of organic musical effects. Select a storm for powerful distortion or a foggy evening for ethereal reverb, and use time scales from minutes to months as interchangeably as grains of sand on a beach.
HumPad, Synthum, Padorama
Freddy Avis, Benjamin Williams
The HumPad performance model consists of two individual instruments – the Padorama, a footpad driven by foot location and pressure, and the Synthum, a throat sensor excited by vocal cord vibration and amplitude. Using both devices, the performer can maximize live parameter control while minimizing movement and stage ego.
The Synthum was built to enable direct control of the frequency and amplitude of a virtual instrument by humming (or otherwise vocalizing). These parameters are detected from audio captured by a contact microphone affixed to the performer’s throat. The Synthum stands in contrast to vocoders and other voice transformers in that the timbre of the resulting sound is intended to be entirely separate from the timbre of the performer’s voice – the voice is a controller for the synthesizer, rather than the synthesizer acting as a filter or transformation for vocal sound.
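The blurb does not say how the Synthum extracts frequency and amplitude; one common approach is a naive autocorrelation pitch estimator plus an RMS reading over short frames of the contact-mic signal. The sketch below illustrates that approach only, with assumed frame and rate values.

```python
# Illustrative frequency/amplitude extraction from a mono sample frame.
import math

def estimate_pitch(frame, sample_rate, lo_hz=80.0, hi_hz=400.0):
    """Return the frequency whose lag maximizes the autocorrelation,
    searched over the plausible humming range lo_hz..hi_hz."""
    n = len(frame)
    min_lag = int(sample_rate / hi_hz)
    max_lag = int(sample_rate / lo_hz)
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return sample_rate / best_lag

def rms_amplitude(frame):
    """Root-mean-square level of the frame, used as the amplitude control."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# A hummed tone with a 40-sample period at 8820 Hz is 220.5 Hz:
frame = [math.sin(2 * math.pi * i / 40) for i in range(800)]
```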
Padorama offers versatile, foot-triggered parameter control to enhance just about any musical performance. Using four FSR sensors, the Padorama tracks your feet’s X-Y location on its surface, displaying it with LED lights, while simultaneously reading the amount of pressure exerted on the pad. The result? Easy, continuous three-parameter control at the base of your feet, allowing for exceptional flexibility, efficiency, and creativity on stage and in the studio.
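One way four FSRs can yield a continuous X-Y position plus total pressure is a pressure-weighted centroid over the sensor positions. The corner layout and normalized readings below are assumptions about the Padorama, sketched for illustration.

```python
# Hypothetical centroid mapping for four corner-mounted FSRs on a unit pad.

# (x, y) coordinates of the four sensors, in reading order
CORNERS = [(0, 0), (1, 0), (0, 1), (1, 1)]

def pad_state(readings):
    """readings: four 0..1 pressures, one per corner in CORNERS order.
    Returns (x, y, total_pressure); an idle pad reports the center."""
    total = sum(readings)
    if total == 0:
        return 0.5, 0.5, 0.0
    x = sum(r * cx for r, (cx, cy) in zip(readings, CORNERS)) / total
    y = sum(r * cy for r, (cx, cy) in zip(readings, CORNERS)) / total
    return x, y, total
```

Pressing only the near-left corner reports (0, 0); equal pressure on all four reports the center with the summed pressure as the third parameter.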
Synth Guitar is a synthesizer built on the idea of breaking free of the limitations of traditional guitar sound while simplifying the playing technique, enabling musicians to produce creative pieces with ease. By constructing the synthesizer around a guitar configuration instead of a keyboard, several advantages of the guitar, such as sliding along the strings or the relative pattern of the strings, are highlighted and preserved. To produce all kinds of synthesized sounds, we replace the strings of the guitar with linear potentiometers and force sensors to capture the data triggered by the musician, and send the data into Max/MSP to produce the desired sounds and effects.
Victoria Grace, Joel Chapman
Sonic Anxiety is an ironic twist on performance anxiety, where the performance is the sound of my anxiety while locked in a cage. Sensors track my breathing to control the harmony and timbre while my pulse sets the pace and drum rhythms of the piece.
Velokeys is a velocity-sensitive QWERTY keyboard for desktop jamming. Millions of people spend every day training their brains with a QWERTY key layout – at work, at school, and at home. This project is meant to meld the expressivity of a piano key with the familiarity of a computer keyboard for those who want to bring a little more music to their typing.
Integrated controls of MIDI parameters using buttons mounted on knobs mounted on sliders mounted on LED-enhanced electronics mounted on acrylic.
String is a controller used to generate waveforms, curves, and envelopes using a camera, coloured string, and Max/MSP. Users draw curves representing objects such as a filter envelope using coloured string. The coloured curve is then captured by a camera and deciphered into a digital curve to be rendered out to audio by Max/MSP.
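The "deciphering" step can be sketched as a toy: given a binary mask of which pixels the coloured string occupies, reduce each image column to a single 0..1 envelope value. Camera capture, colour keying, and the Max/MSP rendering are omitted; this is an assumed reduction, not the project's actual patch.

```python
# Toy curve extraction: one envelope value per image column.

def mask_to_envelope(mask):
    """mask: list of rows (top row first) of booleans, all equal length,
    where True marks a string pixel. Returns one value per column, with
    1.0 at the top of the image and 0.0 at the bottom. Columns with no
    string pixels repeat the previous value, so small gaps are bridged."""
    height, width = len(mask), len(mask[0])
    env, prev = [], 0.0
    for col in range(width):
        rows = [r for r in range(height) if mask[r][col]]
        if rows:
            mean_row = sum(rows) / len(rows)
            prev = 1.0 - mean_row / (height - 1)
        env.append(prev)
    return env
```

The resulting list of values can then be interpreted as a breakpoint envelope for a filter or amplitude curve.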
The Processed Typewriter
For this project I am working towards a performance in the late spring during a residency with famed soprano Tony Arnold. Rather than a typical accompaniment for a solo soprano piece, such as a piano, I thought it would be much more interesting and musically fertile to have her sing lyrics which are actively being typed in the background. Not only is the text being transformed into sound through the vocal line, but also through the hammering away of the typewriter. With live processing, the typewriter sound transforms from static to incredibly nuanced, allowing for a great deal of timbral variation. The form is 32 microludes (short but intense musical gestures lasting 12-30 seconds each). 32 sound presets, with controls and settings adjusted beforehand, provide the large-scale shape between the microludes.
Tact is a project designed to make sound design and beat construction more intuitive. The instrument is a glove mounted with contact microphones that allows the wearer to record, transform and perform natural sounds at the touch of a finger. A wireless iPad interface provides the wearer with sound-shaping controls, playback effects and glove feedback. Amplify your interaction with the world via tactile sampling and contact playback with Tact.
Tower of Power
Graham Davis, Connor Kelley
Tower of Power (ToP for short) is an interactive tower of wood that generates sound and sweet LEDs. Inspired by the Hunchback of Notre Dame and 1970s funk, ToP is the auditory column for our generation.
Byron Walker, Maria Malone, Jack Cook
The Electrocoustic Jellymuse is an interactive sound sculpture featuring a giant glowing, burbling jellyfish. You can play with it by touching the tentacles together, or pouring water on its head! Friendly and organic, the Jellymuse is fun for all ages.
Flex Effects is a glove designed for singers to easily and immediately apply effects to their vocals, for real-time output. Each finger controls the application of an effect. When the fingers are curled into a fist, each effect has minimal or no effect. When each finger is straightened, the associated effect is applied at a greater intensity.
Bass Guitar Pedal (BGP)
Michael Mendoza, Darrell Ford, Keanu Bellamy
The Bass Guitar Pedal is a music controller that can be used effortlessly by electric bass guitar players during a performance, taking advantage of the fact that they do not use their feet while playing. Using the BGP, the electric bass guitar player can interact with various sliders, buttons, pedals, and light sensors to manipulate their live sound while playing. The buttons, pedal, slider, and light sensors modify reverb, echo, and modulation effects, allowing the user to alter songs in real time.
Fang Yi Lin
Musicatini is a set of cocktails that include interaction and sound in its recipe. Each cocktail is an experience that requires all 5 senses to enjoy. The performance starts when the cocktail is being made, and ends when an audience member finishes drinking the ‘interactive’ cocktail.
My project is an adapted design of a laser harp. A laser harp is an archetypal musical instrument, similar to a traditional Irish harp or lyre, but with the strings of the traditional harp design replaced by laser beams, each paired with an individual photosensor. Rather than modifying a traditional harp body, I designed (with a lot of help from Romaine), modelled, and laser-cut a wooden dual-frame for the body of the harp.
Alon Devorah, Gabriele Carotti-Sha, Andrew Forsyth
LET THERE BE SOUND!
…And there was sound.
The staff chooses the wizard. That much has always been clear to those of us who have studied the ancient runes. These connections are deep and complex. An initial attraction, and then a mutual quest for experience, the staff learning from the wizard, the wizard from the staff.
But what good are our mere words? Humans, dwarves, elves, and any other beings we are humbled and grateful to have in our presence here today, come! Let us show you. Let us share in the adventure and discovery. Let us take you away to a world where you will be immersed entirely in the magic of sound.
O^3: A Controller Based Around Concentric Circles
David Bordow, Erich Peske
Our controller is a prototype based around one of the most fundamental controls: a turntable. Our goal is to implement the turntable in a way that has never been done. Using a stacked-gear mechanism, we are able to put one disk on top of another on top of another. The user will theoretically be able to hook up the O^3 to any editable parameter allowing fluid, tactile, and precise control for use in sampling, ambient, or whatever the imagination creates. We hope to continue our work on the controller into the next quarter and will strive to improve the product over time.
Sound of Sirens
Gina Collecchia, Kevin McElroy, Dan Somen
“Siren Organ” is an electro-mechanical instrument consisting of compressed air and motor-driven disks with evenly spaced perforations. Three different controllers were designed, each with a dedicated disk (the siren). These controllers contain a network of air tubes to direct air flow from a compressor to individual rings on the sirens. These rings have different numbers of equally spaced holes to create a fundamental frequency, and varying radii of the holes to create harmonics.
The motor speed can be controlled by a fader, creating frequency sweeps that are classic to the siren sound. A custom manifold of valves, buttons, and air pathways as well as ball valves and blow guns control the pressure of the compressed air. Hence, the performer can control volume in addition to pitch. A master valve connects to each controller and splits into 4 hoses + valves, to provide an upper limit of the possible pressure. The hose leading from the compressor is also split into 3 channels, feeding each controller.
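The pitch relationship described above can be written down directly: a ring of equally spaced holes passing over an air jet produces a fundamental at the hole count times the rotation rate. A quick sketch, with illustrative numbers rather than the actual disk specifications:

```python
# Siren pitch: each revolution interrupts the air jet once per hole,
# so the fundamental is hole count x revolutions per second.

def siren_freq_hz(num_holes, rpm):
    """Fundamental frequency of one siren ring at a given motor speed."""
    return num_holes * rpm / 60.0

# e.g. a 44-hole ring spinning at 600 rpm sounds at 440 Hz,
# and sweeping the fader (motor speed) sweeps the pitch proportionally.
```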
This is an extension of a design by Bart Hopkin from the papers “Sirens, Part One” (Vol. 12, #4, pp. 13-18, June 1997) and “Sirens, Part Two” (Vol. 13, #1, pp. 19-22, Sept. 1997), both in Experimental Musical Instruments.
Micah Arvey, Rooney Pitchford, Zach Saraf
The Cyber Bully is a digital guitar effects pedal that uses a USB trackball for effect control rather than a conventional on/off toggle switch, ideal for experimenting with wacky and practical guitar sounds on the fly. The trackball controls four effects: velocity in the positive and negative y directions correlates to the strength of one effect each, while velocity in each x direction likewise correlates to an effect, combining four effects into one setting. A diagonal spin results in a mix of two effects based on the spin angle. Two switches allow for toggling through effect settings; one allows for bypass, and another triggers an auxiliary, instantaneous effect specific to each effect setting. One additional knob controls total volume, and two other knobs set a default level for each effect when the ball is still. Effect processing is run through Pure Data.
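The four-direction mapping described above can be sketched as a small function, in Python rather than the project's Pure Data patch: each spin direction drives one effect's strength, and a diagonal spin mixes two effects in proportion to its angle. Effect names and the gain constant are placeholders.

```python
# Hypothetical trackball-velocity-to-effect mapping for four effects.

def velocity_to_effects(vx, vy, gain=0.01):
    """Map a trackball velocity (counts/sec) to four 0..1 effect levels.
    Only the sign-matching direction drives each effect; levels clip at 1."""
    return {
        "effect_pos_y": min(max(vy, 0) * gain, 1.0),
        "effect_neg_y": min(max(-vy, 0) * gain, 1.0),
        "effect_pos_x": min(max(vx, 0) * gain, 1.0),
        "effect_neg_x": min(max(-vx, 0) * gain, 1.0),
    }

# A 45-degree up-right spin engages the +x and +y effects equally,
# while the two opposing-direction effects stay at zero:
levels = velocity_to_effects(50, 50)
```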
Emily Graber, David Grunzweig, Michael Mendoza, Kunal Datta
The Spaceball was designed by Music 250a students Emily Graber, Kunal Datta, Mike Mendoza and David Grunzweig. The device is intended to simplify 3D panning on large, multichannel sound systems. In the past, any type of panning for multichannel sound systems was done in software before the performance. Our goal was to design a logical interface to help users make the most of their sound system during a performance. The buttons moving around the surface of the Spaceball’s dome represent the perceived origin of the sound in space of a specific track, with the top of the dome representing the point directly above the user. The faders allow the user to control the volume of each individual track.
Brie Bunge, Sophia Westwood
We built a musical instrument in the form of a giant kaleidoscope. Two bike wheels, filled with colorful gels, spin to control the bass line and the melody. Light streams through the gels, reflecting in the kaleidoscope to produce brilliant visual effects that accompany the piece. We generate the harmony and melody via Hall effect sensors that detect magnets on the bike wheels and pass the input into a machine learning algorithm.
This project is actually a preview for Brandon Cheung’s Personal Statement in the Product Design program so there is very little that we can say about it.
Elliot Kermit-Canfield, Pablo Castellanos, Cooper Newby, Justin Li
Sonic Droplet is a self-contained interactive audio-visual sculpture composed of water-activated sensors mounted on a suspended, internally lit cube that glows in response to human interaction with the device. Meshing visual art with sound synthesis, Sonic Droplet generates beautiful music inspired by the sounds of water when its sensors are activated. Sonic Droplet will be an art installation at Stanford that invites viewers to wield streams of water to collaboratively create a symphony of sound and light.
Older Projects can be found at the following sites: