MakerFaire

From CCRMA Wiki (latest revision as of 22:40, 4 May 2016)
[[File:Granuleggs.jpg|750px]]

==Introduction==

The [http://ccrma.stanford.edu Center for Computer Research in Music and Acoustics] (CCRMA -- pronounced "karma") is an interdisciplinary center at Stanford University dedicated to artistic and technical innovation at the intersection of music and technology. We are a place where musicians, engineers, computer scientists, designers, and researchers in HCI and psychology get together to develop technologies and make art. In recent years, the question of how we interact physically with electronic music technologies has fostered a growing new area of research that we call Physical Interaction Design for Music. We emphasize practice-based research, using DIY physical prototyping with low-cost and open source tools to develop new ways of making and interacting with sound. At the Maker Faire, we will demonstrate the low-cost hardware prototyping kits and our customized open source Linux software distribution that we use to develop new sonic interactions, as well as some exciting projects that have been developed using these tools. Below you will find photos and descriptions of the projects and tools we will demonstrate.
  
Maker Faire website: [http://makerfaire.com makerfaire.com]
 
  
==Magnjo - The Magnetically Augmented Banjo==
 
[[Image:Magnjo.jpg]]
 
Magnjo is a fretless banjo that has magnets under its fingerboard to provide haptic feedback to a performer. The magnets are oriented in harmonic locations (similarly to frets) to inform the performer of tonal locations along the string. In order to feel the magnets, the performer must wear finger wrappings with iron fabric in them. Thus the performer's fingers are informed of the locations along the fingerboard corresponding to notes in the chromatic scale.
 
  
  
==The Blade Axe==

Romain Michon

The BladeAxe is an iPad-based musical instrument leveraging the concepts of “augmented mobile device” and “hybrid physical model controller.” Because it is almost fully standalone, it can be used easily on stage in a live performance simply by plugging it into a traditional guitar amplifier or any sound system. Its acoustical plucking system gives the performer extended expressive potential compared to a standard controller.

[[File:BladeAx2016.jpg|500px]]

==Granuleggs==

Alison Rush, David Grunzweig, and Trijeet Mukhopadhyay

The Granuleggs is a new music controller for granular synthesis that allows a musician to explore the textural potential of their samples in a unique and intuitive way, with a focus on creating large textures instead of distinct notes. Each controller is egg-shaped, designed to fit the curve of your palm as you rotate the eggs and let your fingers search out the perfect soundscape.

[[File:Granuleggs.jpg|500px]]
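Granular synthesis, the technique the Granuleggs controls, builds textures by overlap-adding many short, windowed slices ("grains") of a sample. A minimal sketch in pure Python -- the grain size, hop, and the idea of mapping hand position to a read point are illustrative assumptions, not details of the Granuleggs themselves:

```python
import math

def granulate(sample, grain_len=256, hop=64, pos=0.0):
    """Overlap-add four Hann-windowed grains drawn from one spot in `sample`.
    `pos` (0..1) picks where the grains are read from -- the kind of knob a
    handheld controller might map to motion (assumed, not the real mapping)."""
    start = int(pos * max(len(sample) - grain_len, 0))
    # Hann window suppresses clicks at the grain edges
    grain = [sample[start + i] *
             0.5 * (1 - math.cos(2 * math.pi * i / (grain_len - 1)))
             for i in range(grain_len)]
    out = [0.0] * (grain_len + 3 * hop)
    for g in range(4):                       # four overlapping grain copies
        for i, v in enumerate(grain):
            out[g * hop + i] += v
    return out
```

Sweeping `pos` while grains keep firing is what turns a static sample into an evolving texture.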

==BelugaBeats==

Jack Atherton

BelugaBeats is a whale-based step sequencer. You can add whales to 8 rows of a grid, and when a wave washes over them, they will sound their blowholes and play their notes. Changing a whale's size alters the pitch it sings. Occasionally, a whale will get distracted by a fish and play its note while underwater. Unfortunately, there is nothing you can do about this.

[[File:Belugabeats.png|500px]]
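Underneath the whales, a step sequencer is just a boolean grid scanned one column at a time. A minimal sketch -- the 8 rows come from the description above, while the 16 steps, the base pitch, and the major-scale row mapping are assumptions for illustration:

```python
ROWS, STEPS = 8, 16          # 8 rows per the description; 16 steps assumed

def make_grid():
    """Empty sequencer grid: grid[row][step] is True where a whale sits."""
    return [[False] * STEPS for _ in range(ROWS)]

def notes_at(grid, step, base_midi=48):
    """MIDI notes sounding when the 'wave' reaches a column. Each row maps to
    one scale degree (whale size would then shift the pitch further)."""
    major = [0, 2, 4, 5, 7, 9, 11, 12]       # one degree per row (assumed)
    return [base_midi + major[r] for r in range(ROWS) if grid[r][step]]
```

A playback loop would call `notes_at(grid, step)` each beat, advancing `step` modulo `STEPS`.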

==Chorest==

Jack Atherton

Welcome to your personal Chorest! Walk around, plant seeds, grow trees, and hear the wind in the air! Look down from a bird's eye view, or move through the trees on the chorest floor. When you breathe on your trees, they'll play a chord for you. Or, try singing to them! Trees need a noisy sound to grow -- try stroking your microphone! As you grow your trees, their sound will mature. Don’t forget, it’s always possible to plant new seeds and start anew. Occasionally, you may see a ghost from the past. Please, do not be alarmed.

[[File:Chorest.png|500px]]

==Leap the Dips==

Jack Atherton

This rolling ball sculpture invites participants to test their skill at "leaping the dips" on a copper model of the world's oldest operating roller coaster. The project's aesthetic draws from a practice certainly much older than the roller coaster -- teenage rebellion, and the ensuing adult panic over the activities of "kids these days." Marbles roll over tracks and supports that are fashioned out of soldered copper wire. The tracks feature dips that cause the marbles to lift off the track and crash back down, as was possible in early roller coasters without up-stop wheels on the underside of the track. Take care in the placement of your marble not to cause the marbles to completely fly off the track! The dips are fitted with sensors that drive an algorithm in Max/MSP for giving aural feedback and a cultural experience to the users.

[[File:LeapTheDipsPicture.jpg|500px]]

==Music Maker==

Sasha Leitman, John Granzow

Music Maker ([https://ccrma.stanford.edu/musicmaker https://ccrma.stanford.edu/musicmaker]) is a free online resource that provides files for 3D printing woodwind and brass mouthpieces and tutorials for using those mouthpieces to learn about acoustics and music. The mouthpieces are designed to fit into standard plumbing and automobile parts that can be easily purchased at home improvement and automotive stores. The goal is to make a musical tool that can be used as simply as a set of building blocks but that aims to bridge the gap between our increasingly digital world of fabrication and the real-world materials that make up our daily lives.

An increasing number of schools, libraries and community groups are purchasing 3D printers but many are still struggling to create engaging and relevant curriculum that ties into academic subjects. Making new musical instruments is a fantastic way to learn about acoustics, physics and mathematics.

[[File:P1000118.jpg|500px]] [[File:TrumpetWithBell.jpg|500px]]

==Cetacant==

Alison Rush

The cetacant is a musical instrument inspired by whales and designed to accompany a performance of Vela 6911, a piece by Victor Gama. The cetacant emulates features of the cetacean vocal apparatus, using tubes and chambers full of air, water, and oil to produce and amplify sounds. The attached photo is of a prototype; the instrument's final form will resemble a suspended sphere, evoking the bubbles produced by a vocalizing whale, or our watery planet as seen from space.

[[File:Cetacant-diagram1.jpg|500px]]

==Mephisto==

Romain Michon

Mephisto is a small, battery-powered, open-source, Arduino-based device. Up to five sensors can be connected to it using simple 1/8" stereo audio jacks. The output of each sensor is digitized and converted to OSC messages that can be streamed over a Wi-Fi network to control any Faust-generated app.

The goal of Mephisto is to provide an easy way for musicians to interact with the different parameters of a Faust object or any other OSC-compatible software during a live performance.

As a "DIY" open source project, Mephisto uses only open source hardware (Arduino, etc.) and was designed to be easily built by anyone.

[[File:Mephisto1.jpg|500px]]
[[File:Mephisto2.jpg|500px]]
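The OSC messages that a device like Mephisto streams are simple binary packets: a NUL-padded address string, a type-tag string, then big-endian arguments, per the OSC 1.0 specification. A sketch of encoding one sensor reading -- the address `/mephisto/sensor/0` is a hypothetical name, not Mephisto's actual address space:

```python
import struct

def osc_string(s):
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_float_message(address, value):
    """One float argument: padded address, ',f' type tags, big-endian float32."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

# A packet like this would be sent over UDP to the listening synth app.
msg = osc_float_message("/mephisto/sensor/0", 0.5)
```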

== Hearing Polyphony - A Game and Experiment! ==

Madeline Huberth

I work in the Neuromusic lab at CCRMA, whose overall goal is to investigate phenomena related to understanding music. Specifically, this past year I've been working on how our brain processes polyphony (hearing multiple melodies at once), and I will present a game I created that uses the stimuli from our experiment as a way of understanding the experiment. The experiment and our findings will also be on a poster.

Our experiment shows that your brain can detect changes in polyphonic patterns automatically - how easy is it for you to do it consciously? Play and find out!

[[File:Romain_cap.png|500px]]

==CollideFx==

Chet Gnegy

CollideFx is a real-time audio effects processor that integrates the physics of real objects into the parameter space of the signal chain. Much like in a traditional signal chain, a user can choose a series of effects and has real-time control over their various parameters. This work introduces a means of creating tree-like signal graphs that dynamically change their routing in response to position changes of the unit generators. The unit generators are easily controllable using the click-and-drag interface and respond using familiar physics, including friction and conservation of linear and angular momentum. With little difficulty, users can design interesting effects or, alternatively, fling a unit generator into a cluster of several others to obtain more surprising results, letting the physics engine do the decision making.

[[Image:Chet.png|400px]]
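The position-dependent routing can be illustrated with a toy model: order the unit generators by their distance to the output, so that dragging one toward the output moves it later in the signal path. This reduces CollideFx's tree graphs to a simple chain, and the coordinates are invented -- it is a sketch of the idea, not the actual algorithm:

```python
import math

def chain_order(units, output=(0.0, 0.0)):
    """units: {name: (x, y)}. Farther units feed in earlier; dragging a
    unit toward the output position moves it later in the chain."""
    def dist(item):
        (x, y) = item[1]
        return math.hypot(x - output[0], y - output[1])
    return [name for name, _ in sorted(units.items(), key=dist, reverse=True)]
```

Re-running `chain_order` every frame is what makes the routing follow the physics.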

==The Processed Typewriter==

Andrew Watts

Other than the human voice, musical instruments convey primarily abstraction through sound content. We interpret these sounds as music to varying degrees, but if one were to step away from the cultural associations, the noise would remain highly ambiguous. With a typewriter, the sounds inherent in the machine's use also contain linguistic meaning. Having this added layer to work with, a composer could pair the text and the sounds in a multitude of ways, even playing the ambiguity of semantic meaning against the ill-defined meaning of typewriter sounds. For this project I am thinking specifically towards a performance in the late spring during a residency with famed soprano Tony Arnold. Rather than a typical accompaniment for a solo soprano piece, such as a piano, it would be much more interesting and musically fertile to have her sing lyrics which are actively being typed in the background. Not only is the text being transformed into sound through the vocal line, but also through the hammering of the typewriter. Furthermore, these sounds and the images of the text appearing on the page would be processed, enabling a wide range of articulations, imagery, references, and audio sculpting.

[[File:Typewriter1.jpg|500px]] [[File:Typewriter2.png|500px]]

==Haptic Drum==

[[Image:HapticDrum.jpg]]

The word haptic comes from Greek and pertains to the sense of touch. The haptic drum harnesses the power of force-feedback to assist drummers in playing parts that would otherwise be difficult or impossible. This patent-pending device consists of a drum pad, a DSP, an amplifier, and a woofer. Whenever a drumstick impacts the drum pad, the woofer gives a small push in the upward direction, adding energy to the bouncing drumstick.
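The effect of the woofer's push can be seen in a toy model of the bounce: each impact scales the peak height by the square of a restitution coefficient (energy lost), and the push adds a fixed height back. The numbers are illustrative, not measurements of the device:

```python
def peak_heights(h0, restitution, boost, n_impacts):
    """Peak drumstick height after each impact. With boost=0 the bounce dies
    out geometrically; a small boost sustains it indefinitely."""
    h, peaks = h0, []
    for _ in range(n_impacts):
        h = h * restitution ** 2 + boost   # losses at impact, then the push
        peaks.append(h)
    return peaks
```

With a restitution of 0.7 the unassisted bounce loses about half its height per impact, while even a modest boost settles near boost / (1 - restitution²) and keeps the stick bouncing.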

==Kalichord==

[[Image:Kalichord.jpg]]

The Kalichord is a two-handed electro-acoustic instrument which acts as a controller for a physical string model. The user plucks virtual strings with one hand while playing bass lines with the other.

==String==

Joshua Coronado

String is a controller used to generate waveforms, curves, and envelopes using a camera, coloured string, and Max/MSP. Users draw curves representing objects such as a filter envelope using coloured string. The coloured curve is then captured by a camera and converted into a digital curve to be rendered to audio by Max/MSP.

[[File:Strings.JPG|500px]]
[[File:Strings_2.JPG|500px]]
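Recovering a digital curve from the camera image reduces, at its simplest, to scanning each pixel column for the string's topmost pixel. A sketch assuming the frame has already been thresholded into a 0/1 mask (the real system does its colour tracking in Max/MSP):

```python
def image_to_envelope(mask):
    """mask: rows x cols of 0/1 pixels. Returns one envelope value per
    column: 1.0 at the top row, 0.0 at the bottom, None where no string
    pixel was found in that column."""
    rows = len(mask)
    env = []
    for c in range(len(mask[0])):
        ys = [r for r in range(rows) if mask[r][c]]
        env.append(1.0 - ys[0] / (rows - 1) if ys else None)
    return env
```

The resulting per-column values can then drive a filter envelope or be read out directly as a waveform.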
  
==GRIP MAESTRO==

[[Image:Grip.jpg]]

The GRIP MAESTRO is a hand exerciser that has been modified into a resistive one- or two-handed musical controller. The GRIP MAESTRO Mach 1 uses magnets and Hall effect sensors to detect the position of each of the exerciser's six pad-springs and sends this information to ChucK to drive musical synthesis, or other sound manipulation. The GRIP MAESTRO Mach 2 (presently in development) expands on this control structure by adding accelerometer data into the mix and by giving the player two GRIP MAESTROs, one for each hand. The goal of this interface is to provide real force resistance as feedback to the performer and thereby establish an engaging relationship between the performer and his/her audience.

==Tibetan Singing Prayer Wheel==

Yoo-yoo Yeh

Inspired by the traditional Tibetan prayer wheel and the Tibetan singing bowl, we present the Tibetan Singing Prayer Wheel, a physical motion-sensing controller that lets you play virtual Tibetan singing bowls and processes your voice when you perform several gestures - spinning the wheel at different speeds, raising and lowering your arm, and tapping a button on the outside. A separate RF transmitter allows you to transition between three distinct sound design layers: (1) a Faust-STK physical model of a Tibetan singing bowl, (2) a delayed and windowed voice-processing layer, and (3) a novel modal reverb model of an actual Tibetan singing bowl that takes the voice as input. The system is designed to be easy for anyone to pick up and improvise with - go ahead and try it!

[[File:NIME_System_Architecture_v2.png|500px]]
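Modal models like the bowl layers above represent a resonant object as a sum of exponentially decaying sinusoids, one per vibrational mode. A minimal sketch -- the frequencies, amplitudes, and decay rates here are invented, not measured from a real bowl:

```python
import math

def modal_sample(t, modes):
    """One output sample at time t (seconds) from a struck modal model.
    modes: iterable of (freq_hz, amplitude, decay_per_second) tuples."""
    return sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
               for f, a, d in modes)

# Three made-up partials, loosely bowl-like: inharmonic and slowly decaying.
MODES = [(220.0, 1.0, 1.5), (605.0, 0.5, 2.0), (1180.0, 0.25, 3.0)]
signal = [modal_sample(n / 44100, MODES) for n in range(4410)]  # 100 ms
```

Using the singer's voice as the excitation instead of a single strike is what turns this kind of model into the "modal reverb" layer described above.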
  
== Turntable Gestures for Computer Mediated Performance==

[[Image:DJs.jpg]]

Imagine if a computer could recognize turntablist moves and, depending on your scratch, respond musically!

Turntable Gestures for Computer Mediated Performance, by Jason Sadural and Mike Gao, explores the exciting field of gesture recognition applied to turntable techniques.

Utilizing ChucK, Pd and Max/MSP, turntable gestures used by two scratch DJs are sent across the network via OSC to a single computer far, far away. When this computer recognizes that a particular move has been executed by one of the DJs, the gesture is decomposed, and the pertaining data is sent back across the network. Movements on the turntable are used to:

- seed algorithmic composition
- seed algorithmic sound design
- control Ableton Live
- control algorithmic adjustment of groove / micro-timing
- facilitate scalar navigation of a tonal space
- control visuals

As an added novel bonus, a regular optical mouse has been converted into an infrared turntable tracker, providing an inexpensive DIY interface for turntable gesture capture.

==Mariah==

Mathew Horton

Mariah sonifies the "diva finger wave." Mariah is a love letter to women like Whitney Houston, Christina Aguilera, and its namesake, Mariah Carey. Simply draw on the screen with your finger and sing a note. Instant riffs and trills, just like the great divas of the '80s, '90s, and '00s!

But the amazing, unexpected outcome of creating Mariah was a really interesting feedback instrument. Mariah takes in audio, pitch-shifts it, and plays it back. What you end up with at low sound levels is a "self-generating" feedback instrument that creates some really crazy effects.

[[File:2015-02-10_11.58.33.png|500px]]

==Hill==

Mathew Horton

Hill is a software application for musical and visual accompaniment of spoken word poetry. It is inspired by the minimalist video game Mountain, as well as Lauren Zuniga's poem "World's Tallest Hill". Hill builds a scene through which the text of a poem can move. The view of the scene can shift, and depending on the particular place from which the scene is viewed, the accompanying audio is transformed in different ways. Hill allows users to "compose" an accompaniment for a poem by adhering to a sort of "score."

[[File:Hill.png|500px]]
  
==JSASSynth - Modular Audio Synthesis for the Web==
 
JSASSynth is a graphical programming environment, similar to Max/MSP or PD, that is written for the web.  It uses HTML, CSS and Javascript for its user interface and Flash for its audio synthesis engine.  This means that it is a portable solution for modular audio synthesis, since most of the web browsing public already uses these features.  This project is an example of integrating synthesized audio into a webpage without depending on Flash for the user interface.  By using Flash's ExternalInterface API, JSASSynth can create and control a modular synthesizer that is playing inside of a hidden Flash animation.
 
  

==Tower of Power==

Graham Davis, Connor Kelley

Tower of Power (ToP for short) is an interactive tower of wood that generates sound and sweet LEDs. Inspired by the Hunchback of Notre Dame and 1970s funk, ToP is the auditory column for our generation.

[[File:Tower_of_power.png|500px]]
[[File:Tower_of_power2.png|500px]]

==Tact==

Tact is a project designed to make sound design and beat construction more intuitive. The instrument is a glove mounted with contact microphones that allows the wearer to record, transform and perform natural sounds at the touch of a finger. A wireless iPad interface provides the wearer with sound-shaping controls, playback effects and glove feedback. Amplify your interaction with the world via tactile sampling and contact playback with Tact.

==Catch Your Breath==

Catch Your Breath is an installation version of an interactive auditory bio-feedback project currently underway with the Stanford medical school, designed to reduce respiratory irregularity in patients undergoing 4D CT scans for oncological diagnosis. It is a potential means to reduce motion-induced distortion in CT and MRI images. The installation version consists of a pendant bearing a fiducial marker. This marker is detected by a webcam, and the motion of the subject's breathing is tracked and interpreted as real-time variable tempo adjustment to a stored musical file. The subject can then adjust his/her breathing to synchronize with a separate music track consisting of an accompaniment to the subject's music. When the breathing is regular and at the desired tempo, the audible result sounds synchronous and harmonious.
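The tempo-following idea can be sketched as a mapping from the tracked breath period to a playback-rate multiplier, clamped so the music stays listenable. The target period, clamp range, and function name are illustrative, not taken from the clinical system:

```python
def tempo_scale(breath_period_s, target_period_s=4.0, max_dev=0.25):
    """Playback-rate multiplier derived from the webcam-tracked breath
    period: deviation from the target period shifts the tempo in
    proportion, clamped to +/-25% so the result stays musical."""
    scale = breath_period_s / target_period_s
    return min(max(scale, 1.0 - max_dev), 1.0 + max_dev)
```

Breathing exactly at the target period yields a multiplier of 1.0, so the subject hears the two tracks lock together when their breathing is regular.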

==Sonic Anxiety==

Victoria Grace, Joel Chapman

Sonic Anxiety is an ironic twist on performance anxiety, where the performance is the sound of my anxiety while locked in a cage. Sensors track my breathing to control the harmony and timbre while my pulse sets the pace and drum rhythms of the piece.

[[File:Cage.png|500px]]

==lovelyStepSequencer==

Micah Arvey

A 3-dimensional step sequencer.

[[File:BSsWorking.png|500px]]

==Velokeys==

Austin Whittier

Velokeys is a velocity-sensitive QWERTY keyboard for desktop jamming. Millions of people spend every day training their brains with a QWERTY key layout -- at work, at school, and at home. This project is meant to meld the expressivity

[[File:Qwerty.png|500px]]

==Busk Box==

Sasha Leitman

The Busk Box is a street performance system that combines the traditions of wandering street performers and musicians with modern technologies. Inside a 1911 wooden trunk, two 6" speakers, one 10" subwoofer, two class-T amplifiers and a portable mixer are all powered by lithium-ion batteries. In addition, the box is supported by folding wheels and legs which enable it to be set up and torn down in less than 3 minutes. This platform was designed to bring experimental and electronic music to San Francisco's Fisherman's Wharf district.

[[Image:BuskBox.jpg|400px]]

==Software Tools==

Planet CCRMA at Home is a collection of open source programs that you can add to a computer running Fedora Linux to transform it into an audio/multimedia workstation with a low-latency kernel, current audio drivers and a nice set of music, MIDI, audio and video applications (with an emphasis on real-time performance). It replicates most of the Linux environment we have been using for years here at CCRMA for our daily work in audio and computer music production and research. Planet CCRMA is easy to install and maintain, and can be upgraded from our repository over the web. Bootable CD and DVD install images are also available. This software is free.

[http://ccrma.stanford.edu/planetccrma/software http://ccrma.stanford.edu/planetccrma/software]

[[Image:Ardour_sm.png]]

Ardour - Multitrack Sound Editor

[[Image:Hydrogen_sm.png]]

Hydrogen - Drum Sequencer

[[Image:Pd-jack-jaaa_sm.png]]

Pd, Jack and Jaaa - Real-time audio tools

==Hardware Tools==

In our [http://ccrma.stanford.edu/courses/250a/ courses], we use a prototyping kit based on Atmel AVR microcontrollers, with Pascal Stang's [http://hubbard.engr.scu.edu/embedded/avr/boards/index.html#avrminiv40 AVRmini] at the core. To the AVRmini, we attach an I2C LCD display, solderless breadboard strips, a loudspeaker and sometimes a MIDI jack. In student lab exercises and for prototyping, we hook up sensor circuits on the breadboard and send control signals to a Linux PC over USB, serial, MIDI or Ethernet in order to control open source real-time sound synthesis and processing software. These prototypes are then often built into larger-scale music and interactive sound art projects like the ones above that we will demonstrate at the Maker Faire.

[[Image:Avrboard.jpg]]

[[Category:PID]]
[[Category:Projects]]

Latest revision as of 22:40, 4 May 2016

Granuleggs.jpg


Introduction

The Center for Computer Research in Music and Acoustics (CCRMA -- pronounced "karma") is an interdisciplinary center at Stanford University dedicated to artistic and technical innovation at the intersection of music and technology. We are a place where musicians, engineers, computer scientists, designers, and researchers in HCI and psychology get together to develop technologies and make art. In recent years, the question of how we interact physically with electronic music technologies has fostered a growing new area of research that we call Physical Interaction Design for Music. We emphasize practice-based research, using DIY physical prototying with low-cost and open source tools to develop new ways of making and interacting with sound. At the Maker Faire, we will demonstrate the low-cost hardware prototyping kits and our customized open source Linux software distribution that we use to develop new sonic interactions, as well as some exciting projects that have been developed using these tools. Below you will find photos and descriptions of the projects and tools we will demonstrate.



The Blade Axe

Romain Michon

The BladeAxe is an iPad-based musical instrument leveraging the concepts of “augmented mobile device” and “hybrid physical model controller.” By being almost fully standalone, it can be used easily on stage in the frame of a live performance by simply plugging it to a traditional guitar amplifier or to any sound system. Its acoustical plucking system provides the performer with an extended expressive potential compared to a standard controller

BladeAx2016.jpg


Granuleggs

Alison Rush, David Grunzweig, and Trijeet Mukhopadhyay

The Granuleggs is a new music controller for granular synthesis which allows a musician to explore the textural potential of their samples in a unique and intuitive way, with a focus on creating large textures instead of distinct notes. Each controller is egg shaped, designed to fit the curve of your palm as you gyrate the eggs and tease your fingers to find yourself the perfect soundscape.

Granuleggs.jpg


BelugaBeats

Jack Atherton

BelugaBeats is a whale-based step sequencer. You can add whales to 8 rows of a grid, and when a wave washes over them, they will sound their blowholes and play their notes. Changing a whale's size alters the pitch it sings. Occasionally, a whale will get distracted by a fish and play its note while underwater. Unfortunately, there is nothing you can do about this.

Belugabeats.png

Chorest

Jack Atherton

Welcome to your personal Chorest! Walk around, plant seeds, grow trees, and hear the wind in the air! Look down from a bird's eye view, or move through the trees on the chorest floor. When you breathe on your trees, they'll play a chord for you. Or, try singing to them! Trees need a noisy sound to grow -- try stroking your microphone! As you grow your trees, their sound will mature. Don’t forget, it’s always possible to plant new seeds and start anew. Occasionally, you may see a ghost from the past. Please, do not be alarmed.

Chorest.png


Leap the Dips

Jack Atherton

This rolling ball sculpture invites participants to test their skill at "leaping the dips" on a copper model of the world's oldest operating roller coaster. The project's aesthetic draws from a practice certainly much older than the roller coaster -- teenage rebellion, and the ensuing adult panic over the activities of "kids these days." Marbles roll over tracks and supports that are fashioned out of soldered copper wire. The tracks feature dips that cause the marbles to lift off the track and crash back down, as was possible in early roller coasters without up-stop wheels on the underside of the track. Take care in the placement of your marble not to cause the marbles to completely fly off the track! The dips are fitted with sensors that drive an algorithm in Max/MSP for giving aural feedback and a cultural experience to the users.

LeapTheDipsPicture.jpg

Music Maker

Sasha Leitman, John Granzow

Music Maker (https://ccrma.stanford.edu/musicmaker) is a free online resource that provides files for 3D printing woodwind and brass mouthpieces and tutorials for using those mouthpieces to learn about acoustics and music. The mouthpieces are designed to fit into standard plumbing and automobile parts that can be easily purchased at home improvement and automotive stores. The goal is to make a musical tool that can be used as simply as a set of building blocks but that aims to bridge the gap between our increasingly digital world of fabrication and the real-world materials that make up our daily lives. An increasing number of schools, libraries and community groups are purchasing 3D printers but many are still struggling to create engaging and relevant curriculum that ties into academic subjects. Making new musical instruments is a fantastic way to learn about acoustics, physics and mathematics.


P1000118.jpg TrumpetWithBell.jpg


Cetacant

Alison Rush

The cetacant is a musical instrument inspired by whales and designed to accompany a performance of Vela 6911, a piece by Victor Gama. The cetacant emulates features of the cetacean vocal apparatus, using tubes and chambers full of air, water, and oil to produce and amplify sounds. The attached photo is of a prototype; the instrument's final form will resemble a suspended sphere, evoking the bubbles produced by a vocalizing whale, or our watery planet as seen from space.

Cetacant-diagram1.jpg


Mephisto

Romain Michon

Mephisto is a small battery powered open source Arduino based device. Up to five sensors can be connected to it using simple 1/8" stereo audio jacks. The output of each sensor is digitized and converted to OSC messages that can be streamed on a WIFI network to control any Faust generated app. The goal of Mephisto is to provide an easy way for musicians to interact with the different parameters of a Faust object or any other OSC compatible software during a live performance. As a "DIY" open source project, Mephisto only uses open source hardware (Arduino, etc.) and was designed to be easily built by anyone.


Mephisto1.jpg Mephisto2.jpg


Hearing Polyphony - A Game and Experiment!

Madeline Huberth

I work in the Neuromusic lab at CCRMA, whose goal on the whole is to investigate phenomena related to understanding music. Specifically, I've been doing work this past year in how our brain processes polyphony (hearing multiple melodies at once), and will present a game I created that uses the stimuli used in our experiment as a way of understanding the experiment. The experiment and our findings will also be on a poster that I can bring.

Our experiment shows that your brain can detect changes in polyphonic patterns automatically - how easy is it for you to do it consciously? Play and find out!


Romain cap.png


CollideFx

Chet Gnegy

CollideFx is a real-time audio effects processor that integrates the physics of real objects into the parameter space of the signal chain. Much like in a traditional signal chain, a user can choose a series of effects and offer realtime control to their various parameters. In this work, we introduce a means of creating tree-like signal graphs that dynamically change their routing in response to position changes of the unit generators. The unit generators are easily controllable using the click and drag interface and respond using familiar physics, including conservation of linear and angular momentum and friction. With little difficulty, users can design interesting effects, or alternatively, can fling a unit generator into a cluster of several others to obtain more surprising results, letting the physics engine do the decision making.

Chet.png


The Processed Typewriter

Andrew Watts

Other than the human voice, musical instruments convey primarily abstraction through sound content. We interpret these sounds as music to varying degrees, but if one were to step away from the cultural associations, the noise would remain highly ambiguous. With a typewriter the sounds inherent in the machine's use also contain linguistic meaning. Having this added layer to work with, a composer could pair the text and the sounds in a multitude of ways, even utilizing the ambiguity of semantic meaning with the ill-defined meaning of typewriter sounds. For this project I am specifically thinking towards a performance in the late spring during a residency with famed soprano Tony Arnold. Rather than a typical accompaniment for a solo soprano piece, like as a piano, it would be much more interesting and musically fertile to have her singing lyrics which are actively being typed in the background. Not only is the text being transformed into sound through the vocal line, but also the hammering away of the typewriter. Furthermore, these sounds and the images of the text appearing on the page would be processed, enabling a wide range of articulations, imagery, references, and audio sculpting.

Typewriter1.jpg Typewriter2.png


String

Joshua Coronado

String is controller used to generate waveforms, curves, and envelopes using a camera, coloured string, and Max/MSP. Users draw curves representing objects such as a filter envelope using coloured string. The coloured curve is then captured by a camera and deciphered into a digital curve to be rendered out to audio by Max/MSP.

[[File:Strings.JPG]] [[File:Strings 2.JPG]]
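The camera-to-curve step can be sketched simply: scan each pixel column of the frame for the string's colour and turn the mean row index into an envelope value. This is a minimal illustration, not String's actual Max/MSP patch; the function name and the colour test are assumptions.

```python
def extract_envelope(image, is_string_colour):
    """Scan each pixel column for the string's colour and convert the mean
    row index into an envelope value (top of frame = 1.0, bottom = 0.0)."""
    height = len(image)
    envelope = []
    for x in range(len(image[0])):
        rows = [y for y in range(height) if is_string_colour(image[y][x])]
        if rows:
            mean_y = sum(rows) / len(rows)
            envelope.append(1.0 - mean_y / (height - 1))
        else:
            # Gap in the string: hold the previous value.
            envelope.append(envelope[-1] if envelope else 0.0)
    return envelope
```

A real frame would be RGB pixels and the colour test a hue threshold, but the column-wise scan is the same.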

==Tibetan Singing Prayer Wheel==

Yoo-yoo Yeh

Inspired by the traditional Tibetan prayer wheel and the Tibetan singing bowl, we present the Tibetan Singing Prayer Wheel, a physical motion-sensing controller that allows you to play virtual Tibetan singing bowls as well as process your voice through several gestures: spinning the wheel at different speeds, raising and lowering your arm, and tapping a button on the outside. A separate RF transmitter allows you to transition between three distinct sound design layers: (1) a Faust-STK physical model of a Tibetan singing bowl, (2) a delayed and windowed voice processing layer, and (3) a novel modal reverb model of an actual Tibetan singing bowl, which takes the voice as input. The system is designed to be easy for anyone to pick up and improvise with - go ahead and try it!

[[File:NIME System Architecture v2.png]]
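A modal reverb of the kind described in layer (3) is typically a bank of resonators, one per measured mode of the bowl, each ringing as an exponentially decaying sinusoid excited by the input. Here is a minimal sketch using complex one-pole resonators; the function name and mode values are illustrative, not the project's actual implementation.

```python
import cmath
import math

def modal_reverb(x, modes, fs=48000.0):
    """Run input samples through a bank of complex one-pole resonators,
    one per measured mode of the bowl: (frequency Hz, T60 decay s, gain).
    T60 is the time for the mode to decay by 60 dB (amplitude factor 1000)."""
    poles = [cmath.exp(complex(-math.log(1000.0) / (t60 * fs),
                               2.0 * math.pi * f / fs))
             for f, t60, g in modes]
    gains = [g for f, t60, g in modes]
    states = [0j] * len(modes)
    out = []
    for sample in x:
        y = 0.0
        for k, pole in enumerate(poles):
            states[k] = pole * states[k] + sample  # resonator update
            y += gains[k] * states[k].real
        out.append(y)
    return out
```

Feeding the performer's voice in as `x` makes the virtual bowl ring sympathetically with whatever is sung into it.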

==Mariah==

Mathew Horton

Mariah sonifies the "diva finger wave." Mariah is a love letter to women like Whitney Houston, Christina Aguilera, and its namesake, Mariah Carey. Simply draw on the screen with your finger and sing a note. Instant riffs and trills, just like the great divas of the 80's, 90's, and 00's!

But the amazing, unexpected outcome of creating Mariah was a really interesting feedback instrument. Mariah takes in audio, pitch-shifts it, and plays it back. At low sound levels, what you end up with is a "self-generating" feedback instrument that creates some really crazy effects.

[[File:2015-02-10 11.58.33.png]]
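The feedback loop can be sketched as a single pass: the speaker plays a pitch-shifted, attenuated copy of what the microphone just heard, and the room feeds it back around. Both function names are hypothetical, and the naive resampling shifter stands in for whatever shifter Mariah actually uses.

```python
def pitch_shift(block, ratio):
    """Naive resampling shift: ratio > 1 raises pitch and shortens the block."""
    n = int(len(block) / ratio)
    return [block[min(int(i * ratio), len(block) - 1)] for i in range(n)]

def feedback_pass(block, ratio=1.5, gain=0.7):
    """One trip around the mic -> shifter -> speaker loop. Keeping gain < 1
    lets the loop sustain and mutate without blowing up."""
    return [s * gain for s in pitch_shift(block, ratio)]
```

Iterating `feedback_pass` on its own output simulates the self-generating behaviour: each cycle re-shifts and re-attenuates the last, so quiet input spirals into evolving textures instead of runaway feedback.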

==Hill==

Mathew Horton

Hill is a software application for musical and visual accompaniment of spoken word poetry. It is inspired by the minimalist video game, Mountain, as well as Lauren Zuniga's poem, "World's Tallest Hill". Hill builds a scene through which the text of a poem can move. The view of the scene can shift, and depending on the particular place at which the scene is viewed, the accompanying audio is transformed in different ways. Hill allows users to "compose" an accompaniment for a poem by adhering to a sort of "score."

[[File:Hill.png]]


==Tower of Power==

Graham Davis, Connor Kelley

Tower of Power (ToP for short) is an interactive tower of wood that generates sound and sweet LEDs. Inspired by the Hunchback of Notre Dame and 1970s funk, ToP is the auditory column for our generation.

[[File:Tower of power.png]] [[File:Tower of power2.png]]

==Tact==

Tact is a project designed to make sound design and beat construction more intuitive. The instrument is a glove mounted with contact microphones that allows the wearer to record, transform, and perform natural sounds at the touch of a finger. A wireless iPad interface provides the wearer with sound-shaping controls, playback effects, and glove feedback. Amplify your interaction with the world via tactile sampling and contact playback with Tact.

==Sonic Anxiety==

Victoria Grace, Joel Chapman

Sonic Anxiety is an ironic twist on performance anxiety, where the performance is the sound of my anxiety while locked in a cage. Sensors track my breathing to control the harmony and timbre while my pulse sets the pace and drum rhythms of the piece.


[[File:Cage.png]]
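The description leaves the sensor-to-sound mapping open; one plausible sketch maps breath depth exponentially onto a filter cutoff (so timbre tracks breathing) and drives the drum tempo directly from the heart rate. The function name and ranges are assumptions for illustration only.

```python
def map_biosignals(breath_depth, pulse_bpm):
    """breath_depth in [0, 1] sweeps a filter cutoff exponentially from
    200 Hz to 8 kHz; the pulse sets the drum tempo at one hit per beat."""
    cutoff_hz = 200.0 * (8000.0 / 200.0) ** breath_depth
    tempo_bpm = float(pulse_bpm)
    return cutoff_hz, tempo_bpm
```

The exponential sweep matters: perceived brightness scales roughly with the logarithm of cutoff frequency, so a linear breath sensor feels even across its range.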

==lovelyStepSequencer==

Micah Arvey

A three-dimensional step sequencer.

500px
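One common way to organize a 3-D step sequencer is a boolean grid where the clock steps along one axis while the other two axes select pitch row and instrument layer. The class below is a generic sketch of that idea, not lovelyStepSequencer's actual design.

```python
class StepSequencer3D:
    """A 3-D grid of on/off cells: the clock steps along x, while y picks
    the pitch row and z picks the instrument layer."""
    def __init__(self, nx=8, ny=4, nz=2):
        self.nx, self.ny, self.nz = nx, ny, nz
        self.grid = [[[False] * nz for _ in range(ny)] for _ in range(nx)]
        self.pos = 0

    def toggle(self, x, y, z):
        self.grid[x][y][z] = not self.grid[x][y][z]

    def tick(self):
        """Advance one clock step; return the (pitch_row, layer) cells to trigger."""
        hits = [(y, z) for y in range((self.ny)) for z in range(self.nz)
                if self.grid[self.pos][y][z]]
        self.pos = (self.pos + 1) % self.nx
        return hits
```

Each call to `tick` fires every active cell in the current column, so chords (several y values) and multi-instrument hits (several z values) fall out of the geometry for free.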

==Velokeys==

Austin Whittier

Velokeys is a velocity-sensitive QWERTY keyboard for desktop jamming. Millions of people spend every day training their brains with a QWERTY key layout – at work, at school, and at home. This project is meant to meld the expressivity of a velocity-sensitive musical instrument with that deeply practiced layout.

[[File:Qwerty.png]]
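Plain QWERTY hardware reports no true key velocity, so a project like this has to estimate it. One plausible heuristic (an assumption, not necessarily how Velokeys does it) maps the time since the previous keystroke to MIDI velocity: rapid typing plays loud, slow typing plays soft.

```python
def velocity_from_interval(dt, fast=0.05, slow=0.5):
    """Map the time dt (seconds) since the previous keystroke to MIDI
    velocity 1..127: intervals at or below `fast` hit full velocity,
    intervals at or above `slow` land at the floor."""
    dt = min(max(dt, fast), slow)        # clamp to the useful range
    frac = (slow - dt) / (slow - fast)   # 1.0 = fastest, 0.0 = slowest
    return max(1, round(1 + frac * 126))
```

Other schemes are possible, e.g. keyboards with two contact points per key can time the travel between them for a real velocity measurement.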


==Busk Box==

Sasha Leitman


The Busk Box is a street performance system that combines the traditions of wandering street performers and musicians with modern technology. Inside a 1911 wooden trunk, two 6" speakers, a 10" subwoofer, two class-T amplifiers, and a portable mixer are all powered by lithium-ion batteries. In addition, the box is supported by folding wheels and legs that allow it to be set up and torn down in less than 3 minutes. The platform was designed to bring experimental and electronic music to San Francisco's Fisherman's Wharf district.


[[File:BuskBox.jpg]]