CCRMA Open House 2012
Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) invites you to our annual Open House on Friday, April 6th. The Open House will take place from noon to 5pm, followed by a reception in the CCRMA courtyard from 5pm to 7pm.
The Open House is an excellent opportunity to see the wide range of interdisciplinary research and creative work being done in computer music, human-computer interaction, digital signal processing, psychoacoustics, and more.
12:00 - 5:00 Demos and posters throughout the building
12:00 - 1:00 Listening Room Demos
1:00 - 1:30 Tour of CCRMA Facilities
1:30 - 2:00 Tour of CCRMA Facilities
2:00 - 3:00 Lectures in the CCRMA Stage
3:00 - 4:00 Listening Room Demos
4:00 - 5:00 Performances in the CCRMA Stage
Location:
CCRMA is located in the Knoll (660 Lomita Ct) on the Stanford campus.
Parking is available in the Tresidder lots on Mayfield Avenue.
Directions and parking information can be found at:
https://ccrma.stanford.edu/about/directions
DEMO DESCRIPTIONS (alphabetical)
Sonic Distillery (Source Separation System)
Rob Bullen and Bjoern Erlach
berlach@ccrma.stanford.edu, bullen.rob@gmail.com
A complete source separation system. The main goal is to make field recordings and separate sound sources arriving from different directions without introducing severe artifacts into the separated signals. The system consists of a MATLAB program that detects sound sources, separates them, and post-filters the results to improve quality, together with a custom-made microphone array. The approach is aimed in particular at musical applications and the creation of soundscapes, whose requirements differ somewhat from those of other common source separation applications, such as speech recognition.
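For readers unfamiliar with direction-based separation, a minimal delay-and-sum beamformer in Python/NumPy hints at the basic principle (sounds from different directions reach the microphones at different times). This is an illustrative sketch only, not the authors' algorithm; the far-field assumption and function names are ours.

    # Illustrative delay-and-sum beamformer (not the authors' method).
    import numpy as np

    def delay_and_sum(mic_signals, mic_positions, direction, fs, c=343.0):
        # mic_signals:   (n_mics, n_samples) recordings
        # mic_positions: (n_mics, 3) microphone coordinates in meters
        # direction:     (3,) unit vector from the array toward the source
        n_mics, n = mic_signals.shape
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        acc = np.zeros(n // 2 + 1, dtype=complex)
        for m in range(n_mics):
            # Mics closer to the source hear it earlier; delay each one
            # so that sound arriving from `direction` adds coherently.
            tau = mic_positions[m] @ direction / c
            acc += np.fft.rfft(mic_signals[m]) * np.exp(-2j * np.pi * freqs * tau)
        return np.fft.irfft(acc / n_mics, n)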
Borderlands: An audiovisual interface for granular synthesis
Christopher Carlson
carlsocj@gmail.com
Borderlands is a new interface for composing and performing with granular synthesis. The software enables flexible, real-time improvisation and is designed to allow users to engage with sonic material on a fundamental level, breaking free of traditional paradigms for interaction with this technique. The user is envisioned as an organizer of sound, simultaneously assuming the roles of curator, performer, and listener. Both laptop and mobile versions are under development and will be demoed.
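As background on the underlying technique, here is a minimal granular synthesis sketch in Python/NumPy: short windowed grains are read from random positions in a source buffer and overlap-added into the output. This shows the generic method, not Borderlands' own code; all parameters are illustrative.

    # Generic granular synthesis sketch (not Borderlands' code).
    import numpy as np

    def granulate(source, out_len, grain_len=2048, density=200.0, fs=44100,
                  rng=np.random.default_rng(0)):
        out = np.zeros(out_len)
        window = np.hanning(grain_len)              # smooth grain envelope
        n_grains = int(density * out_len / fs)      # `density` grains/second
        for _ in range(n_grains):
            src = rng.integers(0, len(source) - grain_len)  # random read point
            dst = rng.integers(0, out_len - grain_len)      # random write point
            out[dst:dst + grain_len] += window * source[src:src + grain_len]
        return out / np.max(np.abs(out))            # normalize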
+synth
Lauchlan Casey
lauchlan@stanford.edu
New ways of interacting with additive synthesis on iOS, including filters, modulation effects, timbral morphing, and gesture control options. Influenced by many synths, particularly Daphne Oram's Oramics machine, as well as a range of contemporary software synths; each element aims to offer a twist on the usual. Prototyped with SuperCollider and TouchOSC, implemented in C++.
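For context, a bare-bones additive synthesizer fits in a few lines of Python/NumPy. This is the general technique +synth builds on, not the app's implementation; the 1/k rolloff and decay envelopes are arbitrary choices.

    # Bare-bones additive synthesis: a sum of enveloped sine partials.
    import numpy as np

    def additive_tone(f0=220.0, n_partials=8, dur=2.0, fs=44100):
        t = np.arange(int(dur * fs)) / fs
        tone = np.zeros_like(t)
        for k in range(1, n_partials + 1):
            amp = 1.0 / k                       # arbitrary 1/k spectral rolloff
            env = np.exp(-3.0 * k * t / dur)    # higher partials decay faster
            tone += amp * env * np.sin(2 * np.pi * k * f0 * t)
        return tone / np.max(np.abs(tone))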
Sonic Cube
Kevin Chau
khlchau@yahoo.com
A utility program implemented in C++ to analyze input sounds.
The Sinkapater
Jiffer Harriman
jiffer8@ccrma.stanford.edu
The Sinkapater is an untethered beat sequencer. By allowing different tracks to divide the beat arbitrarily, complex polyrhythms can be created. By allowing tracks to loop at different lengths, patterns unfold over long periods of time. By visualizing beats as falling water drops, the listener gains a new perspective on these patterns.
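The polyrhythmic idea is easy to see in a toy Python sketch (ours, not the Sinkapater's code): two tracks dividing the same bar differently coincide only at the downbeat.

    # Toy model of arbitrary beat divisions producing a polyrhythm.
    from fractions import Fraction

    def drop_times(divisions, bars=1):
        # Onset times (in bars) for a track dividing each bar `divisions` ways.
        return [Fraction(i, divisions) for i in range(divisions * bars)]

    # 3 against 4: the combined onsets are 0, 1/4, 1/3, 1/2, 2/3, 3/4,
    # lining up only at beat 0.
    print(sorted(set(drop_times(3)) | set(drop_times(4))))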
TweetDreams
Jorge Herrera, Luke Dahl, Carr Wilkerson
carrlane@ccrma.stanford.edu
TweetDreams is a multimedia musical performance made from live Twitter data. During a performance tweets containing specific terms are retrieved from Twitter's servers, sonified into short melodies, and displayed graphically. The piece is created by three groups of users: the audience, the performers, and the world.
The audience is invited to tweet during the performance with a special "local search term". Any tweets with this term are detected by our software and given special musical and graphical prominence.
The performers drive the software and shape the piece by selecting search terms and controlling various musical and graphical parameters.
The ""global search terms"" are used to bring in tweets from the rest of the world. During a performance, anyone tweeting anywhere in the world with one of these terms becomes a participant, and so TweetDreams becomes a public musical interaction that is simultaneously local and global.
hearHere
Jennifer Hsu
jhsu@ccrma.stanford.edu
An iPhone/iPod application that lets you listen to the world around you as music.
Miles Ahead
Mayank Sanganeria
mayank.ot@gmail.com
Miles Ahead is an interactive improvisation system that lets you sync up with any backing track you like and start 'jamming' with the computer using MIDI instruments. The computer listens to what you play and trades fours (or eights, or n's) with you, playing off your ideas and taking your improvisation to previously unexplored places.
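As a toy illustration of trading fours (the system's actual response logic is not described here, so this Python sketch is purely a guess at the flavor), a responder might reuse the human's rhythm while varying the pitches.

    # Toy "trading fours" responder: keep the rhythm, vary the pitches.
    import random

    def respond(phrase, transpositions=(-2, 0, 3, 5)):
        # phrase: list of (onset_beat, midi_pitch, duration_beats) events
        shift = random.choice(transpositions)
        return [(onset, pitch + shift, dur) for onset, pitch, dur in phrase]

    human_fours = [(0.0, 62, 0.5), (0.5, 65, 0.5), (1.0, 67, 1.0), (2.5, 70, 1.5)]
    print(respond(human_fours))   # same rhythm, transposed contour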
Soundshape
Mayank Sanganeria
mayank.ot@gmail.com
Soundshape is an iPad app that lets you create shapes and sounds by drawing and recording. You can move, cut, loop, and scrub through these shapes to make music. Check online for an inventory of sonic shapes and even add your own!
tulpasynth
Colin Sullivan
colinsul@ccrma.stanford.edu
"tulpasynth" is a collaborative music system that enables a group of people to spontaneously create together by manipulating a physics-based environment on a touchscreen interface. Each user uses her/his own touchscreen to interact with the entities in the environment and has the ability to "transport" objects to the other users. The client is implemented as an iPad app which is built on top of OpenGL and the Box2D physics engine. Sounds are synthesized from scratch on each device using The Synthesis Toolkit in C++ (STK). The Node.js server synchronizes each client over a socket connection. The system is titled “tulpasynth” in the spirit of creation without boundaries.
Sour Mash
Derek Tingle
derek.tingle@gmail.com
Sour Mash is a real-time sound mosaicer that allows a user to fade between a source song and a re-synthesized version of that song. The re-synthesized version is generated by replacing each buffer of audio from the source song with the most similar buffer drawn from another set of songs, where similarity is measured as distance in an audio feature space.
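The buffer-matching step can be sketched in Python directly from that description; the choice of magnitude spectra as the feature vector below is our assumption, since any audio feature space works the same way.

    # Buffer-matching sketch; magnitude spectra stand in for the
    # (unspecified) audio features.
    import numpy as np

    def mosaic(source, corpus, buf=1024):
        def feats(x):
            frames = x[: len(x) // buf * buf].reshape(-1, buf)
            return np.abs(np.fft.rfft(frames, axis=1)), frames
        src_feats, src_frames = feats(source)
        cor_feats, cor_frames = feats(corpus)
        out = np.empty_like(src_frames)
        for i, f in enumerate(src_feats):
            # Euclidean distance in feature space picks the replacement.
            out[i] = cor_frames[np.argmin(np.linalg.norm(cor_feats - f, axis=1))]
        return out.ravel()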
Wilsynth
Michael J. Wilson
mwilson@alumni.caltech.edu
A software synthesizer I am writing using techniques I learned in the MA/MST program at CCRMA.
LISTENING ROOM PRESENTATIONS (In Order of Appearance)
The Green Light
Cecilia Wu
wuxiaoci@stanford.edu
This is a spiritual music production with a visual art video. Cecilia composed, performed, recorded, and produced the song using audio engineering skills learned in Music 192B (Advanced Sound Recording Technology), and created the video in Adobe After Effects, which she learned in Music 155 (Intermedia Workshop). She also used the open-source software Ardour and Ambisonics technology to spatialize the music's 50 tracks over 24 channels in the 3D Listening Room. Several classmates and a lecturer played instruments for the piece, and she greatly appreciates their collaboration.
Icons of Sound: Prokeimenon Auralization
Michael J. Wilson
mwilson@alumni.caltech.edu
A Byzantine chant was recorded on the CCRMA stage and then processed with a software reverberator. The reverberator was synthesized from a room impulse response derived from a recording of a balloon pop in the Hagia Sophia in Istanbul.
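Auralization of this kind is typically done by convolving the dry recording with the measured impulse response. A minimal Python sketch of the standard technique follows; the project's own reverberator may differ in detail, and the wet/dry mixing here is our choice.

    # Standard convolution reverb: dry signal convolved with the IR.
    import numpy as np
    from scipy.signal import fftconvolve

    def auralize(dry, impulse_response, wet_gain=0.7):
        wet = fftconvolve(dry, impulse_response)[: len(dry)]
        wet /= np.max(np.abs(wet)) + 1e-12      # tame the reverb-tail level
        return (1 - wet_gain) * dry + wet_gain * wet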
A Sound Spatializer based on a New Implementation of VBAP
Hongchan Choi
hongchan@ccrma.stanford.edu
An implementation of vector-based amplitude panning (VBAP) for the spatial display of sonified data is presented. Two techniques from computer graphics are adapted to predefine an optimal set of speaker triplets and to perform the amplitude panning in real time. The system also accounts for the time delay from a virtual sound source to the actual speakers. Owing to the geometrical nature of this procedure, the resulting system can easily be visualized with a graphics library such as OpenGL. A prototype is demonstrated that enables a user to compose a trajectory of sound in three-dimensional space.
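The heart of VBAP fits in a few lines; the Python sketch below follows Pulkki's textbook formulation rather than this system's optimized implementation, solving for the three gains of one speaker triplet.

    # Textbook VBAP gains for one speaker triplet (after Pulkki, 1997).
    import numpy as np

    def vbap_gains(source_dir, triplet):
        # triplet: (3, 3) matrix whose rows are speaker unit vectors.
        # Solving p = g L for g; a negative gain means the source lies
        # outside this triplet, so another triplet should be used.
        g = np.asarray(source_dir) @ np.linalg.inv(triplet)
        return g / np.linalg.norm(g)            # constant-power normalization

    speakers = np.eye(3)                        # three orthogonal speakers
    source = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
    print(vbap_gains(source, speakers))         # ~[0.707, 0.707, 0.0]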
Listening Room Demo
Fernando Lopez-Lezcano
nando@ccrma.stanford.edu
The Listening Room is a sound spatialization theater with 23 channels of full 3D sound diffusion. It is used for composition, research, measurements, acoustic simulations and just plain music listening. The demo of the studio will show its capabilities and will briefly describe the technologies driving it.
LECTURES IN STAGE (In Order of Appearance)
Integrative Archaeoacoustics at Chavín de Huántar, Perú
Miriam Kolar
mkolar@stanford.edu
A study of ancient musical/sound-producing instruments is greatly enhanced by integrative archaeoacoustic research which examines the interrelationships among instrumental and acoustic environmental dynamics, tests their auditory perceptual implications, and seeks further archaeological contextualization. Since 2008, a CCRMA-based multidisciplinary research team has explored these ideas through an exemplary case study, the 3,000 year-old Andean Formative Period ceremonial center at Chavín de Huántar, Perú. Historically reputed as an oracle center, Chavín provides both site-provenienced aerophones, the marine shell trumpets known as "pututus", and well-preserved architecture whose acoustics can be tested, measured, and modeled with verification. We advance a methodology that explores and documents the sound production potential of artifact instruments, the measured acoustic dynamics of site spaces, and the experimentally-tested auditory perceptual effects of sound and space interaction. Analyses of these multifaceted data yield new forms of archaeological evidence to support hypotheses about the auditory sensory environment experienced by ritual participants in ancient Chavín.
PWGL - a Visual Language for Computer Assisted Composition
Mika Kuuskankare
mkuuskan@siba.fi
PWGL is a visual cross-platform music programming environment. It is designed for applications in computer-assisted composition, music theory and analysis, software synthesis, and music notation. Currently, PWGL is one of the three major Lisp-based composition environments, along with OpenMusic and Common Music. PWGL is distributed as freeware and has been publicly available since the beginning of 2006. Since then, it has gained wide acceptance within the computer music community and is taught and used in several universities and institutions around the world. It currently runs on Macintosh OS X (10.4 or newer) and on Windows XP. The PWGL website is located at www.siba.fi/PWGL.
From Schaeffer to *LOrks: an expanded definition of musical instrument in the context of laptop orchestras
Bruno Ruviaro
ruviaro@stanford.edu
Departing from Schaeffer's definition of musical instrument, this paper identifies three other relevant non-sonic aspects that may be useful in the context of laptop orchestras: presence (the body of the instrument and the human body as shaped by it), movement (the instrument and the human body in motion), and history (the historical repertoire and cultural surroundings attached to the instrument). In the context of today's laptop orchestras (such as PLOrk, SLOrk, and L2Ork, to name just a few), Schaeffer's ideas continue to offer interesting insights into instrument design; however, since laptop ensembles aspire to provide the audience with a meaningful non-acousmatic listening situation, a practical definition of musical instrument is needed beyond a strictly acousmatic point of view. Two concrete examples of recent pieces performed by the Stanford ensemble are offered at the end to illustrate the discussion.
PERFORMANCES IN STAGE (In Order of Appearance)
Borderlands: An audiovisual interface for granular synthesis
Christopher Carlson
carlsocj@gmail.com
Borderlands is a new interface for composing and performing with granular synthesis. The software enables flexible, real-time improvisation and is designed to allow users to engage with sonic material on a fundamental level, breaking free of traditional paradigms for interaction with this technique. The user is envisioned as an organizer of sound, simultaneously assuming the roles of curator, performer, and listener. Both laptop and mobile versions are under development and will be demoed.
The Sinkapater
Jiffer Harriman
jiffer8@ccrma.stanford.edu
The Sinkapater is an untethered beat sequencer. By allowing different tracks to divide the beat arbitrarily, complex polyrhythms can be created. By allowing tracks to loop at different lengths, patterns unfold over long periods of time. By visualizing beats as falling water drops, the listener gains a new perspective on these patterns.
Fractal Music
Kevin Chau
khlchau@yahoo.com
Fractal images are fascinating. They are so rich in detail, reproduced at every level of magnification, that they easily lead us to notions of infinity and eternity. Fractals appear everywhere in nature, from galaxies, landscapes, and living things to crystal formation, yet they can be governed by exceedingly simple mathematical equations. So how do fractals sound in music? Can we exploit fractals for music to the extent that nature has? I will explore various fractal sonification schemes along with a video presentation of 3D fractal art and music.
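As one concrete (and entirely illustrative) example of a fractal sonification scheme, not necessarily one used in this piece, the logistic map, a one-line equation with fractal behavior at its core, can be mapped onto a musical scale in Python.

    # Illustrative fractal sonification: logistic-map orbit -> scale degrees.
    SCALE = [60, 62, 65, 67, 70]                # MIDI pitches, minor pentatonic

    def logistic_melody(r=3.9, x=0.5, n=32):
        notes = []
        for _ in range(n):
            x = r * x * (1 - x)                 # the logistic map
            notes.append(SCALE[int(x * len(SCALE))])
        return notes

    print(logistic_melody())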
Projections
Stephen Henderson
stevie248@gmail.com
Projections explores the emotional space that we navigate when we think of how others think of us. It is an intermedia dance/animation and audio piece. Using real-life, personal experiences from interviews with friends and family as the basis for the text, visuals, and audio, Projections opens a window into the inner thought process of reacting to what we feel is being projected upon us, and of how we project onto others and ourselves.
LiquidScore
Hunter McCurry
p.hunter.mccurry@gmail.com
The project blurs the line between an interactive computer game and a musical performance. A musician interacts with an animated musical score that is displayed for both audience and performer. Real-time audio tracking enables the computer to follow the player's progress and respond to choices made by the musician. Musical accompaniment and sound-design elements are created on the fly based on what has been recorded from the performer's audio stream.
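One common building block for this kind of real-time tracking is frame-by-frame pitch estimation. The Python autocorrelation sketch below is illustrative only (LiquidScore's tracker is not specified here) and expects frames of roughly 1024 samples or more at 44.1 kHz.

    # Frame-wise autocorrelation pitch estimate (illustrative only).
    import numpy as np

    def pitch_estimate(frame, fs=44100, fmin=80.0, fmax=1000.0):
        frame = frame - np.mean(frame)
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(fs / fmax), int(fs / fmin)
        lag = lo + np.argmax(ac[lo:hi])         # strongest periodicity in range
        return fs / lag                         # e.g. ~440.0 for an A4 frame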