CCRMA Open House 2016




On Friday April 22, we invite you to come see what we've been cooking up at the Knoll!

Join us for lectures, hands-on demonstrations, posters, installations, and musical performances of recent CCRMA research including music in virtual reality, internet reverb, neuroscience of music and of narrative engagement, wave digital filter simulation of classic analog audio equipment, programming languages for music and sound synthesis, digital signal processing, data-driven research in music cognition, and a musical instrument petting zoo. CCRMA's founder John Chowning will be our keynote speaker, interviewed live in Berlin by Holly Herndon.



Schedule overview


10am-12pm: Presentations throughout the Knoll
11am-12pm: Keynote Event: John Chowning interviewed by Holly Herndon, live from Berlin. Stage (317)
12-2pm: Food trucks (An the Go and Seoulful) and music in the Knoll Courtyard
2-5pm: Presentations and open classes in the Knoll


We will continue to update the detailed schedule here, even during the event. Check back often!


Facilities on display:

Neuromusic Lab
Listening Room
Max Lab and garage (maker / physical computing / fabrication)
Hands-on CCRMA history museum
JOS Lab
CCRMA Stage
Recording Studio (afternoon)

Exhibits (lectures, performances, demos, posters, installations...) - Alphabetical by author

Icons of Sound: Acoustic Model of the Hagia Sophia

Jonathan Abel, Bissera Pentcheva, Fernando López-Lezcano, Miriam Kolar, Gina Collecchia, Mike Wilson, Travis Skare, Nick Bryan, Sean Coffin, Yoomi Hur, Kurt Werner, Elliot Kermit-Canfield, Ethan Geller, Tim O'Brien, John Granzow, et al.

The Icons of Sound project focuses on the interior of Hagia Sophia, built by Emperor Justinian in 532-537, and employs visual, textual, and musicological research, video, balloon pops, the building of architectural and acoustic models, auralizations, and the recording and performance of Byzantine chant. The Great Church of Constantinople, present-day Istanbul, has an extraordinarily large nave spreading over 70 meters in length; it is surrounded by colonnaded aisles and galleries, with marble covering the floor and walls. The naos of Hagia Sophia is centralized, crowned by a dome glittering in gold mosaics that rises 56 meters above the ground and generates a reverberation time of more than 10 seconds. We have created a new method that uses balloon pops to discover the acoustic parameters of the space and build a computational model. This model enables us to offer contemporary listeners the experience of listening in Hagia Sophia. In collaboration with Cappella Romana, the leading North American chamber choir dedicated to the performance of Early Music including Byzantine chant, we have produced recordings and live chant, including a selection from the "Sung Office" of Hagia Sophia. In the Listening Room, you can hear a version of the prokeimenon (gradual) for the Feast of St. Basil, celebrated on January 1, performed in a virtual Hagia Sophia.
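As a rough illustration of how a reverberation time can be estimated from a balloon-pop recording, here is a minimal sketch (not the project's actual pipeline) that applies Schroeder backward integration to an impulse response; the synthetic input and all parameter choices are assumptions for demonstration only.

```python
import numpy as np

def rt60_from_impulse_response(ir, fs):
    """Estimate RT60 via Schroeder backward integration of an impulse response.

    A rough sketch: fit the -5 dB to -35 dB span of the energy decay curve
    with a line, then extrapolate to 60 dB of decay (a "T30"-style estimate).
    """
    energy = ir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # Schroeder energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(ir)) / fs

    # Fit the -5 dB .. -35 dB portion of the decay.
    mask = (edc_db <= -5.0) & (edc_db >= -35.0)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)   # dB per second (negative)
    return -60.0 / slope                              # seconds to decay by 60 dB

if __name__ == "__main__":
    # Synthetic stand-in for a balloon-pop recording: exponentially decaying noise.
    fs = 48000
    t = np.arange(int(10 * fs)) / fs
    fake_ir = np.random.randn(len(t)) * np.exp(-0.7 * t)
    print(f"estimated RT60: {rt60_from_impulse_response(fake_ir, fs):.1f} s")
```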

Poster. Location: Listening Room (128), Time: all day

 

Performances of Student Compositions "New Audio, Old Gestures"

Mark Applebaum and class

Mark Applebaum performs new versions of his piece Aphasia for hand gestures synchronized to two-channel audio. Applebaum's original 9-minute paroxysm comprises a mercurial spray of absurd hand gestures—a kind of nonsense sign language—carefully choreographed and executed in strict coordination with a kaleidoscopic soundscape of mangled audio samples derived from human voice. For this occasion students of the Composition for Electronic Musicians class have recomposed the audio for a 21-second excerpt of the piece to which Applebaum will perform the original choreography. It is an exercise called "New Audio, Old Gestures." (In the subsequent week the students will undertake the reverse: an exercise called "Old Audio, New Gestures" in which they adopt the original audio excerpt and perform their own newly-composed gestures to it.)

21-second student compositions plus interpreted dance. Location: Classroom (217), Time: 3:30p-4:20p

 

The Inter-String Time Delay Zither

Jack Atherton

The Inter-String Time Delay Zither is a plucked string instrument that changes its sound based on how fast you pluck it. Its strings are arranged into note pairs (groups of two strings that are tuned to the same note), and using a system of pickups under two bridges, it detects the difference in pluck time for the left and right strings of each note pair. The strings' vibrations are picked up with piezos and routed through some audio processing in Max/MSP, where the inter-string time delay is used to drive audio effects like beating and distortion. The interaction of manipulating sound via the speed at which you pluck each note thus affords an additional level of control beyond those present in a traditional zither.
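A minimal sketch of the core measurement, the time offset between the two pluck onsets of a note pair, using simple threshold-based onset detection; the instrument's actual Max/MSP processing is not reproduced here, and the threshold and effect mapping below are assumptions.

```python
import numpy as np

def onset_index(signal, threshold=0.1):
    """Index of the first sample whose magnitude exceeds a fixed threshold."""
    hits = np.flatnonzero(np.abs(signal) > threshold)
    return int(hits[0]) if hits.size else None

def inter_string_delay_ms(left, right, fs):
    """Pluck-time difference (ms) between the two piezo signals of a note pair."""
    l, r = onset_index(left), onset_index(right)
    if l is None or r is None:
        return None
    return 1000.0 * (r - l) / fs

def delay_to_distortion(delay_ms, max_ms=50.0):
    """Map the delay onto a 0..1 distortion amount, as one possible mapping."""
    return min(abs(delay_ms) / max_ms, 1.0) if delay_ms is not None else 0.0

if __name__ == "__main__":
    fs = 48000
    t = np.arange(fs) / fs
    # Two synthetic string signals plucked 12 ms apart.
    left = np.sin(2 * np.pi * 220 * t) * (t > 0.100) * np.exp(-(t - 0.100) * 5)
    right = np.sin(2 * np.pi * 220 * t) * (t > 0.112) * np.exp(-(t - 0.112) * 5)
    d = inter_string_delay_ms(left, right, fs)
    print(f"inter-string delay: {d:.1f} ms, distortion amount: {delay_to_distortion(d):.2f}")
```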

Demo - pettable instrument. Location: MaxLab (201), Time: all day

 

waVR: Experience the Ocean in Virtual Reality

Jack Atherton

waVR (Wave VR) is an all-new musical interactive VR experience. Using a GameTrak, you can push waves in any direction to make your own music. Listen to the sounds of the ocean. Play a variety of chords by making waves. Push your hands in the same direction and see a horizontal wave shoot off in that direction. Spread your hands at a right angle and watch a ring wave expand in every direction. Tread water to subtly alter the sound of your wave chords. Raise your arms to go underwater and listen to the waves crash above you. Notice the extra noises that water makes when it completely surrounds you. Hear the sounds of whales all around you, and send waves toward them to catch their attention. Swim back up above water and hear the demonic cries of seagulls. What?

Demo - VR. Location: VR Wonderland (211), Time: all day

 

Monitoring Dreams

Constantin Basica

This project examines recurring dreams of its author in order to develop a treatment for his stage fright.

Installation. Location: Recording Studio (124), Time: morning

 

Sporth: an audio language

Paul Batchelor

This lecture/demo will showcase Sporth: a stack-based domain specific language for synthesizing audio.

Lecture / Demo. Location: Stage (317), Time: 10:20a

 

Continua-key

Brian Bolze, Ned Danyliw, Griffin Stoller

We are taking the familiar skeleton of a traditional MIDI keyboard and completely rethinking how sound is generated from a key press. Instead of simply starting and stopping a note (noteOn and noteOff in MIDI), our keyboard provides a continuous position and velocity reading for each key, which opens up a whole new spectrum of live sound shaping. The key physical interaction we are trying to capture is the minute movements of the musician's fingers on the keys themselves, without having to turn any knobs or sliders on a panel or screen.

Demo - pettable instrument. Location: MaxLab (201), Time: all day

 

There Be Treasure/Cookies

Eoin Callery

 

Installation. Location: Room 101 (just outside), Time: all day

 

Internet Reverb

Chris Chafe

Until now, "network room" has been an enclosed space only in metaphor, where for the connected inhabitants, the place is a network location to gather and interact (chat rooms, game rooms, etc.). We describe an actual network room for musical interactions in which plausible, room-like reverberation is spawned between endpoints of our network audio software. Each new user becomes a node in a mesh and all sounds entering the mesh are reverberated by the mesh. The medium in which these echoes exist is not, however, air but the Internet and the acoustical properties differ because of the medium's distinct "physical laws."
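As a loose conceptual analogy only (not the actual network audio software), the sketch below treats each connected user as a node in a small all-to-all mesh of delay lines, with the delay lengths standing in for network round-trip times; every sound fed into the mesh accumulates echoes from all the other nodes.

```python
import numpy as np

def mesh_reverb(dry, fs, delays_ms=(23.0, 41.0, 67.0), feedback=0.55):
    """Feed a dry signal into a small all-to-all mesh of delay lines.

    Each 'node' delays what it receives (a stand-in for network latency) and
    feeds an attenuated copy back to every other node, so echoes accumulate
    into a reverb-like tail.
    """
    delays = [int(fs * d / 1000.0) for d in delays_ms]
    n_nodes = len(delays)
    lines = [np.zeros(d) for d in delays]           # circular delay buffers
    heads = [0] * n_nodes
    out = np.zeros(len(dry) + fs)                   # leave room for the tail

    for n in range(len(out)):
        x = dry[n] if n < len(dry) else 0.0
        reads = [lines[i][heads[i]] for i in range(n_nodes)]
        out[n] = x + sum(reads)
        for i in range(n_nodes):
            # Each node hears the input plus the other nodes, attenuated for stability.
            others = sum(reads[j] for j in range(n_nodes) if j != i)
            lines[i][heads[i]] = feedback * (x + others) / (n_nodes - 1)
            heads[i] = (heads[i] + 1) % len(lines[i])
    return out

if __name__ == "__main__":
    fs = 16000
    click = np.zeros(fs // 4)
    click[0] = 1.0                                  # an impulse "entering the mesh"
    tail = mesh_reverb(click, fs)
    print("tail energy (last 0.5 s):", float(np.sum(tail[-fs // 2:] ** 2)))
```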

Demo. Location: 301 and Studio E (320), Time: all day

 

The Ear Tone Toolbox for Auditory Distortion Product Synthesis

Alex Chechile

The Ear Tone Toolbox is a collection of open-source unit generators for auditory distortion product synthesis. Auditory distortion products are sounds generated primarily on the basilar membrane in the cochlea in response to specific pure-tone frequency combinations. The frequencies of the distortion products are separate from the provoking stimulus tones and are not present in the acoustic space. The first release of the Ear Tone Toolbox is a collection of six externals for Max, VST instruments, and patches for the hardware OWL synthesizer, all of which produce various combinations of distortion products and acoustic primary tones.
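A minimal sketch (not the toolbox itself) of the underlying relationship: two pure primary tones at a typical f2/f1 ratio, and the cubic difference tone 2·f1 − f2 that the cochlea can generate even though that frequency is absent from the signal. The specific frequencies and ratio below are illustrative assumptions.

```python
import numpy as np

def primaries_for_distortion_product(f1=1000.0, ratio=1.2, fs=48000, dur=2.0):
    """Generate two pure tones f1 and f2 = ratio * f1 summed into one signal.

    The cubic difference tone 2*f1 - f2 is produced in the listener's cochlea,
    not in this array: a spectrum of the output contains only f1 and f2.
    """
    f2 = ratio * f1
    t = np.arange(int(fs * dur)) / fs
    signal = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
    cubic_difference_tone = 2 * f1 - f2
    return signal, f2, cubic_difference_tone

if __name__ == "__main__":
    sig, f2, dp = primaries_for_distortion_product()
    print(f"primaries: 1000.0 Hz and {f2:.1f} Hz")
    print(f"expected cubic difference tone (heard, not in the signal): {dp:.1f} Hz")
```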

Lecture / Demo. Location: Stage (317), Time: 3:00p

 

On the Sensations of Tone VIII

Alex Chechile

On the Sensations of Tone VIII provokes auditory distortion products by acoustic and electronic means. During the compositional process, a spectral analysis of the crotales revealed certain notes containing both stimulus frequencies in the proper ratio for eliciting the phenomenon. When two notes are sounded, as many as four distortion products can be heard (and so on). Additionally, pitches that appear exclusively in the acoustic space later return as material generated in the ear alone. Recorded live on May 27, 2015 at the Bing Concert Hall with Loren Mach on percussion and Alex Chechile on electronics. The presented version is a stereo mix of the original 24.7 speaker configuration.

Musical Composition. Location: Stage (317), Time: 3:20p

 

The Ear Tone Toolbox Demo

Alex Chechile

A demonstration showcasing a variety of the unit generators in the Ear Tone Toolbox for auditory distortion product synthesis.

Demo - pettable instrument. Location: MaxLab (201), Time: part of day

 

John Chowning interview in Berlin

John Chowning and Holly Herndon

Composer and researcher John Chowning discovered FM synthesis, co-founded CCRMA, and was our first Director. As part of the event Technosphärenklänge in Berlin's CTM festival, CCRMA doctoral student and electronic music star Holly Herndon will interview Professor Chowning about his life and work. We at CCRMA will watch the event through streaming audio and video.

Interview. Location: Stage (317), Time: 11a-noon

 

Data science and music: what musical features can predict engagement?

Perry Cook

Social music platforms (such as Smule) now allow millions of people to share in creative musical experiences any time they like, and collaborations between "groups" of singers/players who have never met are commonplace. Of particular interest are these connections, and the topic of engagement in general. Are the links between, and persistence of, users purely random? Or are they social in the same way as platforms such as Facebook, Twitter, and Instagram? Since music and performance are involved, it makes sense to think also about the performances and performers themselves. This open "thought exercise" will look briefly at some performers in the Sing! Karaoke app, asking "what makes these singers engaging?" (Part of Jeff Smith's class Musical Engagement.)

Lecture. Location: Stage (317), Time: 4:30p-5p

 

Play Testing of Musical Games

Poppy Crum and class

Play testing of musical games created for this week's assignment: physical interaction with a Kinect, LEAP Motion, or VR headset to control a virtual environment. (Part of Poppy Crum's class Neuroplasticity and Musical Gaming.) Come and go as you please.

Demo. Location: Classroom (217), Time: 10:30a-11:20a

 

Can Neural Games Be a Source of Prescriptive Behavioral Benefits or Are They Just for Fun?

Poppy Crum and class

Class panel discussion/debate: Can neural games be a source of prescriptive behavioral benefits or are they just for fun? Discussion of recent FTC/FDA regulatory considerations. (Part of Poppy Crum's class Neuroplasticity and Musical Gaming.) 

Panel discussion. Location: Classroom (217), Time: 11:30a-12:20p

 

Lab demo: EEG capping and Data Visualization, Stimuli Jukebox

Takako Fujioka, Madeline Huberth, Irán Román, Emily Graber, Shu-Yu Lin

 

Lab demo. Location: Neuromusic (103), Time: all day

 

Modeling Expert Musical Insights Through Data: The Max Martin Coefficient

Nick Gang, Blair Kaneshiro

Though relatively anonymous, Swedish songwriter/producer Max Martin is responsible for more Billboard number-one songs than any non-Beatle. In this MIR case study, the metadata and acoustic features of songs in Martin's catalog are analyzed in an attempt to characterize his work through data. Martin's timbral and rhythmic diversity both within and between songs is detailed, and changes in these characteristics over his career are discussed. Pitch and key analyses show a tendency toward songs with modal ambiguity. The variance of all features is examined through principal components analysis, and comparisons of Martin's work with that of other artists are proposed.
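The feature-plus-PCA pipeline described above could be sketched roughly as follows; the feature choices and the use of librosa and scikit-learn are illustrative assumptions, not the authors' actual code, and the file names are hypothetical.

```python
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def song_features(path):
    """Summarize one audio file as a small timbral/rhythmic/tonal feature vector."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # pitch/key content
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # rhythm
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           chroma.mean(axis=1),
                           [float(np.atleast_1d(tempo)[0])]])

def catalog_principal_components(paths, n_components=2):
    """Project a catalog of songs onto its first principal components."""
    X = np.vstack([song_features(p) for p in paths])
    X = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(X)
    return coords, pca.explained_variance_ratio_

if __name__ == "__main__":
    # Hypothetical file names; substitute any local audio files.
    coords, variance = catalog_principal_components(
        ["song_a.wav", "song_b.wav", "song_c.wav"])
    print(coords)
    print("explained variance:", variance)
```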

Poster. Location: Ballroom (216), Time: all day

 

Project Oracle

Ethan Geller, Aidan Meacham

Project Oracle is a virtual environment in which players use motion controllers to navigate and explore a world that is invisible to the naked eye, using a camera and microphone in one hand and a screen/speaker in the other.

Demo - VR. Location: VR Wonderland (211), Time: all day

 

A SoundHound For The Sounds of Hounds

Ethan Geller, Matt Horton, Robert Colcord

A machine learning project that uses fast algorithms to find an animal's characteristic vocalizations in long (typically 30-120 minute) field recordings in which animal vocalizations are sparse.

Poster. Location: PhD desk area (207), Time: all day

 

Equalization Matching of Speech Recordings in Real-world Environments

François G. Germain, Gautham Mysore, Takako Fujioka

Recent years have seen an explosion in the amount of recorded speech used for applications such as voiceovers. However, those recordings are often obtained in non-ideal conditions, such as reverberant or noisy environments, so different recordings often present different timbral qualities. In this presentation, I introduce an algorithm that equalizes individual speech segments so that they sound as if they were recorded in the same environment. The algorithm leverages speech enhancement methods to derive the competing timbral matching conditions between speech and background, and applies the appropriate equalizing filter to each source. Additionally, the resulting matched sequence generally presents a low level of artifacts after the final remixing of the sources.
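A heavily simplified sketch of the general idea (matching the long-term spectral envelope of one recording to another with a bank of equalization gains), not the authors' algorithm, which additionally separates speech from background using speech enhancement; the band count, FFT size, and synthetic signals are assumptions.

```python
import numpy as np

def matching_eq_gains(source, reference, fs, n_bands=32, n_fft=4096):
    """Per-band gains that move the source's long-term spectrum toward the reference's."""
    f = np.fft.rfftfreq(n_fft, 1.0 / fs)
    edges = np.logspace(np.log10(50), np.log10(fs / 2), n_bands + 1)

    def band_levels(x):
        # Long-term magnitude spectrum averaged over frames, then pooled into log bands.
        frames = [x[i:i + n_fft] for i in range(0, len(x) - n_fft, n_fft // 2)]
        mag = np.mean([np.abs(np.fft.rfft(fr * np.hanning(n_fft))) for fr in frames], axis=0)
        return np.array([mag[(f >= lo) & (f < hi)].mean()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    src = band_levels(source)
    ref = band_levels(reference)
    gains = ref / np.maximum(src, 1e-9)
    return edges, gains          # apply with any graphic-EQ style filter bank

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    bright = rng.standard_normal(fs * 5)                        # stand-in "reference" recording
    dull = np.convolve(bright, np.ones(8) / 8, mode="same")     # low-passed "source" recording
    edges, gains = matching_eq_gains(dull, bright, fs)
    print("high-band boost needed:", float(gains[-1]))
```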

Poster. Location: DSP Wonderland (305), Time: all day

 

Design Principles for Lumped Model Discretisation Using Möbius Transforms

François G. Germain, Kurt J. Werner

Computational modelling of audio systems commonly involves discretising lumped models. The properties of common discretisation schemes are typically derived through analysis of how the imaginary axis on the Laplace-transform s-plane maps onto the Z-transform z-plane and the implied stability regions. This analysis ignores some important considerations regarding the mapping of individual poles, in particular the case of highly-damped poles. In this presentation, I show the properties of an extended class of discretisations based on Möbius transforms, both as mappings and as discretisation schemes. I present and analyse design criteria corresponding to desirable properties of the discretised system in terms of pole structure.
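For orientation, the general Möbius (linear fractional) map and its best-known special case, the bilinear (trapezoidal-rule) discretisation, can be written as follows; this is standard textbook notation, not the paper's specific parameterisation.

```latex
% General Moebius (linear fractional) map between the s- and z-planes
s \;=\; \frac{a z + b}{c z + d}, \qquad ad - bc \neq 0

% Special case: the bilinear (trapezoidal-rule) discretisation with step T,
% which maps the unit circle |z| = 1 onto the imaginary axis s = j\omega
s \;=\; \frac{2}{T}\,\frac{z - 1}{z + 1}
```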

Lecture. Location: Stage (317), Time: 2:20p

 

Anticipating Tempo Changes: Perceptual and Cognitive Processes

Emily Graber, Takako Fujioka

Actively anticipating a gradual tempo change in an isochronous beat stimulus is a top-down process that we hypothesize will affect beta-band power modulations in the brain. Previous research has shown that the rate of event-related synchronization in beta-band power is related to the tempo of the isochronous beat. Additionally, imagining a meter in the isochronous beat affects the beta-band power modulation depth. We present preliminary findings about the perceptual process of temporal anticipation and beta-band power modulations recorded via EEG.

Poster. Location: Neuromusic (103), Time: all day

 

Audity

Matt Horton, Andrew Forsyth

Audity is an iPhone app that lets you drop sounds in physical space. You can discover sounds that have been left around you, spatialized based on your proximity.

Poster / Demo. Location: PhD desk area (207) and Courtyard, Time: all day

 

Neural processing of multiple melodic voices: The role of motif identity

Madeline Huberth, Takako Fujioka

Musical textures can contain one or multiple motifs that repeat in different melodic lines (voices). The present study examines whether differentiation between upper and lower voices is enhanced when each voice has its own motif. To test this, we recorded electroencephalogram (EEG) data from 16 musicians and compared neural encoding across voices and the number of motifs present. The results suggest that voices are encoded differently when one versus two motifs are present, though encoding strength seems to depend on the content of the motifs themselves.

Poster. Location: Neuromusic (103), Time: all day

 

Music of the Spheres

Madeline Huberth

No sound in space, you say? In this planetary system, there is! Immerse yourself in an interpretation of the "Music of the Spheres". The relationship of the moons to the planet, as well as how fast the moons rotate on their own axes, is mapped to FM parameters, sonifying the system you're viewing. Planets and moons pulse brightly for every 'day' that passes on their surface, and the moons occasionally rearrange themselves, creating a new soundscape. Don't miss the musical comets!
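A tiny sketch of this kind of mapping, with orbital quantities driving the parameters of Chowning-style FM synthesis; the particular mapping from orbital and rotation periods to carrier, modulator, and index below is invented for illustration and is not the installation's actual mapping.

```python
import numpy as np

def fm_tone(carrier_hz, mod_hz, index, fs=48000, dur=2.0):
    """Classic Chowning FM: y(t) = sin(2*pi*fc*t + I * sin(2*pi*fm*t))."""
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * carrier_hz * t + index * np.sin(2 * np.pi * mod_hz * t))

def moon_to_fm(orbital_period_days, rotation_period_days):
    """Hypothetical mapping: orbit sets the carrier/modulator ratio, spin sets the index."""
    carrier = 220.0
    modulator = carrier * (1.0 + 1.0 / max(orbital_period_days, 1.0))
    index = 5.0 / max(rotation_period_days, 0.5)   # faster spin -> brighter spectrum
    return carrier, modulator, index

if __name__ == "__main__":
    fc, fm, idx = moon_to_fm(orbital_period_days=27.3, rotation_period_days=1.0)
    tone = fm_tone(fc, fm, idx)
    print(f"carrier {fc:.1f} Hz, modulator {fm:.1f} Hz, index {idx:.2f}, {len(tone)} samples")
```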

Demo - VR. Location: VR Wonderland (211), Time: all day

 

Studying the effect of audio cues on driver awareness using the Stanford Driving Simulator Audio Engine

Mishel Johns, David Sirkin, Nikhil Gowda, Nik Martelaro, Wendy Ju, Romain Michon, Nick Gang, Matthew Wright, Chris Chafe, Sile O'Modhrain

The Center for Computer Research in Music and Acoustics, in collaboration with the Center for Design Research at Stanford and Renault Silicon Valley, developed the Stanford Driving Simulator Audio Engine based on the Faust programming language. This system is being used to study the effect of audio and visual cues on driver awareness in a simulated autonomous vehicle. This poster describes the audio system and the study design.

Poster. Location: Ballroom (216), Time: all day

 

The Music Engagement Research Initiative

Blair Kaneshiro, Jeffrey C. Smith, Jonathan Berger

The Music Engagement Research Initiative is a multidisciplinary research group at CCRMA headed by Professor Jonathan Berger. We use a variety of data, from brain responses to large-scale industrial datasets, to explore how and why humans engage with music. In this talk we present an overview of our group, our academic and industrial collaborators, current research projects, and a new curriculum track at CCRMA focused on musical engagement.

Lecture. Location: Stage (317), Time: 2:00p

 

A Narrative Framework for Musical Engagement

Blair Kaneshiro, Jonathan Berger

Engaging listeners is an inherent goal of music. However, imprecise terminology regarding 'engagement' results in confusing, even contradictory, research. Here we consider whether narrative transportation and cognitive elaboration – terms used to describe states of response to story-based works such as novels or films – provide a useful theoretical framework for characterizing and quantifying musical engagement. We review recent studies that use inter-subject correlations of neurophysiological responses to measure immersive engagement with narrative works. We then consider the extension of such approaches to the realm of musical engagement through musical features that project temporal trajectories and goals, and may thus manipulate listener expectations in a manner analogous to narrative devices.
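One common way to compute the inter-subject correlations mentioned above is to correlate each subject's response time course with the average of all the other subjects' responses; the sketch below is a generic leave-one-out version of that idea, not the specific method of the studies reviewed, and the simulated data are stand-ins.

```python
import numpy as np

def inter_subject_correlation(responses):
    """Mean leave-one-out correlation across subjects.

    `responses` has shape (n_subjects, n_samples), e.g. one EEG component or
    envelope time course per subject for the same stimulus.
    """
    responses = np.asarray(responses, dtype=float)
    corrs = []
    for i in range(responses.shape[0]):
        others = np.delete(responses, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(responses[i], others)[0, 1])
    return float(np.mean(corrs)), corrs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shared = np.sin(np.linspace(0, 20 * np.pi, 2000))           # shared stimulus-driven signal
    subjects = shared + 0.8 * rng.standard_normal((8, 2000))    # plus per-subject noise
    mean_isc, _ = inter_subject_correlation(subjects)
    print(f"mean inter-subject correlation: {mean_isc:.2f}")
```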

Poster. Location: Ballroom (216), Time: all day

 

The Laptop Accordion

Sanjay Kannan, Aidan Meacham

Imagine you're playing an accordion. Now grab your laptop, and hold it like one. Enter the Laptop Accordion: a new musical interface inspired by the real instrument but requiring only a piece of technology that nearly everyone has.

Demo. Location: MaxLab (201), Time: all day

 

A Virtual Acousmonium for Transparent Speaker Systems

Elliot Kermit-Canfield

Acousmatic diffusion techniques have a long tradition in computer music history. Unfortunately, most acousmonia are large systems that are challenging to maintain, upgrade, transport, and reconfigure. Additionally, their sole task is the diffusion of acousmatic music (music composed specifically to be broadcast through speakers that impart timbral and spatial effects). On the other hand, most computer music centers have incorporated multichannel sound systems into their studio and concert setups. In this talk, we propose a virtual acousmonium that decouples an arbitrary arrangement of virtual, colored speakers from a transparent speaker system that the acousmonium is projected through. Using ambisonics and an appropriate decoder, we can realize the virtual acousmonium on almost any speaker system. Our software automatically generates a GUI for metering and OSC/MIDI responders for control, making the system portable, configurable, and simple to use.
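As a bare-bones illustration of the encoding side of such a system, the sketch below writes a mono "virtual speaker" feed into first-order B-format using the traditional FuMa weighting; the actual acousmonium's ambisonic order, decoder, metering GUI, and OSC/MIDI control are not represented, and all parameter choices here are assumptions.

```python
import numpy as np

def encode_first_order_bformat(signal, azimuth_deg, elevation_deg):
    """Encode a mono 'virtual speaker' signal into first-order B-format (W, X, Y, Z).

    Uses the traditional FuMa weighting, where W carries a 1/sqrt(2) factor.
    Azimuth is measured counter-clockwise from the front.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal * (1.0 / np.sqrt(2.0))
    x = signal * np.cos(az) * np.cos(el)
    y = signal * np.sin(az) * np.cos(el)
    z = signal * np.sin(el)
    return np.stack([w, x, y, z])

if __name__ == "__main__":
    fs = 48000
    t = np.arange(fs) / fs
    virtual_speaker = np.sin(2 * np.pi * 440 * t)    # one "colored" speaker's feed
    bformat = encode_first_order_bformat(virtual_speaker, azimuth_deg=45, elevation_deg=10)
    print(bformat.shape)   # (4, 48000): ready for any first-order decoder
```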

Lecture / Demo. Location: Stage (317), Time: 10:40a

 

CCRMA video reel

Dave Kerr

 

Installation. Location: Studio C (107) and Museum (116), Time: all day

 

Max Lab facilities display

Sasha Leitman

The Max Lab is the hub of what we call Physical Interaction Design at CCRMA. Named after Max Mathews, the Max Lab is where we focus on hardware and software interfaces for interacting with sound.

Lab demo. Location: MaxLab (201), Time: all day

 

Kuramoto Cycles

Nolan Lem

Kuramoto Cycles reveals a network of coupled oscillators inspired by a mathematical model of synchronization developed by the Japanese physicist Yoshiki Kuramoto (b. 1940). These visual and auditory particles exhibit the synchronistic behaviors inherent in many biological and chemical systems (e.g., firefly synchronicity, bioluminescent algae, pacemaker cells). This piece examines the self-organizing behaviors that emerge as a result of their communicative interplay. The outcome of the piece is entirely contingent, each oscillator bearing a role in the behavior of all the others in the group. The sonic and visual swarms that result can be directed, influenced, and provoked but are ultimately non-deterministic, non-linear, and communistic.
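The referenced model is the Kuramoto system of coupled phase oscillators, dθᵢ/dt = ωᵢ + (K/N) Σⱼ sin(θⱼ − θᵢ). The minimal simulation below (not the installation's software) shows how a population of oscillators with random frequencies and phases drifts toward synchrony; all parameter values are chosen arbitrarily for illustration.

```python
import numpy as np

def simulate_kuramoto(n=50, coupling=1.5, dt=0.01, steps=5000, seed=0):
    """Integrate the Kuramoto model: dtheta_i/dt = w_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(loc=1.0, scale=0.2, size=n)       # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, size=n)            # random initial phases
    order = []
    for _ in range(steps):
        phase_diff = theta[None, :] - theta[:, None]     # element [i, j] = theta_j - theta_i
        theta = theta + dt * (omega + (coupling / n) * np.sin(phase_diff).sum(axis=1))
        order.append(np.abs(np.exp(1j * theta).mean()))  # r = 1 means full synchrony
    return np.array(order)

if __name__ == "__main__":
    r = simulate_kuramoto()
    print(f"order parameter: start {r[0]:.2f} -> end {r[-1]:.2f}")
```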

Installation. Location: Seminar Room (315), Time: all day

 

Music Analysis Through Visualization

Jia Li, Craig Sapp

We present analytic visualizations for selectively highlighting salient musical features, examining either micro or macro structures in each piece, from motivic pitch contour to large-scale form. At a glance, these visualizations allow a quick grasp of the structure and help a listener make connections between local features and global trends. Pitch, timbre, and voicing are plotted against time to show large-scale patterns that would otherwise be difficult to intuit from a musical score or to compare across different works. Music analysis through compositional data visualization makes sense not only to musicians but also to non-musicians, facilitating collaboration and exchange with artists and technicians in other media.

Poster. Location: Ballroom (216), Time: all day

 

Musical Analysis of Sonified Data

Ben Lovell, Cameron Turner

Sonification of the 2008 market crash financial data, and analysis of the resulting sound using Shazam.

Poster. Location: Ballroom (216), Time: all day

 

The *SpHEAR project, a family of parametric 3D printed soundfield microphone arrays

Fernando López-Lezcano

This is an evolving family of 3D printed, GPL/CC licensed soundfield microphone designs. The microphone assembly is 3D printed as separate parts, one for each capsule holder plus a microphone mount. The capsule holders interlock like a 3D puzzle to create the microphone assembly. This strategy was chosen so the parts can be printed flat and without overhangs, allowing them to be printed on low- to medium-priced 3D printers that use fused filament fabrication technology. The 3D models currently include the TinySpHEAR, a four-capsule tetrahedral microphone; the Octathingy, an eight-capsule design; and the BigSpHEAR 12- and 20-capsule proof-of-concept platonic solid models. The models are written in OpenSCAD and are completely parametric.

Demo. Location: Listening Room (128), Time: all day

 

Listening Room Unhinged

Fernando López-Lezcano

The Listening Room was born in 2005 when The Knoll was renovated. It was conceived as a full 3D studio space with speakers above the listening space, around it and even below the acoustically transparent floor. It was upgraded to 22 channels in 2011 and boasts a custom diffusion system designed and built in-house. Come listen to music composed to use space fully, from several flavors of Dark Side of the Moon to acousmatic music spread out in 32 channels, from a 3D pop song remix by Jay Kadis, our audio engineer, to weird soundscapes that envelop you completely and literally take you places unheard.

Demo. Location: Listening Room (128), Time: all day

 

El Dinosaurio

Fernando López-Lezcano

Home-made analog synthesizer from the old days.

Demo - pettable instrument. Location: MaxLab (201), Time: all day

 

Public Access

Jessie Marino

(Continuous installation; bathrooms usable.)

Installation. Location: 2nd floor Bathrooms (203&204), Time: all day

 

A hybrid method combining the edge source integral equation and the boundary element method for scattering problems

Sara R. Martín, U. Peter Svensson, Jan Šlechta, Julius O. Smith

A recently developed edge source integral equation (ESIE) diffraction method is combined with the boundary element method (BEM) to model the scattered sound from convex, rigid objects at an attractive computational cost. The hybrid method suggested here has the same structure as the BEM: a first step in which the sound pressure is calculated on the surface of the scattering object using the ESIE, and a second step in which the scattered sound is obtained at any external receiver point by means of the Kirchhoff-Helmholtz integral equation. Different benchmark cases are studied and the results are compared to reference methods.
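For reference, the second step relies on the standard Kirchhoff-Helmholtz integral; in the rigid (sound-hard) case the normal derivative of pressure vanishes on the surface, leaving only the surface-pressure term computed in the first step. This is the textbook form, stated up to the usual sign and normal-direction conventions, not the paper's specific discretisation.

```latex
% Free-field Green's function
G(\mathbf{r}, \mathbf{r}_s) = \frac{e^{jk|\mathbf{r} - \mathbf{r}_s|}}{4\pi |\mathbf{r} - \mathbf{r}_s|}

% Kirchhoff-Helmholtz integral for the pressure at an exterior receiver point r
p(\mathbf{r}) = \int_{S} \left[ p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r}, \mathbf{r}_s)}{\partial n}
              - G(\mathbf{r}, \mathbf{r}_s)\,\frac{\partial p(\mathbf{r}_s)}{\partial n} \right] dS

% Rigid scatterer (Neumann condition): \partial p / \partial n = 0 on S, so
p_{\mathrm{scat}}(\mathbf{r}) = \int_{S} p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r}, \mathbf{r}_s)}{\partial n}\, dS
```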

Poster. Location: DSP Wonderland (305), Time: all day

 

Toward an Open-Source Ribbon Microphone

Aidan Meacham

This project is in pursuit of an inexpensive and open-source ribbon microphone, with a focus on readily available materials and a simple build process. Users only need to purchase foil, magnets, foam windscreen, a transformer, an XLR plug, a sheet of acrylic, and a couple of nuts and bolts to assemble a unique and educational microphone. Acrylic makes an excellent platform, offering flexibility for those with access to a laser cutter and simplicity for those who prefer to send away for the precut parts from an appropriate cutting service.

Demo. Location: VR Wonderland (211), Time: all day

 

The Chanforgnophone, the BladeAxe, the PlateAxe, Nuance and the Quinarelle Orchestra

Romain Michon

Romain Michon will be presenting a series of musical instruments and art installations that he worked on during the past four years: the Chanforgnophone, the BladeAxe, the PlateAxe, Nuance and the Quinarelle Orchestra. Some of them leverage the concept of hybrid lutherie by combining acoustical and virtual elements. Others are based on "augmented" mobile devices.

Demo - pettable instrument. Location: MaxLab (201), Time: all day

 

Granuleggs

Trijeet Mukhopadhyay, Alison Rush, David Grunzweig

The Granuleggs is a new music controller for granular synthesis which allows a musician to explore the textural potential of their samples in a unique and intuitive way, with a focus on creating large textures instead of distinct notes. Each controller is egg shaped, designed to fit the curve of your palm as you gyrate the eggs and tease your fingers to find yourself the perfect soundscape.

Demo - pettable instrument. Location: MaxLab (201), Time: all day

 

Nothing is real...ity

Chryssie Nanou

This project explores the musical work "Nothing is Real (Strawberry Fields)" by composer Alvin Lucier for piano and amplified teapot. In the original version, Lucier notates a monophonic piano line derived from the Lennon/McCartney Beatles song "Strawberry Fields Forever". The performance is recorded in real time, then played back through a small speaker inside a real teapot. The performer is then instructed to move the teapot lid up and down, enhancing specific frequencies (each notated in the score) by changing the resonance of the teapot as a Helmholtz resonator. In this iteration, Lucier's teapot is recreated virtually in Unity3D, and the performer controls the height of the teapot's lid using a Leap Motion controller. The height of the lid is sent over Open Sound Control to ChucK, where the value is used to drive the frequency of a ResonZ filter. Lucier's specified frequencies are loaded into an array and used to set the frequency of a BlowBotl, with the pitch of the lid used to select the current frequency.

Demo - pettable instrument. Location: MaxLab (201), Time: all day

 

Automatic Music Structure Segmentation via Deep Learning

Tim O'Brien, Blair Kaneshiro

We approach the task of automatically segmenting music according to its formal structure, considering the rapidly evolving application of convolutional neural networks (CNNs). As CNNs have revolutionized the field of image recognition, especially since 2012, we investigate the current and future possibilities of such an approach for music, and specifically for the task of structure segmentation. We implement a straightforward example of such a system and discuss its preliminary performance as well as future opportunities.
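A toy sketch of the kind of model described: a small convolutional network that classifies whether a log-spectrogram patch is centered on a structural boundary. The framework (PyTorch), architecture, patch size, and training details are all illustrative assumptions, not the authors' system.

```python
import torch
import torch.nn as nn

class BoundaryCNN(nn.Module):
    """Tiny CNN over (1, n_mels, n_frames) spectrogram patches -> boundary logit."""
    def __init__(self, n_mels=80, n_frames=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_mels // 4) * (n_frames // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),          # logit for "this patch is centered on a boundary"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = BoundaryCNN()
    patches = torch.randn(8, 1, 80, 64)                     # stand-in spectrogram patches
    labels = torch.randint(0, 2, (8, 1)).float()            # 1 = boundary, 0 = not
    loss = nn.BCEWithLogitsLoss()(model(patches), labels)   # one training-style forward pass
    loss.backward()
    print(f"example loss: {loss.item():.3f}")
```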

Poster. Location: Ballroom (216), Time: all day

 

Formant-Wave-Function Synthesis using Second-Order Filters

Michael Jørgen Olsen, Julius Smith, Jonathan Abel

This demonstration will showcase Formant-Wave-Function (FOF) synthesis and its ability to create realistic-sounding vocal synthesis. The demonstration will consist of a FOF software program created with the FAUST audio programming language. The software will be able to generate different vowel sounds in different vocal ranges using the FOF synthesis technique.
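As a loose illustration of the related idea of building vowels from resonances, the sketch below drives a parallel bank of second-order resonators at formant frequencies with a glottal-rate impulse train, in Python/scipy rather than FAUST; it is not true FOF (overlapping-grain) synthesis, and the formant values are rough textbook figures.

```python
import numpy as np
from scipy.signal import lfilter

def resonator_coeffs(freq, bandwidth, fs):
    """Two-pole resonator at `freq` Hz with approximate bandwidth in Hz."""
    r = np.exp(-np.pi * bandwidth / fs)
    a = [1.0, -2.0 * r * np.cos(2 * np.pi * freq / fs), r * r]
    b = [1.0 - r]                    # crude peak-level normalization
    return b, a

def vowel(f0=110.0, formants=((700, 80), (1220, 90), (2600, 120)), fs=48000, dur=1.0):
    """Drive a parallel bank of second-order resonators with an impulse train at f0."""
    n = int(fs * dur)
    excitation = np.zeros(n)
    excitation[::int(fs / f0)] = 1.0                 # glottal-rate impulse train
    out = np.zeros(n)
    for freq, bw in formants:
        b, a = resonator_coeffs(freq, bw, fs)
        out += lfilter(b, a, excitation)
    return out / np.max(np.abs(out))

if __name__ == "__main__":
    y = vowel()                                      # roughly an "ah"-like vowel
    print(len(y), float(y.min()), float(y.max()))
```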

Demo. Location: DSP Wonderland (305), Time: all day

 

ofxChucK: Rapid Graphics + Audio Prototyping in OpenFrameworks + ChucK

Tim O’Brien, Zhengshan Shi, Madeline Huberth, Jack Atherton, Sanjay Kannan, Ben Williams, Gina Gu, Adebia Ntoso, Ge Wang

ofxChucK is an OpenFrameworks (OF) addon that enables you to design interactive computer music software with ChucK! It adds several features to ChucK:

  • VREntity: an object you can perform generic operations on
  • 3D and 4D vectors
  • VREntity properties: location, scale, color, rotation
  • eval: interpret new commands in OF for rapid prototyping
  • hierarchical relationships between objects in the scene
  • manipulate time and generate audio as you normally would in a ChucK script
  • access to a shared ChucK + OF database
  • sync with the framerate

Poster. Location: VR Wonderland (211), Time: all day

 

Music syntactic processing is influenced by integration of local and global harmonic structures: an ERP study

Irán Román, Takako Fujioka

In listening to Western tonal music, an unexpected out-of-key chord is known to elicit an ERP component called Early Right Anterior Negativity (ERAN) at right frontal electrodes, compared to a standard in-key chord. However, in more realistic musical pieces, the sense of key can constantly move from one key to another, closely related one. Such movements typically follow the global rule called the 'circle of 5ths', which describes the relationship between two keys sharing most chords and scale tones. We recorded EEG from 12 participants to examine whether the ERAN to the out-of-key chord is reduced when preceding local patterns follow the global rule. We examined three conditions: Control (as in previous ERAN studies), Sequential, and Non-Sequential, all of which contained the same out-of-key chord preceded by different chord patterns. The Sequential condition presented three repetitions of a local pattern including the out-of-key chord, while moving through different keys following the global rule. In contrast, the Non-Sequential condition presented the same local pattern three times without following the global rule; this created jumps across unrelated keys. Compared to the Control condition, the ERAN in the Sequential condition was left-lateralized and delayed by about 50 ms. This suggests that the integration of local and global information for successful key motions may require left frontal neural resources, compared to simple processing of an out-of-key chord. Furthermore, a right-frontal positivity with a latency around 365 ms was found in the Non-Sequential condition, perhaps more related to local pattern violation than to syntax processing.

Poster. Location: Neuromusic (103), Time: all day

 

A Brief History of Physical Modeling

Pat Scandalis, Julius Smith

A brief history of musical physical modeling will be presented along with sound examples and demonstrations on mobile devices. We are now in a place where each of us can be Jimi Hendrix with just a small device in the palm of our hands. It's a fun and deeply technical topic drawing on many fields, including physics, acoustics, digital signal processing, and music.

Lecture / Performance. Location: Stage (317), Time: 3:40p

 

emulator

Charlie Sdraulig, Sam Alexander

An accelerometer and a piezo contact microphone attached to the cymbal with magnets send data via a repurposed MIDI cable to an Arduino and an audio interface respectively. Changes in the accelerometer x-y-z values alter the millisecond length of sample playback within Max/MSP as well as the step of a random walk through the sample. The larger the change in sensor values from the cymbal’s resting position, the smaller the length and step values. Via envelope following, the piezo measures the amplitude of the player’s input to determine the gain of sample playback. Additionally, the piezo triggers samples of like amplitude to the live input. The end result is a generative sample-based context for the player to exist within and influence, or be influenced by in turn. The sensors were designed and built in collaboration with Sam Alexander. The composition you will hear today is by Charlie Sdraulig.

Musical Composition. Location: Stage (317), Time: 2:40p

 

Music Visualization in Virtual Reality

Zhengshan Shi

The progression of music is like time traveling. This demo presents a music tunnel - an immersive music visualizer in the Virtual Reality world where you can look into the past and the future of the music.

Demo - VR. Location: VR Wonderland (211), Time: all day

 

GeoShred - six physically modeled strings + FX in an iPad

Julius Smith, Nick Porcaro, Pat Scandalis

GeoShred is a musical instrument that offers a performance environment on a multi-touch surface, with physical modeling synthesis under the hood. GeoShred lets you create music with expressive physical modeling synthesis: you can achieve realistic guitar sounds, and also bend, stretch, and manipulate the sound into endless possibilities. GeoShred's unique, responsive user interface for performance and extensive sound design enables you to directly control dozens of model parameters and effects. With GeoShred, you have the power to make music that comes alive with expression, real controllable feedback, finger vibratos, note slides, power chords, auto-arpeggios, and much, much more. GeoShred features include:

  • Physically modeled guitar sound
  • Highly expressive playing surface
  • Modeled feedback
  • Multiple modeled effects
  • Built-in arpeggiator
  • Finger vibrato and slide
  • Extensive editing capabilities
  • Customizable control surface
  • Alternate tuning and interval support
  • Intelligent pitch rounding
  • Support for Inter-App Audio as well as Audiobus
  • Support for AirTurn devices for preset changes
  • Easy sharing of presets with friends

Demo. Location: JOS Lab (306), Time: all day

 

GeoShred on the iPad 3

Julius Smith, Nick Porcaro, Pat Scandalis

This is GeoShred on an iPad 3 (see demo in JOS Lab for details) for everyone to try in the Musical Instrument Petting Zoo.

Demo - pettable instrument. Location: MaxLab (201), Time: all day

 

A Sound Defense

Byron Walker

Taking place in the listening room, players must react quickly and accurately to hear where musical lasers are coming from in order to deflect them in time. As you start to combo, music fills the room. Perform poorly, and the music will dwindle until it's snuffed out, ending the game. Come try!

Demo. Location: Listening Room (128), Time: all day

 

FAUST => ChucK => FaucK!

Ge Wang, Romain Michon

We present the latest abomination in music programming languages -- FaucK 2.0 -- which combines the succinct and powerful Functional AUdio STream (FAUST) language with the "strongly-timed" ChucK audio programming language! FaucK allows programmers to on-the-fly evaluate FAUST code directly from ChucK and control FAUST signal processors using ChucK's flexible, sample-precise timing and concurrency mechanisms. The goal is to create an amalgam that plays to the strengths of each language, giving rise to new possibilities for rapid prototyping, interaction design and controller mapping, pedagogy, and new ways of working with both FAUST and ChucK.

Demo. Location: MaxLab (201), Time: all day

 

Recent Advances in Wave Digital Filter Theory

Kurt James Werner

Recent advances in Wave Digital Filter theory have pushed virtual analog modeling forward. Come hear about the theoretical underpinnings of state-of-the-art models of tube amplifiers, guitar effect pedals, and other vintage audio gear.

Poster / Demos. Location: DSP Wonderland (305), Time: all day

 

WDF Simulation of the Hammond Vibrato and of the Fender Bassman Preamp

Kurt James Werner, Ross Dunkel

 

Demo. Location: DSP Wonderland (305), Time: all day

 

Jog-Meend

Matthew Wright

 

Location: MaxLab (201), Time: all day

 

Dual Piano EEG Demonstration

Matthew Wright, Nick Gang, Wisam Reid, Madeline Huberth, Takako Fujioka

Prof. Takako Fujioka's NeuroMusic lab at CCRMA was specially constructed to have enough EEG channels and physical volume to perform brainwave-monitoring experiments on two people at the same time. This opens up vast possibilities to investigate joint musical listening and performance beyond the classical single-subject paradigm. Our first experimental platform supports investigation of dual-keyboard tasks in which a pair of subjects alternate short sequences of notes, taking turns to produce a full melody. Importantly, this class of experiments involves measuring a person's reaction to "deviant notes", where even though a pianist participant might play the correct key, a different note sounds. How does your brain react when you or your duet partner is "falsely accused" of a wrong note? And, how does your reaction change based on the musical material you and your partner are playing? An actual experimental session takes about 90 minutes and leaves your hair gooey with conductive gel, so we won't find out today. Instead, come play the keyboard with a friend or with the computer as your partner to see what it's like, and keep your brainwaves to yourself. Also marvel if you will at the good-sized Max patch that mediates the whole experiment, keeping track of shuffled trial conditions, practice trials, skynet mode, following the melody, detecting actual wrong notes, only deviating certain notes, avoiding polyphony, and sending trigger codes to the EEG-recording machine.

Lab demo. Location: Neuromusic (103), Time: all day

 

 

