CCRMA Open House 2017
On Friday March 3, we invite you to come see what we've been cooking up at the Knoll!
Join us for lectures, hands-on demonstrations, posters, installations, and musical performances of recent CCRMA research including digital signal processing, data-driven research in music cognition, and a musical instrument petting zoo.
Schedule overview
10-12: Presentations throughout the Knoll
12-1:30: [lunch break]
1:30-2:20: Keynote lecture: Ryan Groves "Building Artificially Intelligent Musical Composers"
2-5: Presentations throughout the Knoll
Facilities on display
Neuromusic Lab: EEG, motion capture, brain science...
Listening Room: multi-channel surround sound research
Max Lab: maker / physical computing / fabrication / digital musical instrument design
Hands-on CCRMA history museum
JOS Lab: DSP research
CCRMA Stage: music and lectures
Recording Studio
Schedule for the Stage: Lectures and a Concert
Lectures
10:40-11: Fernando Lopez-Lezcano, Christopher Jette `Enhanced Diffusion on the CCRMA Stage'
11:00-11:20: Gregory Pat Scandalis `A Brief History Of Musical Synthesis'
11:20-11:40: Prof. Doug James `Advances in Physics-Based Sound Synthesis'
11:40-12: Madeline Huberth, Nolan Lem, Tysen Dauer `Music 251 Course Overview and Final Projects'
[lunch break]
1:30-2:20: Ryan Groves `Building Artificially Intelligent Musical Composers' (Keynote lecture)
2:20-2:40: Blair Kaneshiro, Jonathan Berger `The Music Engagement Research Initiative'
Concert (3:00-4:20)
Holly Herndon Chorus (2014)
Matthew Wright, Alex Chechile, Julie Herndon, Mark Hertensteiner, Christopher Jette, Justin Yang Feedback Network (2017)
John Chowning stria (1977)
Chryssie Nanou Duet for One Pianist, Eight Sketches for MIDI Piano and Computer by Jean-Claude Risset (1989)
Constantin Basica Chapter 31, pages 415-926 (performed by the JACK Quartet and the Spektral Quartet) (2016)
Eoin Callery From Strands To Strings (2017)
(after the concert): Hands-on demo of the interactive piano from Duet for One Pianist
List of Exhibits (lectures, performances, demos, posters...)
Alphabetical by author
Hysteria
Rahul Agnihotri
This project aims to demonstrate the use of fundamental frequency estimation as a gateway to creating a non-interactive system in which computers communicate with each other using musical notes. Frequency estimation is carried out with the YIN algorithm, implemented via the ofxAubio add-on in openFrameworks. The system simulates a conversation between people at a round-table event, but with computers as the participants.
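A minimal sketch of the pitch-tracking step, assuming the aubio Python bindings rather than the exhibit's ofxAubio/openFrameworks implementation; the input file name and the "reply" rule are hypothetical:

```python
# Hedged sketch: YIN pitch tracking with the aubio Python bindings, mapping
# detected pitches to notes that a second "machine speaker" could answer.
import aubio

samplerate, hop_size = 44100, 512
src = aubio.source("speaker_a.wav", samplerate, hop_size)  # hypothetical input file
yin = aubio.pitch("yin", 2048, hop_size, samplerate)
yin.set_unit("midi")
yin.set_tolerance(0.8)

detected = []
while True:
    samples, read = src()
    midi = yin(samples)[0]
    if yin.get_confidence() > 0.8 and midi > 0:
        detected.append(int(round(midi)))
    if read < hop_size:
        break

# A trivial "reply": echo the other machine's phrase transposed up a fifth.
reply = [note + 7 for note in detected]
print(reply[:16])
```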
Poster/Demo. Location: Recording Studio (124), Time: all day
I Said it First: Topological Analysis of Lyrical Influence Networks
Jack Atherton, Blair Kaneshiro
We present an analysis of musical influence using intact lyrics of over 550,000 songs, extending existing research on lyrics through a novel approach using directed networks. We form networks of lyrical influence over time at the level of three-word phrases, weighted by tf-idf. An edge reduction analysis of strongly connected components suggests highly central artist, songwriter, and genre network topologies. Visualizations of the genre network based on multidimensional scaling confirm network centrality and provide insight into the most influential genres at the heart of the network. Next, we present metrics for influence and self-referential behavior, examining their interactions with network centrality and with the genre diversity of songwriters. Here, we uncover a negative correlation between songwriters’ genre diversity and the robustness of their connections. By examining trends among the data for top genres, songwriters, and artists, we address questions related to clustering, influence, and isolation of nodes in the networks. We conclude by discussing promising future applications of lyrical influence networks in music information retrieval research. The networks constructed in this study are made publicly available for research purposes.
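As a rough illustration of the network construction (not the authors' pipeline or data), the sketch below links two hypothetical songs whenever the later one reuses a three-word phrase from the earlier one, with a crude tf-idf-style edge weight:

```python
# Conceptual sketch: a tiny directed "influence" network where an edge A -> B
# means a three-word phrase from artist A's earlier song reappears in artist B's
# later song. The corpus and weighting here are toy stand-ins.
import itertools
import math
from collections import Counter
import networkx as nx

songs = [  # hypothetical toy corpus: (artist, year, lyrics)
    ("artist_a", 1990, "love me tender love me true"),
    ("artist_b", 2001, "love me tender in the rain"),
]

def trigrams(text):
    words = text.split()
    return [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]

# Document frequency of each phrase, for a crude tf-idf-style weight.
df = Counter()
for _, _, lyrics in songs:
    df.update(set(trigrams(lyrics)))

G = nx.DiGraph()
for (a, ya, la), (b, yb, lb) in itertools.permutations(songs, 2):
    if ya >= yb:
        continue  # influence only flows forward in time
    shared = set(trigrams(la)) & set(trigrams(lb))
    if not shared:
        continue
    weight = sum(math.log(len(songs) / df[p]) + 1 for p in shared)
    G.add_edge(a, b, weight=weight)

print(G.edges(data=True))
```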
Poster/Demo. Location: Ballroom (216), Time: all day
Chapter 31, pages 415-926
Constantin Basica, Performed by the JACK Quartet & the Spektral Quartet
"Chapter 31, pages 415-926" is a piece for string octet and video, which was inspired by the shape of a circle and the concept of infinity. It is said that the number known as Pi, which is believed to have an infinite non-repeating sequence of digits, could include all possible combinations of numbers, and that one could find all the history of the universe in it. For example, looking through the infinite digits and converting the numbers to ASCII text or bitmap information, one could find all the texts ever written and all the images that ever existed. For this piece, specific strings from Pi’s digits were extracted and filtered algorithmically in Max, then applied in MaxScore to a series of one hundred chords in order to generate musical material. The same digits were used to determine the length of each video clip. All the video material was shot in California and represents a diversity of banal imagery, which reinforces the idea of Pi encompassing everything—even things that are usually overlooked.
Recorded 8-channel string octet performance with video. Location: Stage (317), Time: Concert (3:00-4:20) piece 5/6
The Contrenot
Paul Batchelor
The Contrenot is a musical interface designed to emulate the mechanics of a bowed upright bass. It has a form factor similar to that of an electric upright bass. The neck consists of a high-resolution linear softpot and FSR for monophonic pitch and aftertouch detection. The body houses a pull-string sensor, created from a gutted tape measure spring, a high resolution incremental rotary encoder, and custom 3D-printed parts. The pull-string sensor is able to detect velocity and motion very precisely, allowing for very nuanced control similar to bowing a bass. The Contrenot plugs into a computer to synthesize sound.
Demo - pettable instrument. Location: MaxLab (201), Time: all day
Overview of the Different Audio Coding Schemes Defined in MPEG-1 and MPEG-2
Marina Bosi
Dr. Bosi's course Music 422 "Perceptual Audio Coding" covers the theory and practice of technologies that can represent musical audio with reduced amounts of data by exploiting aspects of human perception. Dr. Bosi has kindly agreed to open this week's lecture to the general public as part of the Open House; feel free to drop in.
Open class session. Location: Classroom (217), Time: 2:30-4:20
From Strands To Strings
Eoin Callery
This piece passes the guitar, and occasionally the laptop's built-in speakers, through a series of overlapping, automated, limited band-pass-filtered feedback patches controlled with SuperCollider. Occasionally the SuperCollider patches are further processed in Logic Audio with EQ, reverb, and Thomas Mundt's amazing Loudmax limiter.
Musical Performance. Location: Stage (317), Time: Concert (3:00-4:20) piece 6/6
The Modeling Shift
Chris Chafe
Music from deep time to the present day, now made with deep networks, charts the growth of artificial tools for sound and music creation. A few chosen milestone examples illustrate how models in the digital age are more than that: they become things we use to make and appreciate music. Instruments, automata, assistants: the art of music has always been an early adopter and a domain that pushes the bounds of complexity.
15-minute Presentation/Lecture. Location: Stage (317), Time: 10:15-10:40
45°30′54″N 25°22′02″E
Alex Chechile, Constantin Basica, Jonathan Abel
"Welcome to my house. Enter freely. Go safely, and leave something of the happiness you bring."
Demo/Preview. Location: Grad Workspace (305), Time: all day
Automatic Music Chords Generation System
Ziheng Chen, Jie Qi, Yifei Zhou
This project applies machine learning techniques to generate chords to accompany a given melody. We compared the performance of different machine learning algorithms and chose a Hidden Markov Model for this project. Using MusicXML-format lead sheets as the training dataset, we calculated chord observation probabilities and chord transition probabilities. We then implemented the Viterbi algorithm to compute the most likely sequence of chords for a new melody.
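A minimal Viterbi sketch under toy probabilities, not the authors' model or their MusicXML-derived tables; the chord set, melody encoding, and probability values here are hypothetical:

```python
# Hedged sketch: most-likely chord sequence for a melody under a toy HMM.
import numpy as np

chords = ["C", "F", "G"]
# Hypothetical probabilities; the project estimates these from lead sheets.
trans = np.array([[0.6, 0.2, 0.2],
                  [0.3, 0.5, 0.2],
                  [0.4, 0.2, 0.4]])
emit = {"C": {"C": 0.5, "E": 0.3, "G": 0.2},
        "F": {"F": 0.5, "A": 0.3, "C": 0.2},
        "G": {"G": 0.5, "B": 0.3, "D": 0.2}}
start = np.array([0.5, 0.25, 0.25])

def viterbi(melody):
    n, k = len(melody), len(chords)
    logp = np.full((n, k), -np.inf)   # best log-probability ending in each chord
    back = np.zeros((n, k), dtype=int)
    for j, c in enumerate(chords):
        logp[0, j] = np.log(start[j]) + np.log(emit[c].get(melody[0], 1e-6))
    for t in range(1, n):
        for j, c in enumerate(chords):
            scores = logp[t - 1] + np.log(trans[:, j])
            back[t, j] = np.argmax(scores)
            logp[t, j] = scores[back[t, j]] + np.log(emit[c].get(melody[t], 1e-6))
    path = [int(np.argmax(logp[-1]))]
    for t in range(n - 1, 0, -1):     # trace back the best predecessors
        path.append(back[t, path[-1]])
    return [chords[j] for j in reversed(path)]

print(viterbi(["C", "E", "F", "G", "B", "C"]))
```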
Poster/Demo. Location: Studio C (107), Time: 10:30-12 and 2-5
Virtual Audio Player -- An Augmented Reality Project
Ziheng Chen
Virtual Audio Player is an augmented reality project implemented in openFrameworks. Users can design and draw their very own audio player with markers, and play with it using fingers. A web camera is used to detect the finger position and audio player elements. The computer screen also shows waveform, slider value and other elements to interact with the drawing.
Poster/Demo. Location: Studio C (107), Time: 10:30-12 and 2-5
stria (1977)
John Chowning
Chowning received one of IRCAM's first commissions from Luciano Berio to compose stria for the institute's first major concert series presented by Pierre Boulez, Perspectives of the 20th Century; the work premiered on October 13, 1977 at the Centre Pompidou. Stria was realized during the summer and autumn of 1977 at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA) on a Foonly F4 (DEC PDP-10) computer.
The composition was reconstructed in 2007 by Kevin Dahan and Olivier Baudouin and described in The Computer Music Journal, Autumn-Winter, 2007 [CMJ 31, 3-4]. The version presented here is by K. Dahan.
The work is based on the unique possibilities in computer synthesis of precise control over the spectral components or partials of a sound. Most of the music we hear is composed of sounds whose partials are harmonic or in the harmonic series. In stria, a non-tonal division of the frequency space is based on a ratio, which is also used to determine the relationships between the inharmonic spectral components. The ratio is that of the Golden Section (or Golden Ratio) from antiquity, 1.618, which in this unusual application yields a certain transparency and order in what would normally be considered "clangorous" sounds. The composition of the work was dependent upon computer program procedures, specially structured to realize the complementary relationship between pitch space (scale) and spectral space (timbre). In addition, these procedures are at times recursive allowing musical events that they describe to include the same events within themselves in a compressed form.
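A small numerical illustration of the ratio at work (not Chowning's synthesis code); the base frequency is arbitrary:

```python
# Partials spaced by powers of the golden ratio form an inharmonic spectrum,
# in contrast to the integer-multiple harmonic series.
PHI = 1.618   # golden ratio, as quoted in the program note
f0 = 200.0    # hypothetical base frequency in Hz

golden_partials = [round(f0 * PHI ** n, 1) for n in range(6)]
harmonic_partials = [f0 * (n + 1) for n in range(6)]

print("golden-ratio spectrum:", golden_partials)    # 200.0, 323.6, 523.6, ...
print("harmonic spectrum:    ", harmonic_partials)  # 200, 400, 600, ...
```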
Seminal computer music composition with historical context. Location: Stage (317), Time: Concert (3:00-4:20) piece 3/6
Guitar Multi-effects
Orchiasma Das
Digital simulation of several classic guitar effects.
Demo - pettable instrument. Location: MaxLab (201), Time: 10:30-12 and 3:30-5
Dynamic Video
Abe Davis, Justin Chen, Fredo Durand, Doug James
One of the most important ways that we experience our environment is by manipulating it: we push, pull, poke, and prod to test hypotheses about our surroundings. By observing how objects respond to forces that we control, we learn about their dynamics. Unfortunately, regular video does not afford this type of manipulation – it limits us to observing what was recorded. In this work we present algorithms for turning regular video of vibrating objects into dynamic videos that users can interact with. By analyzing subtle vibrations in video, we can extract plausible, image-space simulations of physical objects. We show how these simulations can be used for interaction, as well as for low cost special effects and structural analysis.
Poster/Demo. Location: Grad Workspace (305), Time: all day
Ambisonic Mixing Bowl
Nick Gang, Wisam Reid
This project seeks to provide spatial audio artists and engineers with a tactile interface for real-time ambisonic panning. Users move and rotate physical magnetic objects around the surface of an acrylic dome. Positions and angles of rotation are tracked with an under-mounted camera and lighting system, processed with reacTIVision's open source computer vision software, and sent to Spat to control the spatialization. The dome's spherical shape mimics possible source locations surrounding the listener. The sound source symbols inform the user of the state of the system, and allow for immediate changes in three dimensions with one gesture.
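One way to picture the mapping, offered as a hedged sketch rather than the exhibit's reacTIVision/Spat patch: project a tracked marker position on the dome's circular footprint onto a hemisphere to obtain an azimuth/elevation pair for a panner.

```python
# Hedged sketch: dome-surface marker position -> spherical panning coordinates.
import math

def dome_to_azel(x, y):
    """x, y in [-1, 1], measured from the dome's center (hypothetical units)."""
    r = min(math.hypot(x, y), 1.0)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.acos(r))   # center of the dome = straight up
    return azimuth, elevation

print(dome_to_azel(0.0, 0.0))   # (0.0, 90.0): source overhead
print(dome_to_azel(1.0, 0.0))   # (0.0, 0.0): source at ear level, ahead
```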
Poster/Demo. Location: Listening Room (128), Time: 10-11 and 2-3:30
Does Anticipating a Tempo Change Systematically Modulate EEG Beta-band Power?
Emily Graber, Takako Fujioka
Neural oscillatory activities are known to relate to various sensorimotor and cognitive brain functions. Beta-band (13-30Hz) activities are specifically associated with the motor system; they are characterized by a decrease in power prior to and during movement (event-related desynchronization, ERD), and an increase in power after movement (event-related synchronization, ERS). Recent research has shown that listening to isochronous auditory beats without motor engagement also induces beta-band power fluctuations in synchrony with the beat; the rate of ERS is proportional to the beat rate suggesting that ERS may be related to temporal prediction. Top-down processes such as imagining march or waltz meters also induce modulations related to the temporal structure of the particular meter. Here we hypothesize that beta-band modulations may reflect top-down anticipation in other timing-related tasks that commonly occur in music performance. Specifically, we examine how the processes of actively anticipating two types of gradual tempo changes (accelerando and ritardando) are reflected in beta-band dynamics.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
Audio Visual Toy Collection: mooBot, Warthog, vicVortex and Ambscape
Victoria Grace
A collection of plug-in and standalone audio applications I made for sound creation and exploration.
Demo - pettable instruments. Location: MaxLab (201), Time: all day
Building Artificially Intelligent Musical Composers
Ryan Groves
This talk will present some of the challenges faced when trying to build artistic machines. Through his work on Ditty, the automated musical messenger, as well as his work on creating adaptive musical scores for video games at Melodrive, Ryan will highlight the different musical components of the vast topic of automatic musical composition. Given his background in computational music theory, Ryan will emphasize the importance of building and validating machine-learning models that can perform particular musical tasks, and of leveraging those models to create artificially intelligent compositional agents.
Keynote Lecture (50 minutes). Location: Stage (317), Time: 1:30-2:20
ERP Responses to Congruent and Incongruent Audiovisual Pairs of Segments from Transforming Speech to Song Phrases
Julie Herndon, Auriel Washburn, Takako Fujioka
Speech and song are both found within most human societies, with individuals typically capable of differentiating between the two without concerted effort. The Speech-to-Song Illusion occurs when short speech phrases come to be perceived as song after several consecutive repetitions. Notably, this illusion occurs for some speech phrases, but not all, suggesting that there is something unique about the characteristics of ‘transforming’ phrases that leads to the eventual perception of song after repetition. In the current study, several transforming phrases were used to examine the association between the pitch contour and the written form of the words in a shortened segment of the original phrase. Findings indicated that the visual N1 response was greater when pitch contours were presented with congruent written stimuli than with incongruent stimulus pairs.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
Chorus
Holly Herndon
Computer music video. Location: Stage (317), Time: Concert (3:00-4:20) piece 1/6
Hzpiral
Mark Hertensteiner
Hzpiral is a polyphonic playable pitch-space in polar coordinates, developed with openFrameworks and Faust. Angle around the center maps to pitch chroma and radius to pitch height on a rotating playing area. Four custom-set chords are playable from a single touch. Players can control the ADSR envelope, low-pass filter cutoff frequency and resonance, tremolo rate and depth, waveshape mix between sine, sawtooth, square, and triangle waveforms, and pitch quantization, in addition to lockable sustain and sostenuto. See it in action at ccrma.stanford.edu/~hert/Hzpiral/. Experience the visual mathematics of harmony!
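A minimal sketch of the polar-to-pitch mapping described above, assuming a normalized radius and MIDI output; the base note and octave range are hypothetical, and this is not the actual openFrameworks/Faust implementation:

```python
# Hedged sketch: angle selects pitch chroma, radius selects the octave.
import math

def polar_to_midi(angle_rad, radius, base_midi=24, octaves=6):
    chroma = (angle_rad % (2 * math.pi)) / (2 * math.pi) * 12   # 0..12 semitones
    octave = min(int(radius * octaves), octaves - 1)            # 0..octaves-1
    return base_midi + 12 * octave + chroma

print(round(polar_to_midi(0.0, 0.5)))           # chroma 0, mid-range octave
print(round(polar_to_midi(math.pi / 6, 0.5)))   # one semitone higher
```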
Demo - pettable instrument. Location: MaxLab (201), Time: all day
Do Violinists' Gestures Reflect Voice Entries in Implied Polyphony? A Motion Capture Study
Madeline Huberth, Takako Fujioka
Implied polyphony is a musical structure in which a monophonic melody is perceived to outline multiple sources, or `voices'. Performers may express implied polyphony in their sound using dynamics and rubato, but it is currently unknown if musicians also embody the voices. In this motion capture study, we investigate if and how violinists' body motions indicate a change in implied voice at a specific moment in time, as well as characteristics of general motion during passages containing implied polyphony.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
Music 251 Course Overview and Final Projects
Madeline Huberth, Nolan Lem, Tysen Dauer
Music 251, Psychophysics and Music Cognition, aims to introduce students to basic concepts and fundamentals of auditory perception, as well as music perception and cognition. The primary emphasis is placed upon experiencing all the processes of discovery-oriented human behavioural research and core issues in methodology as well as scientific discourse with colleagues. This talk will provide an overview of the course (happening this term), as well as present a selection of students’ proposed final projects.
15-minute Presentation/Lecture. Location: Stage (317), Time: 11:40-12
Performers’ Motions Reflect Their Intention to Express Local or Global Structure in Melody
Madeline Huberth, Takako Fujioka
Performers often must choose to emphasize short melodic groupings, or to more strongly integrate these groupings into a phrase. This study aimed to characterize the nature of motions associated with either choice. We filmed 12 cellists playing a musical excerpt in two conditions in which they were asked to think about either short or long melodic groupings, where groupings in both cases were specified. The results show that, overall, participants’ heads move more frequently when thinking about short groupings compared to long groupings. Our results illustrate that different melodic grouping interpretations by the same performer can be embodied.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
Action-Monitoring in Piano Duet Performances
Madeline Huberth, Tysen Dauer, Iran Roman, Chryssie Nanou, Wisam Reid, Nick Gang, Matthew Wright, Takako Fujioka
Music ensembles involve complex social interactions between players in which coordination of actions and monitoring of outcomes are crucial in achieving joint goals. Previous event-related potential (ERP) studies of joint-action tasks have shown that the feedback-related negativity (FRN) is elicited when the outcome of one’s own and another’s actions is different from what is expected. One of the Neuromusic Lab's present interests is to examine whether the FRN and its observer counterpart (oFRN) differ depending on the strength of the joint goal in a piano duet task. This demonstration shows how our lab elicits the FRN in participants by altering the pitch feedback of piano keyboards.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
Advances in Physics-Based Sound Synthesis
Doug James
Decades of advances in computer graphics have made it possible to convincingly animate a wide range of physical phenomena, such as fracturing solids and splashing water. Unfortunately, our visual simulations are essentially "silent movies" with sound added as an afterthought. In this talk, I will describe recent progress on physics-based sound synthesis algorithms that can help simulate rich multi-sensory experiences where graphics, motion, and sound are synchronized and highly engaging.
15-minute Presentation/Lecture. Location: Stage (317), Time: 11:20-11:40
CCRMA Studio Tour and Demo Reel of Recent CCRMA Recordings
Jay Kadis
Drop-in studio tours and music listening. Location: Control Room (127), Time: 10-12 and 2-3
Using Synchrony of Cortical, Physiological, and Behavioral Responses to Index Listener Engagement with Naturalistic Music
Blair Kaneshiro, Jacek P. Dmochowski, Duc T. Nguyen, Anthony M. Norcia, Jonathan Berger
Temporal reliability of cortical responses, indexed by inter-subject correlation (ISC), has been shown to predict audience engagement with narrative works. Here we present results from two EEG-ISC experiments assessing listener engagement with naturalistic music. Intact musical excerpts produce more consistent cortical components and higher ISC than phase-scrambled controls. Moreover, ISC analyses can also be applied to physiological and continuous behavioral responses. In sum, results suggest that EEG-ISC may be a promising approach to studying responses to full-length musical works in a single-listen experimental paradigm.
Poster/Demo. Location: Ballroom (216), Time: all day
The Music Engagement Research Initiative
Blair Kaneshiro, Jonathan Berger
The Music Engagement Research Initiative is a multidisciplinary research group at CCRMA headed by Professor Jonathan Berger. Through various approaches -- including laboratory studies, analysis of large-scale industrial data, and development of open-source research software and datasets -- we explore how and why people engage with music. In this talk we will present an overview of the group and selected current research projects. More information is available at https://ccrma.stanford.edu/groups/meri/
15-minute Presentation/Lecture. Location: Stage (317), Time: 2:20-2:40
Large-Scale Music Discovery Behavior: Effects of Genre and Geography
Blair Kaneshiro, Lewis Kaneshiro, Casey W. Baker, Jonathan Berger
Music discovery has become a prevalent pastime, yet generalizations of this behavior remain elusive. Here we investigate music discovery on a large scale using data from the audio identification service Shazam. We aggregated geo-tagged United States Shazam queries corresponding to Billboard Top-20 Pop, Country, and R&B/Hip-Hop songs from October 2014 to March 2015. Query locations were labeled by Nielsen Designated Market Areas (DMAs). We visualize chart performance and Shazam query volume over time, as well as examples of geotemporal dynamics of Shazam queries; impact of special events on query volume; and the relationship between query volume and Billboard chart position.
Poster/Demo. Location: Ballroom (216), Time: all day
Music Maker: 3D Printing and Acoustics Curriculum
Sasha Leitman, John Granzow
Music Maker is a free online resource that provides files for 3D printing woodwind and brass mouthpieces and tutorials for using those mouthpieces to learn about acoustics and music. The mouthpieces are designed to fit into standard plumbing and automobile parts that can be easily purchased at home improvement and automotive stores. The result is a musical tool that can be used as simply as a set of building blocks to bridge the gap between our increasingly digital world of fabrication and the real-world materials that make up our daily lives.
An increasing number of schools, libraries and community groups are purchasing 3D printers but many are still struggling to create engaging and relevant curriculum that ties into academic subjects. Making new musical instruments is a fantastic way to learn about acoustics, physics and mathematics.
Poster/Demo. Location: MaxLab (201), Time: all day
The SpHEAR Project, a Family of 3D Surround Microphone Arrays
Fernando Lopez-Lezcano
The *SpHEAR (Spherical Harmonics Ear) project is an evolving family of 3D printed, GNU General Public License/Creative Commons licensed soundfield microphones that include hardware designs for the microphone array and interface electronics, and all the software needed to perform an automated and accurate calibration of the finished microphone array. Ambisonics surround recording and processing technology, which the SpHEAR family embodies, is now widely recognized as being a perfect fit for the requirements of the exploding field of Virtual Reality. Come and look at the current finished prototypes and listen to examples of real recordings.
Poster/Demo. Location: Listening Room (128), Time: 11-12 and 3:30-5
Enhanced Diffusion on the CCRMA Stage
Fernando Lopez-Lezcano, Christopher Jette
The Stage is getting more speakers, each with an individual address. We will keep the existing 16.8 system in place and add 32 more speakers in order to create a 5th-order ambisonic dome. Our talk will be an update on the technical and logistical aspects of this upgrade, with details on our hanging prototypes and the challenges encountered.
15-minute Presentation/Lecture. Location: Stage (317), Time: 10:40-11
Classification of Auditory Stimuli Using Auditory Evoked Potentials
Steven Losorelli, Blair Kaneshiro, Gabriella Musacchia, Matthew Fitzgerald, Nikolas Blevins
An open question in cochlear implant research is whether users receive auditory signals needed to discriminate sound, but have not yet learned to interpret them; or whether they do not receive the auditory cues necessary to discriminate certain acoustical features. With children and difficult cases, this makes the implant fitting process challenging for audiologists, potentially limiting patient performance. Patients may therefore benefit from an objective method of auditory discrimination.
In this study, we demonstrate that classification of scalp-recorded auditory evoked potentials can provide an objective measure of hearing discrimination in normal hearing individuals. EEG classification performed on single-trial cortical responses and group-averaged envelope frequency following responses (FFR) recorded at the auditory brainstem yielded statistically significant above-chance classification accuracy.
Poster/Demo. Location: Ballroom (216), Time: all day
Representation of Musical Beat in Scalp Recorded EEG Responses: A Comparison of Spatial Filtering Techniques
Steven Losorelli, Blair Kaneshiro, Jonathan Berger
Humans have the innate ability to perceive an underlying beat in complex musical signals. Despite the ease and speed with which our brains are able to accomplish this task, the neural mechanisms underlying the cognitive processing of beat and meter are unknown, and computational extraction of beat and other acoustical features from audio remain open topics of research.
Steady-state evoked potentials (SS-EPs) are natural brain responses to visual or auditory information at specific frequencies. In the realm of music, recent studies have shown that neural entrainment to beat and meter can manifest through a steady-state evoked potential captured in scalp-recorded EEG data.
Here we extend this research and take a spatial-filtering approach to analyzing EEG responses to musical stimuli with a steady beat. Spatial filtering techniques derive linear weightings of electrodes according to specific criteria. The present analysis focuses on two such techniques: Principal Components Analysis (PCA), which maximizes variance, and Reliable Components Analysis (RCA), which maximizes mutual correlation. Using these techniques, it may thus be possible to consolidate beat-related cortical activity into lower-dimensional subspaces of the data.
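For readers unfamiliar with spatial filtering, here is an illustration of the PCA variant on random stand-in data; this is not the authors' EEG pipeline or their RCA implementation:

```python
# Hedged sketch: PCA over electrodes yields linear weightings that maximize
# captured variance, collapsing multichannel data onto a few component time courses.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_samples = 32, 5000
eeg = rng.standard_normal((n_electrodes, n_samples))   # stand-in for EEG data

cov = np.cov(eeg)                       # electrode-by-electrode covariance
eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
w = eigvecs[:, -1]                      # weights of the top spatial component
component = w @ eeg                     # one component time course

print(w.shape, component.shape)         # (32,) (5000,)
```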
Poster/Demo. Location: Ballroom (216), Time: all day
Physical model of a drum using FDTD schemes and an edge-diffraction IR approach
Sara Martin, U. Peter Svensson, Mark Rau, Julius Smith
Physical modeling of musical instruments has always been of great interest. The physics of the instrument and its surroundings can be explored and modeled to create faithful sound synthesis. In this poster we present a physical model of a drum using a hybrid of the FDTD schemes suggested by S. Bilbao (JAES, 2013) and the edge-diffraction model suggested by A. Asheim and P. Svensson (JASA, 2013).
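For orientation, a basic explicit FDTD update for an ideal square membrane, a much-simplified stand-in for the hybrid scheme on the poster; grid size, Courant number, and excitation are arbitrary:

```python
# Hedged sketch: explicit FDTD for the 2D wave equation u_tt = c^2 (u_xx + u_yy)
# on a clamped square membrane.
import numpy as np

N, steps = 64, 200
courant2 = 0.25          # (c*dt/dx)^2, kept <= 0.5 for 2D stability
u_prev = np.zeros((N, N))
u = np.zeros((N, N))
u[N // 2, N // 2] = 1.0  # "strike" the membrane in the middle

for _ in range(steps):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    u_next = 2 * u - u_prev + courant2 * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0  # clamped edges
    u_prev, u = u, u_next

print(u[N // 2, N // 2])  # displacement at the strike point after `steps` steps
```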
Poster/Demo. Location: Grad Workspace (305), Time: all day
SmartKeyboard iOS and Android App Generator
Romain Michon, Julius O. Smith, Yann Orlarey, Chris Chafe
SmartKeyboard is a tool to generate musical Android and iOS applications using the Faust programming language. Standard Faust UI elements are replaced by a highly customizable interface that can be used to capture a wide range of gestures on a touch-screen. Polyphony, MIDI support, and built-in sensor control can also be easily added to the generated apps.
Poster/Demo. Location: MaxLab (201), Time: all day
Passively Augmenting Mobile Devices Towards Hybrid Musical Instrument Design
Romain Michon, Julius O. Smith, Matthew Wright, Chris Chafe, John Granzow, Ge Wang
Mobile devices constitute a generic platform to make standalone musical instruments for live performance. However, they were not designed for such use and have multiple limitations when compared to other types of instruments. We present a framework to quickly design and prototype passive mobile device augmentations to leverage existing features of the device for the end goal of mobile musical instruments.
Poster/Demo. Location: MaxLab (201), Time: all day
faust2api: a Comprehensive API Generator for Android and iOS
Romain Michon, Julius Smith, Chris Chafe, Stéphane Letz, Yann Orlarey
Faust2api is a tool to generate custom DSP engines for Android and iOS using the Faust programming language. Faust DSP objects can easily be turned into MIDI-controllable polyphonic synthesizers or audio effects with built-in sensor support. The various elements of the DSP engine can be accessed through a high-level API, made uniform across platforms and languages.
Poster/Demo. Location: MaxLab (201), Time: all day
Faust Physical Modeling Library and ToolBox
Romain Michon, Sara Martin, Yann Orlarey, Julius O. Smith, and Chris Chafe
We present a series of tools to simplify the design of physical models of musical instruments in the Faust programming language: the Faust physical modeling library, stl2faust, and imp2faust. The Faust physical modeling library makes it possible to construct modular multidimensional diagrams with bidirectional connections to implement various types of physical models. It also contains a wide range of pre-made elements (e.g., strings, resonators, and tubes) that can be used to build virtual musical instruments at a high level. stl2faust converts STL CAD files into Faust modal physical models compatible with the Faust physical modeling library, and imp2faust turns an impulse response into such a model.
Poster/Demo. Location: MaxLab (201), Time: all day
Romain Michon’s Menagerie for the Petting Zoo
Romain Michon
A wide range of musical instruments and controllers: the BladeAxe, the PlateAxe, the ModAxe, Nuance, and the Chanforgnophone.
Demo - pettable instruments. Location: MaxLab (201), Time: all day
Duet for One Pianist, Eight Sketches for MIDI Piano and Computer by Jean-Claude Risset (1989)
Chryssie Nanou
In 1989, composer and researcher Jean-Claude Risset's series of interactive sketches for piano and Disklavier, entitled Duet for One Pianist, explored the performative possibilities made available to pianists through the augmentation of emotive human musical gesture with the precise reactive and computational capabilities afforded by computer-based musical systems.
The composer explored simple compositional relations between the pianist's part and the computer's part: translations or symmetries in the time-frequency space (pitch transpositions or interval inversions); triggering by the pianist of specific patterns (arpeggios) or stored musical sequences influenced by certain performance parameters (tempo or loudness of the notes played); canon-like imitation. (https://chrysinanou.wordpress.com/portfolio/duet-for-one-pianist)
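A toy illustration of the kinds of mappings described above, applied to MIDI note numbers; this is not Risset's Max patches, and the note sequence and transformations are hypothetical:

```python
# Hedged sketch: simple transformations of the pianist's notes that a computer
# part could play back (transposition, interval inversion, delayed imitation).
pianist = [60, 62, 64, 65, 67]                        # hypothetical played notes (C D E F G)

transposed = [n + 7 for n in pianist]                 # translation in pitch (up a fifth)
inverted = [2 * pianist[0] - n for n in pianist]      # interval inversion about the first note
canon = [None, None] + pianist                        # the same line, delayed by two events

print(transposed, inverted, canon)
```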
Musical Performance with Interactive Piano. Location: Stage (317), Time: Concert (3:00-4:20) piece 4/6
Hands-on Demo: Duet for One Pianist, Eight Sketches for MIDI Piano and Computer by Jean-Claude Risset (1989)
Chryssie Nanou
Come try the CCRMA Stage Disklavier running the Max patches for Jean-Claude Risset's "Duet for One Pianist" that Chryssie Nanou will have just performed in the concert.
Pettable Interactive Piano. Location: Stage (317), Time: 4:30-5 (after concert)
Verovio Humdrum Viewer
Craig Stuart Sapp
Verovio Humdrum Viewer (http://verovio.humdrum.org) is an online open-source music notation editor that is under development for digital editions of music, such as the Josquin Research Project (http://josquin.stanford.edu) and an edition of Chopin's music at The Fryderyk Chopin Institute (http://en.chopin.nifc.pl/institute) in Warsaw, Poland. The editor dynamically displays notation for musical data in the Humdrum format (http://www.humdrum.org), as well as importing MusicXML files and exporting MEI files (http://www.music-encoding.org). Animated playback of scores with MIDI and audio recordings will be demonstrated. Here is a sample set of Mozart piano sonatas being prepared with the editor: http://verovio.humdrum.org/?file=mozart/sonatas .
Demo. Location: Ballroom (216), Time: all day
A Brief History of Musical Synthesis
Gregory Pat Scandalis
The roots of modern musical synthesis stretch back to the early nineteenth century. A number of important milestones will be highlighted, along with sound examples.
15-minute Presentation/Lecture. Location: Stage (317), Time: 11:00-11:20
Recent JOS Research at CCRMA
Julius O. Smith, Jonathan Abel, Ericka Amborn, Chris Chafe, Weinong Chen, Zoran Cvetkovic, Enzo De Sena, Craig Doolittle, Ross Dunkel, Huseyin Hacihabiboglu, Christoph Hohnerlein, James Johnston, Hyung-Suk Kim, Esteban Maestre, Sara Martin, Aidan Meacham, Romain Michon, Vaibhuv Nangia, Michael Olsen, Julian Parker, Nick Porcaro, Maximilian Rest, Jordan Rudess, Lauri Savioja, Pat Scandalis, Gary Scavone, Jan Slechta, Harrison F. Smith, Peter Svensson, Vesa Valimaki, Ge Wang, Kurt Werner, and Matthew Wright
These posters will briefly summarize the following research publications on which I collaborated over the past year:
- ``Closed Form Fractional Integration and Differentiation via Real Exponentially Spaced Pole-Zero Pairs''
- ``More Than 50 Years of Artificial Reverberation''
- ``Digital Waveguide Network Reverberation in Non-Convex Rectilinear Spaces''
- ``A hybrid method combining the edge source integral equation and the boundary element method for scattering problems''
- ``A general and explicit formulation for wave digital filters with multiple/multiport nonlinearities and complicated topologies''
- ``The Fender Bassman 5F6-A Family of Preamplifier Circuits---A Wave Digital Filter Case Study''
- ``Nuance: Adding Multi-Touch Force Detection to the iPad''
- ``Continuous Order Polygonal Waveform Synthesis''
- ``Design of Recursive Digital Filters in Parallel Form by Linearly Constrained Pole Optimization''
- ``Experimental Modeling of Bridge Admittance and Body Radiativity for Efficient Synthesis of String Sound by Digital Waveguides''
- ``Perceptually-motivated Spatial Audio Recording, Simulation, and Rendering''
- ``Development of Digital Filter Methodology for Hopkinson Bar Dispersion Correction''
- ``Real Time Performable Instruments on iPad and iPhone''
Lab full of posters. Location: JOS Lab (306), Time: TBD
FaucK!! Hybridizing the Faust and ChucK Audio Programming Languages
Ge Wang, Romain Michon
FaucK is a hybrid audio programming environment which combines the powerful, succinct Functional AUdio STream (Faust) language with the strongly-timed ChucK audio programming language. FaucK allows programmers to evaluate Faust code on the fly directly from ChucK code and to control Faust signal processors using ChucK’s sample-precise timing and concurrency mechanisms. The goal is to create an amalgam that plays to the strengths of each language, giving rise to new possibilities for rapid prototyping, interaction design and controller mapping, pedagogy, and new ways of working with both Faust and ChucK. We present our motivations, approach, implementation, and preliminary evaluation.
Poster/Demo. Location: MaxLab (201), Time: all day
Integrated All-Frequency Physics-Based Sound Synthesis for Physics-Based Animation
Jui-Hsien Wang, Doug James
In this project, we aim to build a general finite-difference time-domain wave solver for high-quality sound synthesis for physics-based animation. We will introduce novel algorithms used in our system to support animated objects with various sound source models, including a linear modal model, an acceleration sound (clicks) model, and a water bubble sound model. We will discuss how to optimize the integration of these models into our system, and challenges in mapping the current system onto state-of-the-art parallel computer architectures such as GPUs.
Poster/Demo. Location: Grad Workspace (305), Time: all day
Effects of Musical Roles on Coordinated Timing Asymmetries in Piano Duet Performance
Aury Washburn, Matthew Wright, Takako Fujioka
Recent work in human behavioral dynamics has begun to identify conditions under which functionally asymmetric behavioral interaction between individuals arises and results in sustainable coordinative patterns, which allow for the successful completion of joint goals. Prior investigation of the temporal relationships between members of a string quartet has demonstrated that even though performers within this kind of ensemble already have distinct roles that are generally related in a hierarchical fashion, different patterns of following and mutual adaptation occur at different timescales during performance. In order to better understand and model the evolution of such asymmetrical behavior during successful ensemble performance, the current study examined the influence of asymmetries in factors such as the complexity and pitch range of the two musical parts on the collective temporal stability and relative adaptability exhibited by pianists during duet performance. Preliminary results indicate that temporal asynchronies, as measured at each point where temporal synchrony would be expected based on the musical score, are quantitatively greater when the performers’ parts included some degree of asymmetry to begin with.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day; Dr. Washburn present 3:30-5
Feedback Network Ensemble
Matthew Wright, Alex Chechile, Julie Herndon, Mark Hertensteiner, Christopher Jette, Justin Yang
A structured musical improvisation in which an ensemble of players "plug our instruments into each other" to excite and control a sparsely-connected feedback delay network. Each player's instrument both receives an audio input and generates an audio output, allowing all of them to be connected to a matrix mixer so we can effect changes in the graph topology and adapt to them in real time. See https://nime2015.lsu.edu/proceedings/329/index.html
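A schematic sketch of a sparsely connected feedback delay network with a matrix mixer, offered as a conceptual illustration rather than the ensemble's actual setup; the delay lengths, mix gains, and excitation are hypothetical:

```python
# Hedged sketch: each "player" is a delay line whose input is a weighted mix
# of the other players' outputs, routed through a feedback matrix.
import numpy as np

n_players, sr, seconds = 4, 48000, 1
delays = [1123, 1511, 2029, 2753]                   # hypothetical delay lengths (samples)
lines = [np.zeros(d) for d in delays]
# No self-feedback; gain kept low so the feedback loop stays stable and decays.
mix = 0.25 * (np.ones((n_players, n_players)) - np.eye(n_players))

out = np.zeros(sr * seconds)
excitation = np.zeros(sr * seconds)
excitation[0] = 1.0                                 # one player "excites" the network

for t in range(sr * seconds):
    reads = np.array([lines[i][t % delays[i]] for i in range(n_players)])
    writes = mix @ reads                            # matrix mixer reroutes the outputs
    writes[0] += excitation[t]
    for i in range(n_players):
        lines[i][t % delays[i]] = writes[i]         # circular-buffer write
    out[t] = reads.sum()

print(np.abs(out).max())   # confirm the network stays bounded with this mix gain
```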
Live Musical Performance. Location: Stage (317), Time: Concert (3:00-4:20) piece 2/6