CCRMA Open House 2018
On Friday March 2, 2018, we invite you to come see what we've been doing up at the Knoll!
Join us for lectures, hands-on demonstrations, posters, installations, and musical performances of recent CCRMA research including digital signal processing, data-driven research in music cognition, and a musical instrument petting zoo.
Past CCRMA Open House websites: 2008, 2009, 2010, 2012, 2013, 2016, 2017.
Facilities on display
Neuromusic Lab: EEG, motion capture, brain science...
Listening Room: multi-channel surround sound research
Max Lab: maker / physical computing / fabrication / digital musical instrument design
Hands-on CCRMA history museum
JOS Lab: DSP research
CCRMA Stage: music and lectures
Studios D+E: Sound installations
Recording Studio: Sound games and studio tours
Schedule Highlights
Keynote lecture by Yann Orlarey
Evening concert by Mari Kimura
List of Exhibits (lectures, performances, demos, posters...)
Tombeau Électronique
Yuval Adler
Spatial sound art installation, an homage to Iannis Xenakis.
Installation. Location: Listening Room (128), Time: all day
Timbre, texture, space and musical style: Rome’s Chiesa di Sant’Aniceto and its music.
Jonathan Berger, Jonathan Abel, Talya Berger, Elliot Kermit Canfield-Dafilou
The spaces in which music is performed and experienced typically, and often profoundly, affect the resultant timbre. However, relatively little research exists on the considerable effect of architectural acoustics on musical timbre. Still fewer studies consider the relationship between musical style and performance practices, and architecture.
In this paper, we consider the role of architectural acoustics in the evolution of musical style by examining a particular church in Rome, the Chiesa Sant’Aniceto in Palazzo Altemps, an early seventeenth-century church with rich and complex acoustical qualities, and the music written specifically for that church: a codex that has recently come to light after a long period of absence.
Working with this unique corpus of source materials and computational models of the acoustics of that church, we demonstrate how the music’s textures and character (including rate of harmonic change, registral distribution, and degree of dissonance) capitalize on the acoustical properties of the space, ensuring a balance of clarity and acoustic blur.
15-minute Presentation/Lecture. Location: Stage (317), Time: 10:00-10:20
Timbre, texture, space and musical style: Sacred spaces in 17th century Rome and the music in those spaces
Jonathan Berger, Jonathan Abel, Talya Berger, Elliot Kermit Canfield-Dafilou
We present music from the Altemps Codex, recorded in a near-anechoic space, along with comparative acoustic models of two churches in Rome.
Installation. Location: Studio E (320), Time: all day
All About the MP3 and MPEG Perceptual Audio Coders
Marina Bosi
Who would have guessed twenty years ago or so that teenagers and everybody else would be clamoring for devices with MP3/AAC (MPEG Layer III/MPEG Advanced Audio Coding) perceptual audio coders that fit into their pockets? As perceptual audio coders become more and more part of our daily lives, residing within mobile devices, DVDs, broad/webcasting, electronic distribution of music, etc., a natural question to ask is: what made this possible and where is this going? This open class, presented by one of the early developers who helped advance the field of perceptual audio coding, will provide a tutorial on the MPEG technology employed in perceptual audio coding and a brief overview of past and current standard development. https://ccrma.stanford.edu/courses/422-winter-2017
Open Class Session: Music 422 "Perceptual Audio Coding". Location: Classroom (217), Time: 2:30-4:20
Arcontinuo: the Instrument of Change
Rodrigo F. Cadiz, Alvaro Sylleros, Patricio de la Cuadra
I will present a brief demo of some of the musical possibilities of the Arcontinuo, an electronic musical instrument designed from a perspective based on the study of its potential users and their interaction with existing musical interfaces. Arcontinuo incorporates natural and ergonomic human gestures, allowing the musician to engage with the instrument and, as a result, enhance the connection with the audience. Arcontinuo attempts to expand the notion of what a musical gesture is, aiming for a better and more meaningful performance.
Demo. Location: Stage (317), Time: Concert (2:30-3:30) piece 3/6
Talk to the Hand!
Doga Cavdir
A new musical interface for voice processing: a wearable hand device that talks back to you in two different processing modes. Three FSR sensors on the top surface change the parameters of the device, while four switches allow the user to choose modes. For more information, please see the video: https://youtu.be/TGxajqDtfxA
Pettable Instrument. Location: MaxLab (201), Time: all day
Online Teaching of Online Jamming Technology
Chris Chafe
Today's vast amount of streaming and video conferencing on the Internet lacks one aspect of musical fun and that's what this course is about: high-quality, near-synchronous musical collaboration. Under the right conditions, the Internet can be used for ultra-low-latency, uncompressed sound transmission. The course teaches open-source (free) techniques for setting up city-to-city studio-to-studio audio links. Distributed rehearsing, production and split ensemble concerts are the goal. Setting up such links and debugging them requires knowledge of network protocols, network audio issues, mics, monitors and some ear training.
15-minute Presentation/Lecture. Location: Stage (317), Time: 4:40-5:00
Listening to Minimalism: An EEG Study of How We Process Repetition
Tysen Dauer, Barbara Nerness, Takako Fujioka
Many American minimalist compositions from the 1960s and early 1970s by composers such as Julius Eastman, Meredith Monk, and Philip Glass featured extreme amounts of repetition of musical figures with simple melodic and rhythmic content. Typically, from a listener’s perspective, the number of repetitions for any given musical figure was unpredictable. This study examines how such unpredictable repetition is processed in the brain preattentively by analyzing Event-Related Potentials (ERPs) recorded using electroencephalography (EEG). In particular, we are investigating acoustic change detection and attentional states over the course of synthesized, minimalist-like stimuli to better understand the neural and cognitive mechanisms underlying listening experiences of early American minimalism.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
Demonstration of EEG Capping Procedure
Tysen Dauer, Emily Graber, Irán Román
In the Neuromusic Lab, neural activity is recorded by electroencephalography (EEG). This short video shows the preparation that must be done before EEG data is gathered. This includes fitting a cap, impedance matching each electrode, and running the data acquisition system.
Video Demonstration (~4 minute loop). Location: Neuromusic Lab (103), Time: all day
The Extended Guitar
Noah Fram
Acoustic instruments are capable of producing wonderfully expressive sound, but are limited by their own physicality. The guitar is no exception. Although extended technique inspired by flamenco guitarists makes use of both the guitar's strings and body to access the instrument's latent potential as a percussion instrument, and alternate tunings allow guitarists to access a broad range of harmonies and resonances, the sound still emanates from a single, discrete source: the guitar body. The Extended Guitar is the first in a series of explorations of ways to spatialize the actual sound production of acoustic instruments, rather than locating the discrete source in space. It uses a set of 16 FFT-based filters, implemented in ChucK, to spread the sound of a single guitar across a three-dimensional speaker array. The music in this performance is "Deep at Night" by Alex de Grassi.
Live Musical Performance (~6 minutes). Location: Stage (317), Time: Concert (2:30-3:30) piece 5/6
Music Instrument Detection Using LSTMs and the NSynth Dataset
Brandi Frisbie
Music instrument recognition is an important part of Music Information Retrieval. Instrument detection could lead to better music tagging in recommendation systems, create better scoring for automatic music transcription tools, or even predict instrumentation of a piece of music based on a small sample. Most music instrument recognition research to date uses sound separation or other classification techniques. The recent release of the NSynth dataset, however, potentially constitutes a major breakthrough that can help mature the instrument recognition field. In this paper, I propose a framework for music instrument detection using LSTMs and the NSynth dataset. The NSynth dataset (over 280,000 samples from eleven different instruments) will be used as training data and IRMAS (over 2,800 excerpts) will be used for testing. After careful evaluation of the model, future work will involve expanding the datasets to include more instruments and further tuning the model to best fit the data.
Poster/Demo. Location: Ballroom (216), Time: all day
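As a rough illustration of the proposed framework (not the author's actual model), the sketch below stacks Keras LSTM layers over placeholder spectrogram-like features and classifies among the eleven NSynth instrument classes mentioned above; all shapes, layer sizes, and the random stand-in data are assumptions.

import numpy as np
import tensorflow as tf

NUM_FRAMES, NUM_MELS, NUM_CLASSES = 126, 64, 11  # hypothetical feature shape; 11 instrument classes

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, return_sequences=True,
                         input_shape=(NUM_FRAMES, NUM_MELS)),
    tf.keras.layers.LSTM(64),                      # summarizes the frame sequence
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-ins for NSynth training features and labels.
x_train = np.random.randn(32, NUM_FRAMES, NUM_MELS).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(x_train, y_train, epochs=1, batch_size=8)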
Stimuli Jukebox
Takako Fujioka, Keith Cross, Tysen Dauer, Elena Georgieva, Emily Graber, Madeline Huberth, Kunwoo Kim, Michael Mariscal, Barbara Nerness, Trang Nguyen, Irán Román, Ben Strauber, Auriel Washburn
Many experiments happen in the Neuromusic Lab each year, each involving some kind of musical stimuli (sounds that the subjects listen to) and/or musical task (music that the experiment asks the subjects to perform). Throughout the day we will play an assortment of such sounds, to give some of the sonic flavor of the experiments that take place here.
Sounds to Hear. Location: Neuromusic Lab (103), Time: all day
Hybrid Encoding-Decoding of Stimulus Features and Cortical Responses During Natural Music Listening
Nick Gang, Blair Kaneshiro, Jonathan Berger, Jacek Dmochowski
We measure correlations between brain responses and musical features in a statistically optimal fashion. Using an extensive dataset of electroencephalographic (EEG) responses to natural music stimuli, we employ Canonical Correlation Analysis to identify spatial EEG components that track temporal stimulus components. We found multiple statistically significant dimensions of stimulus-response correlation (SRC) for all songs studied. Temporal filters for stimulus features highlight harmonics and subharmonics of each song's beat, while EEG spatial filters present anatomically plausible, symmetric frontocentral topographies across songs. Our results suggest that different neural circuits encode different temporal hierarchies of natural music.
Poster/Demo. Location: Ballroom (216), Time: all day
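For readers unfamiliar with the method, here is a minimal, hypothetical sketch of the CCA step (using scikit-learn, not necessarily the authors' tooling): two random matrices stand in for the stimulus features and the EEG, and the stimulus-response correlation is read off each canonical component pair.

import numpy as np
from sklearn.cross_decomposition import CCA

n_samples, n_stim_features, n_channels = 5000, 5, 32  # hypothetical sizes
stim = np.random.randn(n_samples, n_stim_features)    # stand-in stimulus features
eeg = np.random.randn(n_samples, n_channels)          # stand-in EEG recording

cca = CCA(n_components=3)
cca.fit(stim, eeg)
stim_scores, eeg_scores = cca.transform(stim, eeg)

# Stimulus-response correlation (SRC) for each canonical component pair.
src = [np.corrcoef(stim_scores[:, k], eeg_scores[:, k])[0, 1] for k in range(3)]
print(src)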
Wizard Tones
Andreas Garcia
Wizard Tones is synthesizer software for a MIDI keyboard that uses additive synthesis in unconventional ways to create a continuous spectrum of interesting voices and effects controlled by the two pitch wheels on the keyboard.
Pettable Instrument. Location: MaxLab (201), Time: all day
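As a generic illustration of additive synthesis (not the Wizard Tones source), the sketch below sums harmonics of a fundamental and lets a single control value, standing in for a wheel position, tilt energy between low and high partials; the mapping is invented for the example.

import numpy as np

def additive_note(f0, control, sr=44100, dur=1.0, n_harmonics=16):
    """control in [0, 1] tilts energy from the fundamental toward upper harmonics."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        amp = (1.0 - control) / k + control / (n_harmonics - k + 1)
        out += amp * np.sin(2 * np.pi * k * f0 * t)
    return out / np.max(np.abs(out))  # normalize to full scale

tone = additive_note(220.0, control=0.3)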
Modeling Neural Responses to Speech and Music
Emily Graber, Malcolm Slaney
Temporal response functions (TRFs) can be used to predict a neural response from a stimulus, or the reverse. Because behavioral and neuroimaging studies have found that speech and song are perceived and processed differently, we aimed to demonstrate that the neural responses associated with listening to speech and music create distinct TRFs with classifiable predictions. A continuum of stimuli ranging from speech to music was created and tested in order to study TRFs and their predictions for various stimuli.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
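A conceptual sketch of a forward TRF estimate, assuming ridge regression over a time-lagged stimulus envelope (the authors' actual estimation method and data are not reproduced here); all sizes and the random stand-in signals are placeholders.

import numpy as np
from sklearn.linear_model import Ridge

fs, n_samples, max_lag = 128, 4000, 32          # hypothetical sampling setup
envelope = np.random.randn(n_samples)           # stand-in stimulus envelope
eeg = np.random.randn(n_samples)                # stand-in single-channel EEG

# Design matrix: one column per lag of the envelope (0..max_lag-1 samples).
X = np.column_stack([np.roll(envelope, lag) for lag in range(max_lag)])
X[:max_lag, :] = 0.0                            # zero out rows affected by the roll wrap-around

trf = Ridge(alpha=1.0).fit(X, eeg)              # TRF weights live in trf.coef_
predicted_eeg = trf.predict(X)                  # stimulus-to-response prediction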
VR Assessment of Surround Virtualization for Headphones
Mark Hertensteiner
An assessment tool in the form of a virtual reality game, used for quantitatively analyzing and comparing virtualization products that reduce 7.1.4 multichannel spatial audio to two-channel binaural audio for headphones. Player movements and reaction times are tracked while shooting targets that emit bursts of filtered broadband noise, generated at regular interval patterns at random locations in 3D space. Generated data supports conclusions regarding gameplay advantages of different virtualization products over one another. Designed at Dolby, adapted from Final Project for Music 257: Neuroplasticity and Musical Gaming (with project partner Victoria Grace).
Demo. Location: Grand Central Station (211), Time: all day
CCRMAC-1
Mark Hertensteiner, Nick Gang, Ifueko Igbinedion
A perceptual audio coder is presented as submitted for the Music 422 final project. This coder leverages psychoacoustic properties to compress audio files at a given compression ratio using methods employed by traditional perceptual audio coders. In addition to standard components developed over class assignments, the presented coder features spectral band replication, block switching, and variable bit rate to achieve data rates as low as 48 kb/s/ch with minimal degradation in perceptual quality. Spectral band replication refers to the transposition and filtering of low-frequency content to replace high frequencies. Block switching involves changing analysis/synthesis window sizes based on the input to better reproduce transients. Encoded/decoded audio samples are provided for listening.
Poster/Demo. Location: Grand Central Station (211), Time: all day
Performance Monitoring of Self and Other in a Turn-Taking Piano Duet: A Dual-EEG Study
Madeline Huberth, Tysen Dauer, Chryssie Nanou, Irán Román, Nick Gang, Wisam Reid, Matthew Wright, Takako Fujioka
The present study investigated how outcome expectation and empathy interact during a turn-taking piano duet task, using simultaneous EEG recording. During the performances, one note in each player's part was altered in pitch to elicit changes in neural responses. Pianists memorized and performed pieces containing either a similar or dissimilar sequence as their partner. For additional blocks, pianists also played both sequence types with an audio-only computer partner. Our results suggest greater online monitoring of self- compared to other-produced actions during turn-taking joint action, and that highly empathetic musicians could use a strategy during joint performance to suppress exclusive focus on self-monitoring.
Poster/Demo. Location: Neuromusic Lab (103), Time: all day
Hypnosphere
Christopher Jette, Joseph Barker
Hypnosphere is an immersive sound and light instrument. A single person is seated in a chair and a sphere is lowered over the head of the participant. The sphere conveys a sound and light presentation through 400 LEDs and 9 channels of audio. The synchronous audio and light content is produced by a generative algorithm and processes material from the surrounding environment.
Pettable Instrument. Location: MaxLab (201), Time: all day
Pachinko-Inspired MIDI Random Sample Player
Jay Kadis
Pachinko-like MIDI controller to trigger up to 16 channels of randomized sound file playback.
Installation. Location: Studio D (221), Time: all day
CCRMA Studio Tour and Demo Reel of CCRMA Recordings
Jay Kadis
Drop-in studio tours and music listening. Location: Control Room (127), Time: all day
Music and Evolution: Abstract Narrative Game
Kunwoo Kim
Music and Evolution: from Grunts to Songs is an abstract narrative game designed as a final project in Music 256A. Beginning as an abstract ape in an abstract space called the "White Womb", you interact with other beings to acquire evolutionary traits related to music. After you acquire the final trait to become a human being, you are ‘born’ into the real world of symbolic mind. (More information: www.kunwookimm.com)
Poster/Demo. Location: Seminar Room (315), Time: all day
Evening Concert
Mari Kimura
Evening Concert. Location: Stage (317), Time: 7:30pm
Chunkoder
Kyle Laviana
The Chunkoder looks to incorporate rhythmic oscillation into an ambient MIDI vocoder. Inspired by the synthetic "chunkiness" of songs such as Daft Punk & The Weeknd’s "I Feel It Coming," I wanted to create a vocoder capable of autonomously maintaining an intense musical pulse throughout a piece. Learn more and see source code at https://ccrma.stanford.edu/~klaviana/220a/fp.html
Pettable Instrument. Location: MaxLab (201), Time: all day
Staging the GRAIL
Fernando Lopez-Lezcano, Christopher Jette
Over the past year the sound diffusion system in our small concert hall, the Stage, was upgraded from the previous 16.8 system to a 56.8 full-surround array with high spatial resolution (the new array can properly render up to 6th order Ambisonics soundfields). This presentation will outline the acoustical, mechanical, electrical and electronic design of the system, as well as the support software needed for daily operation. Challenges included keeping the existing system (16.8 plus digital mixer) unchanged due to the varied uses of the space, and adding a higher-level layer that could control the whole array.
Stage diffusion system demo. Location: Stage (317), Time: 3:40-4:00
The SpHEAR project update: building the Octathingy
Fernando Lopez-Lezcano
The SpHEAR project's goal is to create a family of open GPL/CC-licensed soundfield microphone arrays that can be fabricated using the current generation of low-cost extruded-filament 3D printers. The Octathingy is an 8-capsule soundfield microphone array designed by Eric Benjamin and Aaron Heller that can better capture first-order Ambisonics components at high frequencies, as well as four of the five second-order components of a soundfield. A working prototype has been operational for a while; we will discuss the strategies used so far for deriving a working calibration for it, and play example sounds recorded in the field, in the recording studio, and in a small concert hall.
Demo. Location: Stage (317), Time: 4:00-4:20
Y Sonó Como Arpa Vieja
Fernando Lopez-Lezcano
The story starts with the remains of an upright piano, no keys or mechanism, and a second soundboard, strings mostly intact, saved from recycling. They spend a Winter outdoors, alternately being baked by the sun and soaked by invigorating Californian rain, getting all tuned and ready for a studio session, temporary housing for homeless lizards and other critters...
Spring comes and they move to our recording studio together with two 3D printed Ambisonics microphones (first and second order) from my SpHEAR project. Other exotic microphones I ended up not using (2 B&K, 2 AKG) are also there. Hours of high quality recordings while having fun with all sorts of percussion utensils and found objects. Hit, bow, pluck, scrape, you name it.
SuperCollider processes run amok and arrange 13+ Gbytes of samples into structures. Second order Ambisonics full surround images are combined with 5th order pinpoint beam forming and panning. There is almost no added processing: all the reverberation is just open strings happily chiming in with their excited brothers and sisters.
"Y Sonó Como Arpa Vieja" is translated literally as "and it sounded like an old harp" but is also an idiom in Spanish that is difficult to convey, "sonó" means "you sounded", but in that context, and referring to a person, the phrase also means "he/she is screwed badly". How badly? As bad as the sound of an old harp... which in this case, ironically, sounds really good.
Music Playback 5th/2nd order Ambisonics (~11 minutes). Location: Stage (317), Time: Concert (2:30-3:30) piece 6/6
Jouska for violin, live electronics, and mμgic sensor glove
Chris Lortie, Mari Kimura
"Jouska" for violin, live electronics, and mμgic sensor glove.The word Jouska comes from the Dictionary of Obscure Sorrows, a compendium of invented words written by John Koenig that try to “give a name to emotions we all might experience but don’t yet have a word for.” Koenig defines Jouska as “a hypothetical conversation that you compulsively play out in your head […] which serves as a kind of psychological batting cage where you can connect more deeply with people than in the small ball of everyday life, which is a frustratingly cautious game of change-up pitches, sacrifice bunts, and intentional walks.”
Live Musical Performance. Location: Stage (317), Time: evening concert (7:30pm)
NMED-T: A Tempo-Focused Dataset of Cortical and Behavioral Responses to Naturalistic Music
Steven Losorelli, Duc Nguyen, Jacek Dmochowski, Blair Kaneshiro
We introduce the Naturalistic Music EEG Dataset---Tempo (NMED-T), an open dataset of electrophysiological and behavioral responses collected from 20 participants who each heard 10 complete commercially available musical works. Songs span various genres and tempos, and contain electronically produced beats in duple meter. Preprocessed and aggregated responses include dense-array EEG and sensorimotor synchronization (tapping) responses and behavioral ratings of songs. These data, along with illustrative analysis code, are published in Matlab format. Raw EEG and tapping data are also made available. This dataset facilitates reproducible research in neuroscience and cognitive MIR, and points to several avenues for future studies on human processing of naturalistic music.
Poster/Demo. Location: Ballroom (216), Time: all day
The Chanforgnophone
Romain Michon
The Chanforgnophone combines acoustical (strings, springs, etc.) and physically-informed virtual elements (virtual room, metal plate physical models, etc.) to make sound. Its acoustical elements are mapped to their digital counterparts, allowing for the creation of complex feedback effects between the virtual and the physical world. Various built-in sensors can be used to control its (unpredictable) behavior. The Chanforgnophone is big, heavy, and highly unstable; come play with it at your own risk!
Pettable Instrument. Location: MaxLab (201), Time: all day
The Faust Physical Modeling Toolkit
Romain Michon, Sara R. Martin
We present two tools facilitating the design of physical models of musical instruments in the Faust programming language: the Faust Physical Modeling Library and mesh2faust. The Faust Physical Modeling Library allows for the combination of various musical instrument parts (e.g., mouthpieces, tubes, strings, bodies, etc.) in a modular way. mesh2faust can be used to convert a 3D graphical object (e.g., CAD file) into a physical model usable with the Faust Physical Modeling Library. After giving technical details, various use examples will be provided.
10-minute Presentation/Lecture. Location: Stage (317), Time: 10:30-10:40
faust2smartkeyb: Designing Mobile Device Based Musical Instruments With Faust
Romain Michon, Chris Chafe, Julius O. Smith III, Ge Wang, Matthew Wright
faust2smartkeyb is a tool to generate iOS and Android applications for real-time musical performance using the Faust programming language. This system features polyphony, MIDI and OSC support, and built-in sensor mapping. It also facilitates the design of touch screen interfaces focusing on skills transfer and on the control of physical models of musical instruments. After giving technical details, various use examples will be provided.
10-minute Presentation/Lecture. Location: Stage (317), Time: 10:20-10:30
faust2api: a Comprehensive DSP Engine Generator
Romain Michon, Julius O. Smith III, Stéphane Letz, Chris Chafe, Yann Orlarey
We present faust2api, a tool to generate custom DSP engines for Android and iOS using the Faust programming language. Faust DSP objects can easily be turned into MIDI-controllable polyphonic synthesizers or audio effects with built-in sensor support, etc. The various elements of the DSP engine can be accessed through a high-level API, made uniform across platforms and languages. Technical details on the implementation of this system as well as an evaluation of its various features will be provided.
Poster/Demo. Location: MaxLab (201), Time: all day
Augmented Mobile Devices
Romain Michon
A series of musical instruments based on augmented mobile devices are presented. Besides the Pipohone, which is a kind of “smartphone-based kazoo,” more advanced instruments such as the PlateAxe, Nuance, and hybrid modular percussion instruments will be showcased.
Pettable Instrument. Location: MaxLab (201), Time: all day
Classification of Brainstem Auditory Evoked Responses to Music and Speech Sounds
Gabriella Musacchia, Steven Losorelli, Blair Kaneshiro, Nikolas Blevins, Matthew Fitzgerald
The ability to correctly identify sounds is fundamental for music and speech perception, but can be challenging for individuals who receive altered acoustic inputs (e.g., cochlear implant users). As a step toward objectively modeling sound perception in such listeners, we classify auditory brainstem evoked potentials of normal hearing individuals, elicited in response to stimuli of major functional categories -- three musical notes produced by different instruments and three consonant-vowel phonemes, all with a fundamental frequency of 100 Hz. Classification accuracy was statistically significant (mean 76.59%, compared to chance level 16.67%). Highest classifier confusion was observed between consonant-vowel stimuli sharing identical vowel components. Our results, combined with the ubiquity of ABR in clinical environments and the ease of processing ABR data, suggest this method has promise in clinical use and for studying auditory processing.
Poster/Demo. Location: Ballroom (216), Time: all day
Resist
Barbara Nerness
Resist is an audiovisual performance investigating surveillance, presence, and vulnerability of the body using found footage, MaxMSP, and voice. It utilizes helicopter surveillance footage from the 2015 Baltimore protests that the FBI secretly recorded. Although you cannot see individual faces, those on the ground were watched closely in case a stronger police presence needed to be mobilized; the mayor activated the Maryland National Guard, although they were not used.
Live Musical Performance (~6 minutes). Location: Stage (317), Time: Concert (2:30-3:30) piece 2/6
A GUI-Based MATLAB Tool for Auditory Experiment Design and Creation
Duc T. Nguyen, Blair Kaneshiro
We present AudExpCreator, a GUI-based Matlab tool for designing and creating auditory experiments. AudExpCreator allows users to generate auditory experiments that run on Matlab's Psychophysics Toolbox without having to write any code; rather, users simply follow instructions in GUIs to specify desired design parameters. The software comprises five auditory study types, including behavioral studies and integration with EEG and physiological response collection systems. Advanced features permit more complicated experimental designs as well as maintenance and update of previously created experiments. AudExpCreator alleviates programming barriers while providing a free, open-source alternative to commercial experimental design software.
Poster/Demo. Location: Ballroom (216), Time: all day
Faust: the roots of evil
Yann Orlarey
Faust is a well-known and well-loved programming language in this house. CCRMA is one of the main contributors to the project, notably through the extensive libraries contributed by Julius Smith and Romain Michon, but also through architecture targets (for smartphones, among others) and much more. But in this presentation, I would like to return to the roots of Faust and in particular to some elegant formalisms that inspired its design, as well as some projects that preceded it.
I will talk in particular about lambda-calculus, which is, of course, the basis of functional programming, but whose "philosophical" dimension is, in my opinion, not emphasized enough. I will return in particular to the key notion of abstraction in lambda-calculus and how it can be used as a basis for the design of new programming languages. I will also present two examples of non-textual programming languages based on lambda-calculus in which the concept of abstraction has a central role.
Keynote Lecture. Location: Stage (317), Time: 1:30-2:20
An Analysis of User Behavior in Co-Curation of Music Through Collaborative Playlists
So Yeon Park, Blair Kaneshiro
Collaborative music consumption behavior has morphed drastically with the availability of customized recommendations and online platforms for co-creating playlists. In this pilot study, we find that not only have the practices of collaborative curation changed, but the emotions associated with the songs and playlists have also been affected. Considering users’ innate desires with respect to social factors and implications is crucial to developing music technologies for today. Further investigation is needed to gain more nuanced understandings of the habits and emotions of today’s collaborative music curators.
Poster/Demo. Location: Ballroom (216), Time: all day
Use your JUCE card to escape UI layout jail
Nick Porcaro, Julius O. Smith III, Pat Scandalis
JUCE (https://github.com/WeAreROLI/JUCE) has several classes that implement CSS grid layout (https://css-tricks.com/snippets/css/complete-guide-grid), a standard used in modern "responsive" web page design. Using these classes makes it easier to create cross-platform, multi-form-factor UI layouts, mitigating many of the limitations of graphical interface builders and proprietary automatic layout systems. The author, who is the primary developer of the GeoShred musical instrument app (http://www.moforte.com/geoShred), will present a new string resonator/droner app created with these techniques.
15-minute Presentation/Lecture. Location: Stage (317), Time: 4:20-4:40
Change Your Tune: A Systematic Exploration of Tonic Shift in Carnatic Music
Vidya Rangasayee
In Carnatic (South Indian Classical) music, Ragas (similar to western scales) can be produced by a change of the Adhara Sruthi, i.e., the tonic. This can be performed on Melakartha ragas (full scales) as well as Janya or derivative ragas. Complete tonic shifting is a well-known technique employed by several musicians to bring a change of mood to a piece. Here we present TonicTunes, a system that systematically explores all possible tonic shifts of a given scale. It uses decision trees to identify valid tonic shifts based on established rules of Carnatic music. The system also allows users to enter any arbitrary phrase in a given raga, to generate equivalent phrases in other ragas that can be produced by a simple tonic shift. We further explore the realm of partial tonic shifts where, by omitting notes during improvisation, we can explore related ragas, a space hitherto unexplored in conventional graha bedham.
Poster/Demo. Location: Ballroom (216), Time: all day
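A toy illustration of the underlying idea, not TonicTunes itself: a scale is modeled as pitch classes relative to the tonic, and shifting the tonic to each scale degree yields the candidate derived scales; the validity rules that TonicTunes applies are not reproduced here, and the scale used is simply a major-scale pattern.

SCALE = [0, 2, 4, 5, 7, 9, 11]  # major-scale pitch classes relative to the tonic

def tonic_shift(scale, new_tonic_degree):
    """Re-express the scale with one of its own degrees as the new tonic."""
    root = scale[new_tonic_degree]
    return sorted((pc - root) % 12 for pc in scale)

# Enumerate the scales obtained by moving the tonic to each degree.
for degree in range(len(SCALE)):
    print(degree, tonic_shift(SCALE, degree))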
An “Infinite” Sustain Effect Designed for Live Guitar Performance
Mark Rau, Orchisama Das
An audio effect to extend the sustain of a musical note in real time is implemented on a fixed-point, standalone processor. Onset detection is used to look for new musical notes, and once they decay to steady state, the audio is looped indefinitely until a new note onset occurs. To properly loop the audio, pitch detection is performed to extract one period, and the new output buffer is written in a phase-aligned manner.
Poster/Demo. Location: Grad Workspace (305), Time: all day
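A simplified offline sketch of the looping idea (the actual effect runs in real time on a fixed-point processor, and its exact onset and pitch detectors are not shown): the period of a steady-state segment is estimated by autocorrelation, one cycle is cut out, and it is tiled to extend the sustain; all parameters and the sine-wave stand-in are illustrative.

import numpy as np

def sustain(steady_segment, sr, seconds=2.0, min_f0=60, max_f0=1000):
    x = steady_segment - np.mean(steady_segment)
    # Autocorrelation-based period estimate, searched over a plausible pitch range.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / max_f0), int(sr / min_f0)
    period = lo + int(np.argmax(ac[lo:hi]))
    one_cycle = x[:period]                      # one estimated cycle of the waveform
    reps = int(seconds * sr / period) + 1
    return np.tile(one_cycle, reps)[: int(seconds * sr)]

sr = 44100
t = np.arange(int(0.1 * sr)) / sr
note = np.sin(2 * np.pi * 220 * t)              # stand-in for a decayed, steady-state note
looped = sustain(note, sr)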
Relating the Cohesiveness of Auditory Hair Bundles in Mammals to their Function
Wisam Reid, Anthony J. Ricci, Daibhid O Maoileidigh
Hair bundles are composed of a set of stereocilia connected by links that ensure their cohesion in vestibular systems and nonmammalian auditory organs. In contrast, bundles from the mammalian cochlea exhibit weak coherence in response to experimental stimuli. Although coherence between stereocilia determines how hair cells transduce stimuli, we do not fully understand the relationship between a bundle’s cohesiveness and its function.
To connect a hair bundle’s structure with its response to stimulation, we construct a mathematical model of hair-bundle mechanics describing inner and outer hair cells. The model consists of a set of stereocilia connected by tip links and top connectors and includes the viscosity of the surrounding fluid. We vary the geometry and material properties of the bundle and analyze its responses to sinusoidal and step stimuli. These simulations are compared to the observed displacements of real hair bundles in response to experimental stimuli applied using a fluid jet or a stiff probe.
There are several consequences arising from a lack of bundle cohesion. We find that the measured stiffness and drag of a bundle depends on how the stimulus is applied and that a bundle may not be accurately described by a single spring constant and drag coefficient. Moreover, in response to a stimulus, tip links between different stereocilia do not extend the same amount and their extensions are not in phase with one another.
We conclude that the responses of a weakly coupled hair bundle to stimuli are considerably more complex than those of a tightly coupled bundle. To more completely characterize the exact role of cohesion, intrinsic noise, mechanotransduction channels, and adaptation need to be taken into account. Nonetheless, our reduced description reveals that bundle cohesion strongly affects how a hair cell detects external signals.
Poster/Demo. Location: Grad Workspace (305), Time: all day
Flarinet
Zachary Rotholz, Sasha Leitman
The Flarinet is an easy-to-play, easy-to-3D-print mouthpiece that transforms standard PVC pipes into modular wind instruments. Children and adults alike can tinker with music, discovering fundamental acoustical principles and designing personalized instruments.
Pettable Instrument. Location: MaxLab (201), Time: all day
HFM-1MB - HyperAugmented FlameGuitar (Flamenco Guitar) for 1MB (One Man Band)
Carlos A. Sánchez García-Saavedra
Towards a HyperAugmented Flamenco Guitar for a One Man Band. A meta-project and concept that naturally encompasses a plethora of musical, engineering, tech, development, and building projects developed over a lifetime. The goal is to achieve a series of tools (software/hardware/electronics), methodologies, processes, and helpers that allow a person with a guitar to be a One Man Band, while minimizing technical thinking in favor of musical flow, performance, and creative thinking. "More Music (playing and creating), Less Tech (clicking, typing, touching)". Portability and Lightness, Ease and Simplicity, Freedom and Openness, Privacy and Security, Cheap and portable, DIY.
Poster. Location: Carlos Office (207), Time: all day
Multi Dimensional Controllers and Modeled Synthesis
Pat Scandalis, Julius O. Smith III, Nick Porcaro, Jordan Rudess
Last month the MIDI Manufacturers Association (MMA) ratified the MPE (Multidimensional Polyphonic Expression) specification. Multidimensional controllers are becoming more common. Physically modeled musical instruments, with expressive parameters, are well suited to be paired with this new generation of controllers. A brief overview of the current state of controllers and physical modeling synths will be given, along with demonstrations of what is possible.
Demo. Location: Stage (317), Time: 11:40-12:20
Intelligent Musical Interface
Kitty Zhengshan Shi, Gautham Mysore
We aim to help people with a minimal background in music theory and audio editing be creative with music creation. We think that constrained interfaces with high-level musical input from the user on creative decisions can help with this. We present two systems for this purpose: a loop creation tool and a medley creation assistant. Both interfaces use music information retrieval to automatically perform tasks that tend to be challenging for novices. Although the motivation is to help novices, we think that this can also help save time for expert users, so we have an additional layer of control for such users.
15-minute Presentation/Lecture. Location: Stage (317), Time: 10:40-11:00
Aeolian Synth
Juan Sierra
The Aeolian Controller is a simple polyphonic wind music controller that allows the user to play any MIDI-compatible generator. Just press the small buttons and blow into the pressure sensor.
If interested, a quick demo can be found here: https://www.youtube.com/watch?v=SCQtkaEi10I&t=2s
The whole project can be found here: https://github.com/jdsierral/250-HW2
Pettable Instrument. Location: MaxLab (201), Time: all day
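A hypothetical sketch of such a mapping (not the instrument's actual firmware), assuming the mido Python library: button presses become note-on messages and normalized breath pressure is scaled onto MIDI CC 2 (breath controller); the base note, scaling, and sensor-reading side are invented for the example.

import mido

def pressure_to_cc(pressure, max_pressure=1.0):
    """Map a normalized pressure reading onto the 0-127 MIDI range as CC 2."""
    value = int(127 * min(max(pressure / max_pressure, 0.0), 1.0))
    return mido.Message("control_change", control=2, value=value)

def button_to_note(button_index, base_note=60):
    """Each small button triggers a note above an arbitrary base pitch."""
    return mido.Message("note_on", note=base_note + button_index, velocity=100)

# In the real instrument these messages would be sent to a synthesizer
# (e.g. via mido.open_output()); here we simply print them.
print(button_to_note(3))
print(pressure_to_cc(0.42))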
Stringifier
Juan Sierra
This wooden stick holds two piezo microphones (one at each end). When the stick is struck between them, a script calculates the time difference between the wavefronts reaching each microphone and derives the pitch from that difference. This allows the performer to play a virtual string algorithm (Karplus-Strong) whose length is determined by the position of the strike and whose envelope is derived from the signals of the piezos themselves.
A short demo can be found here: https://www.youtube.com/watch?v=cWdkCeMEX_c
And the full project over here: https://github.com/jdsierral/250-HW3
Pettable Instrument. Location: MaxLab (201), Time: all day
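A rough sketch of the idea (not the Stringifier's own script): the arrival-time difference between the two piezos implies a strike position, which here drives the delay-line length of a basic Karplus-Strong string; the stick length, wave speed, and position-to-pitch mapping are made-up placeholders.

import numpy as np

def strike_position(dt, wave_speed=3000.0):
    """Distance from the stick's midpoint implied by the arrival-time difference dt (s)."""
    return 0.5 * wave_speed * dt  # positive toward the piezo that is hit later

def karplus_strong(frequency, sr=44100, dur=1.0):
    """Basic Karplus-Strong plucked/struck string synthesis."""
    period = int(sr / frequency)
    buf = np.random.uniform(-1, 1, period)       # noise-burst excitation
    out = np.empty(int(sr * dur))
    for n in range(len(out)):
        out[n] = buf[n % period]
        # Averaging filter in the feedback loop damps the string over time.
        buf[n % period] = 0.5 * (buf[n % period] + buf[(n + 1) % period])
    return out

pos = strike_position(dt=1.2e-4)                  # example arrival-time difference
freq = 110.0 * (1.0 + abs(pos))                   # arbitrary position-to-pitch mapping
tone = karplus_strong(freq)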
Javascript Audio GUIs
Beau Silver
Javascript is touted as the preferred language for modern GUI applications. But can it really meet the needs of audio and mixer applications? I'll share my experience from Universal Audio while working on the GUI application for OX, our premium reactive load box and guitar recording system. We embedded javascript into native applications to control knobs, faders, meters, graphs and other common audio and mixer UI elements. (https://www.uaudio.com/ox)
15-minute Presentation/Lecture. Location: Stage (317), Time: 11:00-11:20
Annex
Anna Tskhovrebov
This piece is half of Anna Tskhovrebov's 2017 EP antibody/annex, released under the moniker Viewfinder. The composition aims to explore the boundaries of what can be considered 'dance music': it borrows a rigid rhythmic structure from forms like techno, with layers of processed field recordings and microtonal synth work painting soundscapes that blur the lines between synthetic and organic/past and present/perceived and imagined. Anna hopes to continue capturing and evolving unexpected timbres and inexplicable emotions in the context of rhythmic structures that interrogate our linear perception of time.
Music Playback (6:24). Location: Stage (317), Time: Concert (2:30-3:30) piece 4/6
Music, Computing, Design
Ge Wang
This presentation addresses design as an artistic and intellectual discipline, as well as its relationship to music and technology.
15-minute Presentation/Lecture. Location: Stage (317), Time: 11:20-11:40
Feedback Network Ensemble
Matthew Wright, Alex Chechile, Mark Hertensteiner, Christopher Jette
Live Musical Performance (~12 minutes). Location: Stage (317), Time: Concert (2:30-3:30) piece 1/6
Tunnel Race + Fly swatting
Shenli Yuan, Ziheng Chen
Fly swatting: an audio-based two-person interactive game. Tunnel race: a video game with audio effect and hardware installation.
Demo. Location: Recording Studio (124), Time: 10am-1pm
Real-Time Wave Digital Simulation of Cascaded Vacuum Tube Amplifiers using Modified Block-Wise Method
Jingjie Zhang, Julius O. Smith III
Vacuum tube amplifiers, known for their acclaimed distortion characteristics, are still widely used in Hi-Fi audio devices. However, bulky, fragile and power-consuming vacuum tube devices have also motivated much research on digital emulation of vacuum tube amplifier behaviors. Recent studies on Wave Digital Filters (WDF) have made possible the modeling of multi-stage vacuum tube amplifiers within single WDF SPQR trees. Our research combines the latest progress on WDF with the modified block-wise method to reduce the overall computational complexity of modeling cascaded vacuum tube amplifiers by decomposing the whole circuit into several small stages containing only two adjacent triodes. Certain performance optimization methods are compared and applied in the eventual real-time implementation.
Poster/Demo. Location: Grad Workspace (305), Time: all day