type "m" for the menu of pages and
arrow or page keys to advance or rewind
two-finger tap for the menu of pages and
the usual left/right swipe to advance or rewind
Teaching JackTrip (online)
Online course teaching JackTrip essentials
offered by Kadenze and Stanford Online
3 April 2018 (6 weeks)
Today's vast amount of streaming and video conferencing on the Internet lacks one aspect of musical fun, and that's what this course is about: high-quality, near-synchronous musical collaboration. Under the right conditions, the Internet can be used for ultra-low-latency, uncompressed sound transmission. The course teaches open-source (free) techniques for setting up city-to-city, studio-to-studio audio links. Distributed rehearsing, production and split-ensemble concerts are the goal. Setting up such links and debugging them requires knowledge of network protocols, network audio issues, mics, monitors and some ear training.
Internet Acoustics (for network reverb)
Internet Acoustics is the study of sound traveling through the Internet, treating it as an acoustical medium just like air or water. Real-time streaming of sound, something commonplace nowadays, can be exploited for its own "physics" of propagation.
(see) Chris Chafe, "I am Streaming in a Room," Computational Audio, Mark Sandler, ed., Frontiers in Digital Humanities (forthcoming, 2018)
Early study of internet acoustics at CCRMA required the development of a system for low-latency, uncompressed audio streaming over IP. That software evolved into JackTrip and is shared today as an open-source application widely used for jamming, rehearsing and concerts. But its origin was in an experiment to treat acoustical loops in the Internet as sound-producing objects, an idea related to common methods for physical-modeling sound synthesis.
(see) Juan-Pablo Cáceres and Chris Chafe, "JackTrip: Under the Hood of an Engine for Network Audio," J. New Music Res. (2010)
JackTrip provides bidirectional, low-latency, uncompressed audio over IP
Simple stretched string physical model with filtered delay loop (FDL)
Julius O. Smith III, Elementary Digital Waveguide Models for Vibrating Strings
https://ccrma.stanford.edu/~jos/SimpleStrings/Karplus_Strong_Algorithm.html
A Karplus-Strong-like algorithm entered the realm of internet acoustics through experimentation between two hosts, which produced a distributed algorithm for "plucking the Internet." The algorithm's delay memory was no longer local computer memory (as in the original KS string) but the time of flight across an internet path.
(see) Chris Chafe, Scott Wilson and Daniel Walling, "Physical Model Synthesis with Application to Internet Acoustics," ICASSP (2002)
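The local form of the string is easy to sketch. Below is a minimal Karplus-Strong pluck in Python (an illustrative sketch, not the experiment's code): a noise burst circulates in a delay line whose length sets the pitch, and a two-point averaging filter in the loop keeps it high-Q so the string rings. In the distributed version described above, the delay line is replaced by the round trip across an internet path.

import numpy as np

def karplus_strong(freq, dur, fs=44100):
    # Delay-line length sets the pitch (loop time = N/fs seconds).
    N = int(fs / freq)
    # Initial excitation: a burst of noise ("plucking" the loop).
    buf = np.random.uniform(-1.0, 1.0, N)
    out = np.empty(int(fs * dur))
    for i in range(len(out)):
        out[i] = buf[i % N]
        # Loop filter: a two-point average, a gentle lowpass that keeps
        # the loop high-Q so the tone decays slowly.
        buf[i % N] = 0.5 * (buf[i % N] + buf[(i + 1) % N])
    return out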
Network audio loop creates a resonance like a stretched string
Vibrating guitar strings and echoing parallel walls are both modeled with FDLs. For a KS-like guitar string, the loop filter is tuned to be "ringy" (high Q). The parallel-wall case is the opposite, typically a very damped (low-Q) loop. Once the internet version of the guitar string had proved that time delay could be obtained from the network, it was natural to contemplate implementing internet reverberators using well-known, FDL-based reverberation modeling.
(see) Chris Chafe, "Distributed Internet Reverberation for Audio Collaboration,"
AES: 24th Int'l Conf.: Multichannel Audio (2003)
Basic reverberator with 1 FDL, but imagine many parallel FDLs, each slightly different in length
Basic network reverberator showing 1 FDL
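The parallel-FDL idea can be sketched in Python as well (illustrative only: a local delay line stands in for the network time of flight, and the loop lengths, feedback gain and damping are made-up values, not taken from the paper). Each loop is the low-Q counterpart of the string above: heavy damping in the loop filter turns the ring into a room-like decay, and summing several loops of slightly different lengths thickens the response.

import numpy as np

def fdl_reverb(x, fs=44100, loop_ms=(29.7, 31.1, 33.7, 36.1), g=0.84, damp=0.3):
    # Parallel bank of filtered delay loops (low-Q combs):
    # each loop computes y[n] = x[n] + g * lowpass(y[n - M]).
    out = np.zeros(len(x))
    for ms in loop_ms:
        M = int(fs * ms / 1000.0)  # loop length; slightly different per FDL
        buf = np.zeros(M)          # circulating delay memory
        lp = 0.0                   # one-pole lowpass state (the damping filter)
        y = np.zeros(len(x))
        for n in range(len(x)):
            d = buf[n % M]                     # signal emerging from the loop
            lp = (1.0 - damp) * d + damp * lp  # heavy damping -> low Q
            y[n] = x[n] + g * lp
            buf[n % M] = y[n]                  # feed back into the loop
        out += y / len(loop_ms)
    return out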
In a digitally connected telecommunication world, rooms of this kind enclose remotely collaborating musicians in their own reverberated sound. The resulting ambience is the product of the acoustical loop, which creates room-like resonances.
These are synthesized acoustical spaces engineered to resemble actual rooms and distinct from other kinds of online rooms where "room" is used metaphorically for gatherings of users participating in teleconference or chat applications.
Human Synchronization
Studies of rhythmic accuracy and temporal separation
Tapping to a metronome (one-way rhythm shown here)
is different from two humans in a two-way coupled rhythm.
Interlocking rhythm used to test the effect of temporal separation.
Subjects = students and staff at Stanford (paired randomly)
Task = play rhythm accurately, keep an even tempo (no strategies given)
Experimental setup.
Sound (3 ms delay each, metronome cue = mm 94)
Delays tested.
Deceleration from longer delay, but where does it start to cause trouble?
Sound (78 ms delay each, metronome cue = mm 90)
Human tempo acceleration and coupled oscillator model.
(see) Juan-Pablo Cáceres, Synchronization in Rhythmic Performance with Delay, PhD thesis (2013)
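The flavor of such a model fits in a few lines of Python (a toy, not the thesis model; the correction gain alpha and anticipation bias are made-up values). Each player hears the partner's beat one one-way delay late and nudges their own period by the perceived asynchrony; a small built-in tendency to anticipate makes short-delay pairs accelerate, while longer delays tip the pair into deceleration.

import numpy as np

def coupled_tappers(delay, n_beats=40, period0=60/94.0, alpha=0.1, bias=0.0012):
    # Toy period-correction model of two mutually coupled tappers.
    tA = tB = 0.0        # onset times of the current beat
    pA = pB = period0    # each player's current period (cue = mm 94)
    tempo = []
    for _ in range(n_beats):
        asyncA = tA - (tB + delay)    # negative: A feels early vs. heard B
        asyncB = tB - (tA + delay)
        pA += -alpha * asyncA - bias  # feeling early -> slow down; bias pulls faster
        pB += -alpha * asyncB - bias
        tA += pA
        tB += pB
        tempo.append(60.0 / pA)       # A's instantaneous tempo in BPM
    return np.array(tempo)

# Short delays end up faster than the cue; long delays end up slower.
for d_ms in (3, 78):
    print(d_ms, "ms one-way:", round(coupled_tappers(d_ms / 1000.0)[-1], 1), "BPM")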
(click below) to run a script which studies interaction synchrony by tapping to a beat
tap along with adaptive algorithm at 0ms delay (accelerates)
primitive algorithm vs. human tapper
What's a human tapper doing?
A participant is given 5 versions to tap along with:
tap along with adaptive algorithm
tap along with straight metronome
tap along with sinusoidal fluctuating metronome
tap along with adaptive at 0ms delay
tap along with adaptive at 40ms delay
Questions:
when do they sense that their "partner" is listening?
do they learn to anticipate regular metronome fluctuations?
can we reproduce the effects of two-way delay?
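The adaptive algorithm in these trials behaves like a phase-correcting metronome, sketched below in Python (an assumed behavior for illustration, not the experiment's actual code): after each click it measures the human's asynchrony and shifts the next click toward the tap, so machine and human form a two-way coupled rhythm rather than a one-way metronome.

def next_click(prev_click, tap, period=60/94.0, alpha=0.5):
    # One step of a phase-correcting adaptive metronome: shift the next
    # click toward the human's tap by a fraction alpha of the asynchrony.
    asynchrony = tap - prev_click   # negative: the human tapped early
    return prev_click + period + alpha * asynchrony

# A simulated tapper who anticipates every click by 20 ms drags the machine
# ahead of the nominal period; mutual correction of this kind is one way the
# "adaptive at 0ms delay" condition ends up accelerating.
click, intervals = 0.0, []
for _ in range(8):
    tap = click - 0.020             # human taps 20 ms early
    nxt = next_click(click, tap)
    intervals.append(round(nxt - click, 3))
    click = nxt
print(intervals)                    # each interval < 60/94 ≈ 0.638 s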
First (buggy) attempt using Web Audio & Mechanical Turk
Meltdown with long delays between a synthetic tapper (blue) and a human tapper (red)
Post-trial comment: "This was really fun to do. That D beat was really crazy to follow. This would make a cool game. Thank you."
adaptive algorithm coupled with human tapper, 78 msec delay
Adaptive algorithm coupled with human tapper: 0 msec, 6 msec and 40 msec RTT delay