Blue Lotus 


Blue Lotus is a DJ tool created with ChucK and machine learning. I used the vocal sample from “Renaissance” by Cristobal Tapia de Veer, featured in “The White Lotus”. I divided the vocal sample into three .wav files; each section is one I deemed different enough in vocal quality to stand on its own. I used these files to train three models for synthesis. The intention is for this to be a new expressive instrument for DJing or production. 


Phase 3 features:



% chuck



Performance workflow:

In the video, I highlight the potential of this tool to augment live mixing/DJing and other performance. I DJ and live sample in the traditional sense, then use the output signal of my DJ software to drive the vocals, while still controlling which synthesized vocal quality I want and when. The jog wheel offers a new take on scratch DJing: it is not really scratching at all, since it controls the number of frames used in the synthesis, which amounts to very quantized scratching, and it is very unlikely you will learn real scratch technique practicing with this. But it offers interesting results as a higher-level control.
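The jog-wheel mapping described above could be sketched in ChucK roughly as follows. This is a minimal illustration, not the project's actual code: the device index, the CC number, the relative-encoder convention, and the starting frame count are all assumptions about how the controller might be wired up.

```chuck
// hedged sketch: map jog-wheel MIDI input to the frame count
// used by the synthesis (all constants below are assumptions)
MidiIn min;
MidiMsg msg;

0 => int device;    // assumption: controller is MIDI device 0
34 => int JOG_CC;   // assumption: jog wheel sends CC 34
8 => int frames;    // number of frames used in synthesis

if( !min.open(device) ) me.exit();

while( true )
{
    // wait for a MIDI event
    min => now;
    while( min.recv(msg) )
    {
        // status byte 0xB0 = control change on channel 1
        if( (msg.data1 & 0xF0) == 0xB0 && msg.data2 == JOG_CC )
        {
            // assumption: relative encoder, values > 64 mean clockwise
            if( msg.data3 > 64 ) frames++;
            else if( frames > 1 ) frames--;
            <<< "frames:", frames >>>;
        }
    }
}
```

Because the wheel steps the frame count in whole-frame increments, the "scratching" is quantized by nature, which matches the higher-level, coarse-grained control described above.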


Serato SB3-N DJ Controller to ChucK and Serato DJ Pro

Maschine MK3 to Maschine Studio


300-Word Response

In Phase 2 I felt the vocal sample I used had too much going on. So this time I went with the weirdest vocal sample in my library and thought about how I could leverage the controller and the unique qualities of that sample. I spent as much time pre-processing the tracks and prototyping (non-ML, traditional live sampling/DJing) as I did coding, to see if the higher-level idea was worth it. Based on what I have made, it seems to be a transferable workflow, mainly for vocal samples. Most of the performance is just hoping for the best, and I’ve learned the value of setting myself up to be able to hope for the best and still get a cohesive result. What didn’t work: I wanted to try using an envelope follower to gate the output synth based on what is going on in the backing track, but I ended up allocating more time to making the simple things good and just doing the gating as a human. The main outcome I was going for was something useful and musical, so the scope of the project followed that idea. In learning to perform with it, I tried to give the ML aspect as much room in the session as a human would get in a jazz trio. At times it was kind of like a diva going off on crazy solos, but there were also times it gave me really surprising expressive components. In Phase 2 I really did not consider integrating this into my practice. But in Phase 3 I’m entertaining the idea of taking it to the next level: trolling people at Burning Man.
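The envelope-follower gating idea mentioned above (the one I ended up doing by hand) could look something like this in ChucK. This is a sketch of the general technique, not code from the project; the threshold, smoothing pole, and the SinOsc stand-in for the vocal synth are all assumptions.

```chuck
// hedged sketch of the envelope-follower gate idea:
// follow the backing track's amplitude and open/close the synth output

// classic ChucK envelope follower: rectify, then low-pass
adc => FullRect rect => OnePole env => blackhole;
0.999 => env.pole;   // smoothing amount (assumption: tune by ear)

// stand-in for the vocal synth output chain
SinOsc synth => Gain gate => dac;
0.02 => float threshold;   // assumption: depends on input level

while( true )
{
    // open the gate only while the backing track is active
    (env.last() > threshold ? 1.0 : 0.0) => gate.gain;
    10::ms => now;
}
```

A refinement would be to ramp `gate.gain` with an Envelope instead of switching it instantly, to avoid clicks, but the hard gate shows the basic shape of the idea.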





I'd like to thank the streets for teaching me how to have the nerve to do things, Cristobal Tapia de Veer for inspiration, Professor Ge Wang for some ideas, and Nick Shaheed for coding help. Additionally, I am using a track called La cle de Temps by N'to.