Blue Lotus 


Blue Lotus is a DJ tool created with machine learning in ChucK. I used the vocal sample from “Renaissance” by Cristobal Tapia de Veer, featured in the show “The White Lotus”. I divided the vocal sample into three .wav files, splitting wherever the vocal quality seemed distinct enough, and used each file to train a separate model for synthesis. The intention behind this tool is to offer a new expressive instrument for DJing or production.
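As a rough illustration of the three-way split, the sections can be loaded and auditioned in ChucK like this. The file names are hypothetical stand-ins, not the project's actual assets, and the model training itself is outside this sketch:

```chuck
// hypothetical file names for the three pre-split vocal sections
["section_a.wav", "section_b.wav", "section_c.wav"] @=> string files[];

SndBuf buf => dac;

// audition each section in turn
for( 0 => int i; i < files.size(); i++ )
{
    // load the file and rewind to the start
    files[i] => buf.read;
    0 => buf.pos;
    // let the section play through
    buf.length() => now;
}
```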

The milestones leading up to this final version can be found here.

Phase 3 (final) updates:

% chuck


Performance workflow:

In the video, I am highlighting the potential of this tool to augment live mixing/DJing and other performance. I am DJing and live sampling in the traditional sense, then using the output signal of my DJ software to drive the vocals, while still controlling which synthesized vocal quality I want and when. The jog wheel offers a new take on scratch DJing. It is not really scratching at all: it controls the number of frames used in the synthesis, which amounts to a kind of heavily quantized scratching, and it is very unlikely you will learn real scratch DJing by practicing with this. But it offers interesting results as a higher-level control.
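The jog-wheel mapping could be sketched in ChucK along these lines. The device index, CC number, and frame range here are assumptions for illustration, not the project's actual implementation:

```chuck
// sketch: map a jog-wheel MIDI CC to a synthesis frame count
// (device index 0, CC 34, and the 1..64 range are hypothetical)
MidiIn min;
MidiMsg msg;
if( !min.open( 0 ) ) me.exit();

1 => int frames;  // number of frames used in the synthesis

while( true )
{
    // wait for incoming MIDI
    min => now;
    while( min.recv( msg ) )
    {
        // data1 = status byte, data2 = CC number, data3 = value
        if( (msg.data1 & 0xF0) == 0xB0 && msg.data2 == 34 )
        {
            // scale the 0..127 controller value to at least 1 frame
            Math.max( 1, msg.data3 / 2 ) $ int => frames;
        }
    }
}
```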


Serato SB3-N DJ Controller to ChucK and Serato DJ Pro

Maschine MK3, Maschine Studio


Demo: Blue Lotus | Music & AI | Featured Artist | Live Mix



In Phase 2 I felt the vocal sample I used had too much going on. So this time I went with the weirdest vocal sample in my library and thought about how I could leverage the controller and the sample's unique qualities. I spent as much time pre-processing the tracks and prototyping (non-ML, traditional live sampling/DJing) as I did coding, to see if the higher-level idea was worth it. Based on what I have made, it seems to be a workflow that transfers mainly to vocal samples.

Most of the performance is just hoping for the best, and I've learned the value of setting myself up to be able to hope for the best and still get a cohesive result. What didn't work: I wanted to try using an envelope follower to gate the output synth based on what is going on in the backing track, but I ended up allocating more time to making the simple things good and just doing the gating as a human.

The main outcome I was going for was something useful and musical, so the scope of the project followed that idea. In learning to perform with it, I tried to give the ML component as much room in the session as a human would get in a jazz trio. At times it was kind of like a diva going off on crazy solos, but there were also times it gave me really surprising expressive components. In Phase 2 I did not seriously consider integrating this into my practice, but in Phase 3 I'm entertaining the idea of using it in my DJ sets.
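The envelope-follower gating idea that was set aside could be sketched in ChucK roughly as follows. The signal routing, threshold, and the SinOsc stand-in for the synthesized voice are all assumptions, not code from the project:

```chuck
// sketch: gate a synth's level by the energy of the backing track
// (adc as the backing-track input and the 0.05 threshold are hypothetical)
adc => FullRect rect => OnePole env => blackhole;  // envelope follower
0.999 => env.pole;                                 // smoothing amount

SinOsc synth => Gain gate => dac;  // stand-in for the trained synth voice
0 => gate.gain;

while( true )
{
    // open the gate only while the backing track is active
    (env.last() > 0.05 ? 1.0 : 0.0) => gate.gain;
    10::ms => now;
}
```

In the performance this gating ended up being done by hand instead, as described above.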



I'd like to thank walks with Mother Nature for help conceptualizing the idea, Cristobal Tapia de Veer for inspiration, Professor Ge Wang for guidance, and Nick Shaheed for help hacking. Additionally, I am using a track called La cle de Temps by N'to played through the DJ software.