Synth-Guitar Mosaic

Final Project: Final Deliverable: The Final Season: Part 3: Part 2: The Final Chapters

(is this an Attack on Titan reference?)

System Description

This is an interactive system that takes a guitar signal as input and implements a mosaic. By pressing any key on my keyboard, I can switch between two modes: call-and-response mode and concurrent mode. In call-and-response mode, I press my pedal once to start recording a guitar phrase via LiSa; when I press it a second time, the program stops recording and plays back the resulting mosaic of the recording. Concurrent mode plays the mosaic and the live guitar signal simultaneously.
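The control flow above can be sketched as a small state machine. This Python sketch is illustrative only: the names (Controller, on_key, on_pedal) are hypothetical, and the actual project is written in ChucK using Hid events and LiSa rather than anything shown here.

```python
# Hypothetical sketch of the keyboard/pedal control logic, under the
# assumption that any key toggles the mode and the pedal toggles recording.
CALL_AND_RESPONSE = "call-and-response"
CONCURRENT = "concurrent"

class Controller:
    def __init__(self):
        self.mode = CALL_AND_RESPONSE
        self.recording = False
        self.events = []  # action log; stands in for LiSa/mosaic calls

    def on_key(self):
        # any key press switches between the two modes
        if self.mode == CALL_AND_RESPONSE:
            self.mode = CONCURRENT
        else:
            self.mode = CALL_AND_RESPONSE
        self.events.append(("mode", self.mode))

    def on_pedal(self):
        if self.mode == CALL_AND_RESPONSE:
            if not self.recording:
                self.recording = True
                self.events.append(("record", "start"))  # lisa.record(1) in ChucK
            else:
                self.recording = False
                self.events.append(("record", "stop"))   # lisa.record(0)
                self.events.append(("mosaic", "play"))   # play back the mosaic response
        else:
            # concurrent mode: the mosaic already runs alongside the live guitar
            self.events.append(("mosaic", "live"))
```

The second pedal press doing double duty (stop recording, then trigger the response) is what makes the call-and-response feel conversational.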



This was the most fun I've had developing a project. One of my big takeaways was that I don't need a complicated machine learning setup to create a cool AI system that sounds good. I extract only two features from all of my samples: chroma and centroid. I chose these because, to me, this combination sounded the best and felt the most harmonious with my guitar playing.

What I particularly like about this setup is the added controllability from the pedal, because it lets me choose how long the mosaic should be and which parts of my guitar playing it draws from. That way, during the response, I can decide to layer on more guitar playing. Switching between the two modes is also really neat when I want to hear the mosaic's response on its own. Another improvement over the previous project iteration is that the samples simply sound better than the Daft Punk mosaic: the Daft Punk samples were too cacophonous to make anything sound good, while the ambient synthesizers used here sound much better.

I like to view this project as an exploration, a microcosm where AI and humans interact to create something greater than the sum of their parts. In this relationship, both are equal. I used to fully believe (and to a lesser extent still do) that AI's purpose should be to serve humans, but after thinking about it, if AI gains sentience and is able to produce beautiful music and art with humans, maybe it'd be better to treat it as an equal.
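The matching step implied here, picking the stored synth sample whose features best fit the incoming guitar audio, amounts to a nearest-neighbor lookup over small feature vectors. A minimal Python sketch, assuming each vector is 12 chroma bins plus one centroid value and using Euclidean distance (the function names and the metric are my assumptions; the actual project uses ChucK's analysis and mosaicing objects):

```python
import math

def nearest_sample(query, database):
    """Return the id of the database entry whose feature vector
    (e.g. 12 chroma bins + 1 spectral centroid) is closest to the query."""
    best_id, best_dist = None, float("inf")
    for features, sample_id in database:
        d = math.dist(query, features)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_id, best_dist = sample_id, d
    return best_id
```

With only 13 dimensions and a modest sample bank, a brute-force scan like this is fast enough to run per analysis frame in real time.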


I would like to acknowledge Ge and the ChucK team for providing great example code for keyboard input and LiSa, as well as mosaicking. I would also like to acknowledge the YouTuber RemixSample for providing audio for the synthwave samples that are mosaicked.