I experimented with 15 different configurations: the base features from the starter code, plus combinations of four additional features: Chroma, SFM, ZeroX, and Kurtosis. I tested the base code alone; each feature individually (Kurtosis, Chroma, SFM, ZeroX); every pair (Chroma + Kurtosis, Chroma + SFM, Chroma + ZeroX, Kurtosis + SFM, Kurtosis + ZeroX, SFM + ZeroX); three of the triples (Chroma + Kurtosis + SFM, Chroma + Kurtosis + ZeroX, SFM + ZeroX + Kurtosis); and all four together (Chroma + SFM + ZeroX + Kurtosis). The base model had an accuracy of 0.432. The best-performing model was Chroma + Kurtosis + ZeroX at 0.491 accuracy, a substantial jump from the base model! Surprisingly, though, the other configurations that included ZeroX performed poorly, averaging around 0.1 accuracy.
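Sweeping feature subsets like this can be automated. A minimal sketch that enumerates the base configuration plus every subset of the four extra features (only the feature names come from above; the actual extraction and classifier live in the starter code):

```python
from itertools import combinations

# The four additional features tried above; the base features are implicit.
EXTRAS = ["Chroma", "SFM", "ZeroX", "Kurtosis"]

def feature_configs(extras):
    """Enumerate the base configuration (empty tuple) plus every
    non-empty subset of the extra features."""
    configs = [()]
    for k in range(1, len(extras) + 1):
        configs.extend(combinations(extras, k))
    return configs

configs = feature_configs(EXTRAS)
print(len(configs))  # 16: base + 4 singles + 6 pairs + 4 triples + 1 quad
```

Note that a full enumeration yields 16 configurations; the 15 tried above omit one of the four possible triples (Chroma + SFM + ZeroX).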
So far I've experimented with whistling as input and with changing the features from Phase 1. I found it hard to get the synth to match the pitch of my whistling using the best features from Phase 1. After experimenting with removing features, I found that keeping only Chroma + MFCC + Centroid gave the best pitch-matching to my whistling, although I still believe there is room for improvement there. I also plan on using more audio from Daft Punk's Discovery album, since right now I am using just "Digital Love," the third track on the album.
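To illustrate one of these features: the spectral centroid is the magnitude-weighted mean frequency of a frame, a rough measure of brightness. A minimal NumPy sketch (an illustration, not the starter code's implementation; the frame size and sample rate are arbitrary choices):

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of one audio frame."""
    windowed = frame * np.hanning(len(frame))  # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = spectrum.sum()
    return float((freqs * spectrum).sum() / total) if total > 0 else 0.0

sr = 8000
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 1000 * t)  # a pure 1 kHz tone
centroid = spectral_centroid(tone, sr)  # lands close to 1000 Hz
```

For a pure tone the centroid sits at the tone's frequency; for real whistling or synth audio it tracks where the spectral energy is concentrated.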
This system responds to whatever is currently playing in an audio file with Daft Punk samples. The samples were extracted using Audacity's beat analyzer, together with an algorithm I created to fill in large empty stretches where Audacity couldn't detect beats.
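The gap-filling idea can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code; the `max_gap` threshold and the evenly spaced fallback points are assumptions:

```python
def fill_beat_gaps(beats, duration, max_gap):
    """Return beat timestamps augmented with evenly spaced fallback
    points wherever the detector left a stretch longer than max_gap,
    so every region of the song yields an extraction."""
    anchors = [0.0] + sorted(beats) + [duration]
    extra = []
    for a, b in zip(anchors, anchors[1:]):
        if b - a > max_gap:
            n = int((b - a) // max_gap)   # number of fallback points needed
            step = (b - a) / (n + 1)
            extra.extend(a + step * i for i in range(1, n + 1))
    return sorted(set(beats) | set(extra))

# Detector found beats at 1s, 2s, and 10s in a 12-second clip;
# the 2s-10s stretch gets regular fallback points.
points = fill_beat_gaps([1.0, 2.0, 10.0], duration=12.0, max_gap=2.0)
```

After filling, no two consecutive extraction points are more than `max_gap` apart, while the original detected beats are kept exactly where they were.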
What if the people who were sampled by Daft Punk sampled them back? In Phase 3 I adjusted some more features and wrote more code to extract features from all of Daft Punk's Discovery album, then used George Duke's "I Love You More" as input to the mosaic, since "Digital Love" samples its riff from that song. The resulting mosaic is effectively George Duke remixing Daft Punk.

Before extracting features, I ran all of the pieces through Audacity's beat finder analyzer to create labels for each song, where each label marks a detected beat's timestamp. One challenge I ran into was that Audacity sometimes wouldn't detect all of the beats in a song, leaving entire sections without any beats. I remedied this by creating an algorithm that traverses these beat-empty regions by sampling at regular intervals, as if the code weren't looking out for beats at all. This way, every location in the song gets extracted, and extractions near a beat snap to that beat.

I do think the performance of this mosaic could have been improved if I had extracted the beats by hand and recorded the duration of each extracted beat, which would have been about a measure in each case but a different number of seconds. I would then perform the FFT window on each extracted beat and save this information in a text file. During real-time synthesis, I would retrieve the sound closest to whatever is playing and play it for the duration of the original beat; I believe this approach would have made the output more coherent. However, hand-labelling beats would have taken too long, so I opted for the beat analyzer instead.
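The hand-labelled pipeline described above (one feature vector plus a duration per beat, nearest-neighbour lookup at synthesis time) could be sketched like this. Everything here, including the class name, the Euclidean distance, and the metadata layout, is a hypothetical illustration rather than the project's actual code:

```python
import numpy as np

class BeatDatabase:
    """Store one feature vector per extracted beat, along with its
    start time and duration; look up the closest beat at synthesis."""
    def __init__(self):
        self.features = []  # one feature vector per beat
        self.meta = []      # (start_time_s, duration_s) per beat

    def add(self, vector, start, duration):
        self.features.append(np.asarray(vector, dtype=float))
        self.meta.append((start, duration))

    def closest(self, query):
        """Return (start, duration) of the beat whose features are
        nearest (Euclidean) to the live input's features."""
        q = np.asarray(query, dtype=float)
        i = int(np.argmin([np.linalg.norm(f - q) for f in self.features]))
        return self.meta[i]

db = BeatDatabase()
db.add([0.1, 0.9], start=0.0, duration=1.8)  # beat A
db.add([0.8, 0.2], start=1.8, duration=2.1)  # beat B
start, duration = db.closest([0.75, 0.25])   # query is nearest to beat B
```

Playing each retrieved sample for its stored duration, rather than a fixed hop, is what would keep the output aligned to whole musical beats.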