An audio effect that extends the sustain of a musical note in real time, implemented on a standalone fixed-point processor: a DSP Shield running the TI C5535. Onset detection uses a leaky integrator to look for new musical notes; once a note decays to steady state, the audio is looped indefinitely until a new onset arrives. To loop the audio cleanly, pitch detection extracts a period by looking for local minima in the Average Magnitude Difference Function (AMDF), and the output buffer is written in a phase-aligned manner.
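The AMDF-based period estimate can be sketched as follows, in floating-point Python for clarity (the project itself ran in fixed point on the C5535; frame length and lag range here are illustrative):

```python
import numpy as np

def amdf_period(x, min_lag=20, max_lag=400):
    """Estimate the pitch period (in samples) as the lag that minimizes
    the Average Magnitude Difference Function (AMDF)."""
    x = np.asarray(x, dtype=float)
    lags = np.arange(min_lag, max_lag)
    amdf = np.array([np.mean(np.abs(x[lag:] - x[:-lag])) for lag in lags])
    # The AMDF dips toward zero at multiples of the true period;
    # the search range is chosen so the fundamental lag is the minimum.
    return lags[np.argmin(amdf)]

# A sine with an exact 100-sample period; restrict the search so only
# the fundamental period (and not its multiples) falls inside it.
frame = np.sin(2 * np.pi * np.arange(2000) / 100.0)
print(amdf_period(frame, max_lag=150))  # -> 100
```

Once the period is known, the loop point can be chosen a whole number of periods back from the write position, which is what keeps the looped output phase aligned.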
Effectrons is a guitar effects chain built in C++ with openFrameworks. The audio input is the nucleus of the cell, and the effects are electrons orbiting it at different radii. Shoot lasers at the effects to change their radii, which changes the effect parameters and generates new sounds. Rock out with a guitar, or a GameTrak if you have one!
Chuck-o-der is a vocoder built in ChucK that takes input from a microphone and a MIDI synth, and modifies the magnitude spectrum of the synth according to the magnitude spectrum of the microphone input. This is essentially cross-synthesis, with the microphone input as the modulator and the MIDI synth as the carrier. Change the texture and pitch of your voice with the MIDI synth and produce cool vocal sounds.
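The core spectral operation is simple to state: keep the carrier's phase, impose the modulator's magnitude. A minimal non-overlapping frame-by-frame sketch in Python (the project itself is in ChucK; frame size and the lack of overlap-add are simplifications):

```python
import numpy as np

def cross_synthesize(modulator, carrier, frame=1024):
    """Impose the modulator's magnitude spectrum on the carrier's
    phase spectrum, one frame at a time (no overlap, for simplicity)."""
    n = min(len(modulator), len(carrier))
    out = np.zeros(n)
    for start in range(0, n - frame + 1, frame):
        M = np.fft.rfft(modulator[start:start + frame])
        C = np.fft.rfft(carrier[start:start + frame])
        # Combine |M| with the phase of C and resynthesize.
        out[start:start + frame] = np.fft.irfft(
            np.abs(M) * np.exp(1j * np.angle(C)))
    return out

# The output frame's magnitude spectrum follows the modulator:
n = np.arange(2048)
mod = np.sin(2 * np.pi * 50 * n / 1024)    # energy in bin 50
car = np.sin(2 * np.pi * 200 * n / 1024)   # energy in bin 200
out = cross_synthesize(mod, car)
```

A real vocoder would use overlapping windowed frames with overlap-add, and typically a smoothed (envelope-only) modulator magnitude rather than the raw spectrum.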
Chord recognition is a fundamental MIR task involving harmonic analysis of Western music, and a first step toward more advanced tasks such as genre and mood identification. The first stage is to compute the Pitch Class Profile, which gives an idea of the constituent notes of a chord. To map individual musical notes into the frequency domain, the Constant-Q Transform is used. In monophonic music, simple Binary Template Matching is fairly successful at recognizing chords; in complex polyphonic music, advanced methods such as Hidden Markov Models are needed.
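Binary template matching reduces to a dot product between the 12-bin Pitch Class Profile and a 0/1 mask for each candidate chord. A minimal sketch covering only major and minor triads (real systems include more chord qualities and normalize the PCP):

```python
import numpy as np

NOTE_NAMES = "C C# D D# E F F# G G# A A# B".split()

def chord_template(root, quality):
    """Binary 12-bin template: 1 at each pitch class of the triad."""
    intervals = {"maj": [0, 4, 7], "min": [0, 3, 7]}[quality]
    t = np.zeros(12)
    t[[(root + i) % 12 for i in intervals]] = 1.0
    return t

def recognize_chord(pcp):
    """Return the chord label whose binary template best matches the PCP."""
    best, best_score = None, -np.inf
    for root in range(12):
        for quality in ("maj", "min"):
            score = float(np.dot(pcp, chord_template(root, quality)))
            if score > best_score:
                suffix = "" if quality == "maj" else "m"
                best, best_score = NOTE_NAMES[root] + suffix, score
    return best

# A PCP with energy on C, E, G matches the C major template.
pcp = np.zeros(12)
pcp[[0, 4, 7]] = 1.0
print(recognize_chord(pcp))  # -> "C"
```

The HMM approaches mentioned above replace this frame-wise maximum with emission probabilities per chord state plus transition probabilities, which smooths out spurious frame-level errors.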
Speaker recognition is the process of identifying a unique speaker by analyzing their speech. The main objective was to implement well-known speaker recognition algorithms in Python, which is becoming the language of choice for scientific computing. Two features, MFCCs and LPCs, are extracted from each speaker, and Vector Quantization (k-means clustering) with the LBG (Linde-Buzo-Gray) algorithm is used to train on the data set and form speaker-specific codebooks.
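The LBG flavor of VQ training starts from the global centroid and alternates codeword splitting with k-means (Lloyd) refinement until the codebook reaches the target size. A minimal sketch (feature extraction omitted; `eps` is the usual small splitting perturbation):

```python
import numpy as np

def lbg_codebook(features, size=8, eps=0.01, iters=20):
    """Train a VQ codebook with the LBG splitting algorithm.
    features: (n_frames, n_dims) array of feature vectors (e.g. MFCCs)."""
    codebook = features.mean(axis=0, keepdims=True)  # global centroid
    while len(codebook) < size:
        # Split each codeword into a slightly perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):  # Lloyd / k-means refinement
            dists = np.linalg.norm(features[:, None] - codebook[None], axis=2)
            nearest = dists.argmin(axis=1)
            for k in range(len(codebook)):
                members = features[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Two well-separated point clouds yield one codeword per cloud.
features = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
cb = lbg_codebook(features, size=2)
```

At recognition time, each speaker's codebook quantizes the test utterance's features, and the speaker whose codebook gives the lowest total distortion is chosen.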
Audio effects are used extensively in music performance and production. With digital computers and simple audio filters, it is possible to simulate them easily on a microcontroller or personal computer. This was a presentation done as part of a seminar in my pre-final year curriculum, where I merely scratched the surface of the mad world of guitar effects. The signal processing behind distortion, overdrive, wah-wah, tremolo, and flanger was explored and implemented in MATLAB. I wish to expand this into a real-time Arduino-based guitar effects processor.
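Two of these effects are a few lines each: distortion is a memoryless waveshaper, and tremolo is amplitude modulation by a low-frequency oscillator. The original work was in MATLAB; the sketch below uses Python, with a tanh shaper standing in for the clipping curve (parameter values are illustrative):

```python
import numpy as np

def distortion(x, gain=20.0):
    """Soft-clipping distortion: boost the signal, then squash it
    with tanh so the output stays within (-1, 1)."""
    return np.tanh(gain * x)

def tremolo(x, fs, rate=5.0, depth=0.5):
    """Amplitude modulation by a sinusoidal LFO at `rate` Hz;
    `depth` in [0, 1] sets how deep the volume dips go."""
    t = np.arange(len(x)) / fs
    lfo = 1.0 - depth * (0.5 + 0.5 * np.sin(2 * np.pi * rate * t))
    return x * lfo

fs = 8000
x = 0.5 * np.sin(2 * np.pi * 220 * np.arange(fs) / fs)
dist = distortion(x)
trem = tremolo(x, fs)
```

Wah-wah and flanger need state (a swept bandpass filter and a modulated delay line respectively), which is why they are the harder ones to port to a small fixed-point target like an Arduino.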