
Weekly Progress

Week 11

Jun 6th 2019

Done with quarter. Enjoy your summer.

Jun 10th 2019

Final Presentation is here
The visualization demo link

Week 10

Jun 6th 2019

Started putting together the website for the demo. Created a basic version with the filtering and video stats.

Jun 4th 2019

Worked with ffmpeg to put it all together. Created one movie with a soundtrack from the trials of a single condition/stage.
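
For reference, the ffmpeg call looked roughly like this; the frame rate, filenames, and codec flags below are placeholders, not the exact ones used:

```python
# Sketch: stitch numbered spectrogram frames and a wav soundtrack into one movie.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "10",              # frames per second for the image sequence (assumed)
    "-i", "frames/frame_%04d.png",   # numbered spectrogram images (hypothetical path)
    "-i", "trial_audio.wav",         # the sonified soundtrack (hypothetical path)
    "-c:v", "libx264",               # encode video as H.264
    "-pix_fmt", "yuv420p",           # pixel format most players accept
    "-shortest",                     # stop when the shorter stream ends
    "movie.mp4",
], check=True)
```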

Week 9

May 30th 2019

Generated different versions of the audio with normal, lower, and lowest sample rates. The normal and lower versions sound good. The lowest is pitched so low that a subwoofer may be needed to hear it at all.
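
The variants come from writing the same samples under different nominal rates, which slows the playback down and shifts the pitch proportionally. A minimal sketch; the actual rates used may differ:

```python
import numpy as np
from scipy.io import wavfile

# Stand-in for the sonified signal; real data replaces this.
sig = np.random.randn(44100).astype(np.float32)
sig /= np.max(np.abs(sig))  # normalize to [-1, 1]

# Lower nominal rates play the same samples back slower and lower in pitch.
for label, rate in [("normal", 44100), ("lower", 22050), ("lowest", 4000)]:
    wavfile.write(f"audio_{label}.wav", rate, sig)
```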

May 28th 2019

Wrote a script to systematically generate the images, save them to a folder, and also generate the wav file to go with them. Played around with the idea of generating 64-channel sound. Didn't get a chance to test it out and am running out of time.
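
The script is shaped roughly like this; the paths, playback rate, and the flattened-matrix "audio" are all placeholders for the real pipeline:

```python
import os
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

FS = 8000  # assumed playback rate
os.makedirs("out", exist_ok=True)

# Stand-in (freq, time) magnitude matrices, one per trial.
trials = [np.abs(np.random.randn(40, 100)) for _ in range(3)]

for i, spec in enumerate(trials):
    # Image for the visual side.
    plt.imshow(spec, origin="lower", aspect="auto")
    plt.savefig(f"out/trial_{i:03d}.png")
    plt.close()
    # Matching wav file.
    audio = spec.flatten()
    audio = (audio / np.max(np.abs(audio))).astype(np.float32)
    wavfile.write(f"out/trial_{i:03d}.wav", FS, audio)

# Untested 64-channel idea: scipy writes one channel per column, so a
# (n_samples, 64) float32 array would give one audio channel per electrode.
```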

Week 8

May 23rd 2019

Presented the preliminary sounds to the class. Chris suggested slowing down the visual.

May 21st 2019

Generated the wav file by inverse STFT on the spectrogram. Sounds ok. Wrote a Python script to generate a single wav file for all trials of a subject.
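
The reconstruction step looks roughly like this, assuming a complex STFT matrix; a magnitude-only spectrogram would first need a phase estimate (e.g. Griffin-Lim):

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.io import wavfile

fs = 8000  # assumed output sample rate
# Stand-in complex STFT; in practice this is the EEG time-frequency matrix.
_, _, Zxx = stft(np.random.randn(fs * 2), fs=fs)

_, audio = istft(Zxx, fs=fs)  # inverse STFT back to a waveform
audio = (audio / np.max(np.abs(audio))).astype(np.float32)
wavfile.write("trial.wav", fs, audio)

# For one file per subject, np.concatenate the per-trial signals before writing.
```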

Week 7

May 16th 2019

Abandoned the single-sample idea and instead considered the time-frequency matrix as an actual spectrogram. Tried to visualize the spectrograms and they look ok.
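
The plotting itself is a one-liner plus labels; the epoch length and frequency range below are made up:

```python
import numpy as np
import matplotlib.pyplot as plt

spec = np.abs(np.random.randn(40, 200))  # stand-in (freq, time) magnitudes
plt.imshow(20 * np.log10(spec + 1e-9),   # dB scale for readability
           origin="lower", aspect="auto",
           extent=[0, 2.0, 0, 40])       # assumed 2 s epoch, 0-40 Hz
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Power (dB)")
plt.show()
```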

May 14th 2019

Created a preso for class using single samples. Link

Week 6

May 9th 2019

Started arranging some of the generated tracks. The sounds are pretty short bursts and not very meaningful.

May 7th 2019

Wrote Python scripts for extracting and sonifying single samples of the spectrogram.
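
A minimal sketch of the single-sample idea (playing one trial's raw waveform back at an audio rate); the EEG rate, trial length, and looping factor are all assumptions:

```python
import numpy as np
from scipy.io import wavfile

FS_EEG, FS_AUDIO = 250, 8000         # assumed sampling and playback rates
trial = np.random.randn(2 * FS_EEG)  # stand-in for one 2 s single-channel trial
trial = (trial / np.max(np.abs(trial))).astype(np.float32)

# Played at 8 kHz, 500 EEG samples last only ~60 ms: a short burst,
# consistent with what the arranged tracks sound like.
wavfile.write("single_sample.wav", FS_AUDIO, trial)
# Looping the trial is one way to stretch it into something audible.
wavfile.write("single_sample_looped.wav", FS_AUDIO, np.tile(trial, 20))
```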

Week 5

May 2nd 2019

Gave a lightning presentation about ML and AI in class. Link

April 30th 2019

Looked into using the single-sample waveform as sonification. The results were not great but still worth pursuing.

Week 4

April 25th 2019

Checking out some of the time-series graphs for potential pre-processing and sonification options.

April 23rd 2019

Switching to a sonification approach for 220C. While the classifiers are churning away, there might be interesting crossovers to BCI if I can start with some sonification. On an unrelated note, the new computers for Curry and BCI are here and set up in the EEG lab.

Week 3

April 18th 2019

Hand-tuned some of the CNN layers and ran the same intra/inter-subject runs. Not much improvement, but again intra-subject does better than inter-subject.

April 16th 2019

Spent some time trying to do intra- vs. inter-subject classification. As expected, intra-subject classification performs much better than inter-subject, confirming the hypothesis that EEG data is more similar across trials from a single subject than across subjects.
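
For the record, the two settings look roughly like this in scikit-learn; the arrays are stand-ins, not the real data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 64))        # stand-in features per trial
y = rng.integers(0, 3, size=180)      # 3-class labels
groups = np.repeat(np.arange(6), 30)  # 6 subjects, 30 trials each (assumed)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Intra-subject: cross-validate within a single subject's trials.
mask = groups == 0
intra = cross_val_score(clf, X[mask], y[mask], cv=5)

# Inter-subject: train on five subjects, test on the held-out one.
inter = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

print(f"intra: {intra.mean():.2f}  inter: {inter.mean():.2f}")
```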

Week 2

April 11th 2019

Results so far on traditional machine learning models (SVM, RandomForest) have been very poor and comparable to a random classifier (~35% for 3-class and ~13% for 9-class). An 18-hidden-layer CNN with max pooling gives ~20% accuracy on 9-class with PCA data and no pre-processing.
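
For scale, a toy PyTorch version of the conv + max pooling idea; this is nowhere near the actual 18-layer network, and the input shape is an assumption:

```python
import torch
import torch.nn as nn

class ToyEEGNet(nn.Module):
    """Two conv blocks with max pooling, then a linear head for 9 classes."""
    def __init__(self, n_channels=64, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

logits = ToyEEGNet()(torch.randn(8, 64, 500))  # 8 trials, 64 ch, 500 samples
print(logits.shape)  # torch.Size([8, 9])
```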

Next steps are to pre-process the PCA data to restrict it to the beta band (13-30 Hz) and tighten the time window. Once the model has been trained on this data, we can do further analysis of the hidden layers to identify salient features.
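
The filtering step would look something like this, assuming a 250 Hz sampling rate and a zero-phase Butterworth band-pass:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # assumed EEG sampling rate
b, a = butter(4, [13, 30], btype="bandpass", fs=fs)  # beta band

x = np.random.randn(2000)  # stand-in single-channel signal
beta = filtfilt(b, a, x)   # forward-backward filtering, zero phase

# Tighten the time window by slicing around the event (placeholder bounds).
beta = beta[int(0.5 * fs):int(3.5 * fs)]
```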

April 9th 2019

Presented the 220C project idea of using EEG data to generate beats. Currently it is an offline decoding/classification and synthesis project. Short preso

This work builds on the dataset from Emily Graber's PhD Dissertation.

Week 1

April 4th 2019

April 2nd 2019