« SoVND »

Sonifying & Visualizing Neural Data

Mindy Chang

Idea / Premise

Our brains manage a great deal of information, from taking in sensory perceptions to forming decisions and transforming plans into actions. Somehow this is achieved by a network of billions of interconnected neurons, which communicate with each other by sending electrical pulses called action potentials. In the lab, we can record the activity of single neurons through electrodes placed in the brain while subjects (in our case monkeys) perform specific tasks. Neural responses are often very diverse, and when trying to understand how a population of neurons might work together, simply averaging across all neurons discards information, while plotting the raw responses of every neuron quickly becomes overwhelming to interpret.

Sonification offers a complementary way to explore the data and, in a literal sense, ties in closely with the idea of listening to a dynamic conversation among neurons. During single-electrode experiments, electrophysiologists listen to the amplified output while lowering the electrode into the brain in order to estimate depth and identify neurons. Once a neuron is isolated, listening to its spike train (which sounds like pops or clicks) provides a fast and convenient way to assess, for example, how well the neuron responds to a particular stimulus in real time: one can listen to the neural activity while watching the stimulus on the screen rather than splitting attention between the screen and a plot of the activity. Beyond a few neurons, however, it becomes difficult to hear nuances within the population activity. The idea behind this project is to concurrently sonify and visualize activity from a population of neurons, both to provide an intuitive way to pick out patterns in the data and to explore different ways in which signals from the brain can be used to create music.


Vision

Ultimately, the program should provide the user with an integrated audio and visual experience: seeing the stimulus display that the monkey sees while visualizing and listening to the simultaneous responses of multiple neurons. The user should have the flexibility to choose how to map the data to sound in order to make the features they're listening for more salient. A number of data parameters can be mapped onto different features of sound and visualization; for example, spike times can trigger note onsets, spike rates can be mapped to pitch, and groups of neurons can be assigned to different instruments.
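
As a rough illustration of one such mapping (not the program's actual code), the C++ sketch below maps a firing rate onto a MIDI-style note number and then to a frequency; the rate bounds, note range, and helper names rateToMidiPitch and midiToFrequency are placeholder assumptions.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    // Illustrative mapping from a firing rate (spikes/s) to a MIDI-style note number.
    // The rate bounds and note range are placeholder choices, not the program's settings.
    int rateToMidiPitch(double rateHz, double minRate = 0.0, double maxRate = 100.0,
                        int lowNote = 48, int highNote = 84) {
        double norm = (rateHz - minRate) / (maxRate - minRate);
        norm = std::max(0.0, std::min(1.0, norm));                 // clamp to [0, 1]
        return lowNote + static_cast<int>(norm * (highNote - lowNote) + 0.5);
    }

    // Standard equal-temperament conversion from MIDI note number to frequency.
    double midiToFrequency(int midiNote) {
        return 440.0 * std::pow(2.0, (midiNote - 69) / 12.0);
    }

    int main() {
        for (double rate : {5.0, 20.0, 60.0, 95.0}) {
            int note = rateToMidiPitch(rate);
            std::printf("rate %5.1f spikes/s -> MIDI note %d (%.1f Hz)\n",
                        rate, note, midiToFrequency(note));
        }
        return 0;
    }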


Product

The current implementation provides a way to explore data offline (i.e. after the data has been collected). The user can load different combinations of single trials, average trials, or average difference trials for a particular task. The rasters are labeled and stacked as blocks on the left side of the screen. Within a raster, each row represents one neuron. For single trials, each dot represents a spike, and for the average and difference plots, spike rate is indicated by the color of the heat map. The average spike rate across all neurons is shown below the raster blocks and highlighted for the current raster.
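
To make the binning behind the heat maps and the population-average trace concrete, here is a small sketch under assumed conventions: per-neuron spike-time lists, 10 ms bins, and a black-to-red-to-yellow color ramp, none of which are necessarily what the program actually uses.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct RateColor { float r, g, b; };

    // Bin one neuron's spike times (seconds) into fixed-width bins, returning spikes/s per bin.
    std::vector<double> binRates(const std::vector<double>& spikeTimes,
                                 double trialDur, double binWidth) {
        std::vector<double> rates(static_cast<size_t>(trialDur / binWidth), 0.0);
        for (double t : spikeTimes) {
            size_t bin = static_cast<size_t>(t / binWidth);
            if (bin < rates.size()) rates[bin] += 1.0 / binWidth;   // each spike adds 1/binWidth spikes/s
        }
        return rates;
    }

    // Map a rate onto a simple black-to-red-to-yellow ramp for the heat-map color.
    RateColor rateToColor(double rateHz, double maxRate) {
        float x = static_cast<float>(std::min(rateHz / maxRate, 1.0));
        return {std::min(2.0f * x, 1.0f), std::max(2.0f * x - 1.0f, 0.0f), 0.0f};
    }

    int main() {
        // Two toy neurons over a 100 ms trial, binned at 10 ms.
        std::vector<std::vector<double>> neurons = {{0.012, 0.051, 0.055}, {0.030, 0.090}};
        double trialDur = 0.1, binWidth = 0.01;

        std::vector<double> avg(static_cast<size_t>(trialDur / binWidth), 0.0);
        for (const auto& spikes : neurons) {
            std::vector<double> r = binRates(spikes, trialDur, binWidth);
            for (size_t i = 0; i < r.size(); ++i) avg[i] += r[i] / neurons.size();  // population average
        }
        for (size_t i = 0; i < avg.size(); ++i) {
            RateColor c = rateToColor(avg[i], 100.0);
            std::printf("bin %2zu: %6.1f spikes/s  color (%.2f, %.2f, %.2f)\n",
                        i, avg[i], c.r, c.g, c.b);
        }
        return 0;
    }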

A green bar sweeps across the rasters to mark the current time, and the task screen updates to show what the monkey is viewing at that moment. For single-trial rasters, the dots representing spikes are enlarged at the current time. The user can click anywhere on any raster to change the current time and use the keyboard to change certain parameters of the sound, such as the data-to-sound mapping, playback speed, and data integration time. To change the instrumentation, the user must manually swap out ChucK scripts. The labeled screenshot below shows an example with two single-trial rasters, two average-trial rasters, and one difference-trial raster.
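
The kind of interactive playback state this implies can be sketched as below; the key bindings, step sizes, and field names are placeholder assumptions rather than the program's documented controls.

    #include <cstdio>
    #include <initializer_list>

    // Placeholder playback state; the real program's parameter names and bindings may differ.
    struct PlaybackParams {
        double speed = 1.0;            // playback speed multiplier
        double integrationMs = 10.0;   // data integration window used by the sound mapping
        int mappingMode = 0;           // which data-to-sound mapping is active
    };

    // Update the state in response to a key press (hypothetical bindings).
    void handleKey(char key, PlaybackParams& p) {
        switch (key) {
            case '+': p.speed *= 2.0; break;                          // faster playback
            case '-': p.speed *= 0.5; break;                          // slower playback
            case ']': p.integrationMs += 10.0; break;                 // longer integration window
            case '[': if (p.integrationMs > 10.0) p.integrationMs -= 10.0; break;
            case 'm': p.mappingMode = (p.mappingMode + 1) % 3; break; // cycle through mappings
        }
    }

    int main() {
        PlaybackParams p;
        for (char k : {'+', ']', 'm'}) handleKey(k, p);
        std::printf("speed %.1fx, integration %.0f ms, mapping %d\n",
                    p.speed, p.integrationMs, p.mappingMode);
        return 0;
    }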


Examples

Background

Visual attention plays a crucial role in how we experience the world. Even though it may seem like we can see everything around us, only a limited amount of information is actually selected for detailed processing. Since we have the highest visual acuity at the center of gaze, our eyes constantly scan to bring different visual information into focus. We can also attend to peripheral locations while keeping our eyes fixed, for example keeping our eyes on the road while monitoring the surroundings as we drive. In the brain, one hypothesis is that the same mechanisms used for controlling eye movements are also used for allocating attention.

Covert Spatial Attention Task

Monkeys were trained to direct and sustain attention at a peripheral location without making eye movements. The monkey must maintain fixation at the center of the screen and use a lever to indicate whether one grating embedded among five distractors changes orientation across two flashes. A spatial cue is given early in the trial, and in order to detect the grating change, the monkey needs to direct attention to the cued location.

We recorded from neurons in the frontal eye field (FEF), an oculomotor area known to play a role in controlling eye movements. The FEF contains a spectrum of visual to (eye) movement responsive cells, which form a map of visual space. For a given neuron, the particular region of visual space to which it responds is called its response field (RF). We can find the RFs of individual neurons by electrically stimulating at the recording site, which causes the monkey to make a stereotyped eye movement (saccade) towards one part of visual space. We are interested in the neural responses when the monkey is attending to the RF of the recorded neurons compared to when the monkey is attending elsewhere. Within this task, individual neurons show vastly different response profiles even though the monkey does not make any eye movements. As a population, the spike rates of these neurons encode whether the monkey is paying attention to a particular region of visual space throughout the duration of each trial. Here, the neurons are grouped so that their RFs are in the lower left corner of the screen.

Mappings:

  1. average spike rate of all neurons to pitch
  2. each neuron assigned a pitch, which is played each time the neuron spikes (see the sketch below)
  3. average spike rate of each neuron across multiple trials, sampled every 10 ms and mapped to pitch with neurons grouped into 2 sets of instruments based on visual responsiveness
Features to listen for:
  • visual responses to stimuli presented in the response field
  • sustained activity when the monkey is attending to the RF location, even when the screen is blank
  • enhanced visual responses to the grating in the RF when it is attended vs. not attended
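
The sketch below illustrates mapping 2: each neuron is assigned a fixed pitch, and a note event is generated at every spike time. The spike times, pitch assignments, and event structure are invented for illustration; in the actual system the synthesis is handled by ChucK.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // A note event to be handed to the synthesizer (hypothetical structure).
    struct NoteEvent { double time; int neuron; double freqHz; };

    int main() {
        // Spike times (seconds) for three made-up neurons.
        std::vector<std::vector<double>> spikes = {
            {0.02, 0.05, 0.21}, {0.04, 0.06, 0.07, 0.30}, {0.15, 0.16}};

        std::vector<NoteEvent> events;
        for (size_t n = 0; n < spikes.size(); ++n) {
            int midi = 60 + static_cast<int>(n) * 4;                 // fixed pitch per neuron
            double freq = 440.0 * std::pow(2.0, (midi - 69) / 12.0); // MIDI note -> Hz
            for (double t : spikes[n]) events.push_back({t, static_cast<int>(n), freq});
        }
        std::sort(events.begin(), events.end(),
                  [](const NoteEvent& a, const NoteEvent& b) { return a.time < b.time; });

        for (const auto& e : events)
            std::printf("t = %.3f s  neuron %d  ->  %.1f Hz\n", e.time, e.neuron, e.freqHz);
        return 0;
    }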

Sensory-guided decision making and motor preparation task

In this task, the monkey is shown two targets in the periphery, and a random dot pattern appears in the center of the screen for 800 ms. The monkey must determine whether the dots are generally moving towards one target or the other and then shift its eyes to that target later in the trial. On different trials, the strength of the motion signal towards one target or the other is varied from 0 to 40%. Once the fixation spot turns off, the monkey can move its eyes to the chosen target. Neurons were recorded from an area of the brain near or overlapping with the FEF. Below, trials are grouped according to the motion signal strength and the monkey's target choice. The greater the motion coherence of the dots, the more information the monkey (and the neurons) has for deciding on and planning the upcoming eye movement.

Mappings:

  1. average spike rate of each neuron across multiple trials, sampled every 10 ms and mapped to pitch with neurons grouped into 4 sets of instruments based on motion selectivity
  2. average difference between spike rates of each neuron for T1 choices minus T2 choices, sampled every 10 ms and mapped to pitch with neurons grouped into 4 sets of instruments based on motion selectivity (see the sketch below)
Features to listen for:
  • activity builds up as the monkey decides on T1 (in or near the RFs of the neurons) and prepares to move its eyes there
  • conversely, activity is suppressed as the monkey decides on T2 (away from the neurons' RFs)
  • earlier and stronger signals in the neurons as the strength of the motion signal increases
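
The sketch below illustrates mapping 2 for this task: the trial-averaged T1-minus-T2 rate difference of each neuron, sampled every 10 ms, is mapped to a pitch above or below a center note, with a group index standing in for the instrument set chosen by motion selectivity. The traces, pitch range, and group labels are made-up placeholders.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Map a rate difference (spikes/s) to a pitch above or below a center note.
    // The scaling and note range are illustrative choices.
    int diffToMidi(double diffHz, double maxAbsDiff = 30.0, int centerNote = 66, int span = 18) {
        double norm = std::max(-1.0, std::min(1.0, diffHz / maxAbsDiff));  // clamp to [-1, 1]
        return centerNote + static_cast<int>(norm * span);
    }

    int main() {
        // One 10 ms-sampled T1-minus-T2 difference trace per neuron (toy values),
        // plus a group index standing in for the instrument set (motion selectivity).
        std::vector<std::vector<double>> diff = {{2.0, 8.0, 15.0, 22.0}, {-1.0, -6.0, -12.0, -20.0}};
        std::vector<int> group = {0, 3};

        for (size_t n = 0; n < diff.size(); ++n)
            for (size_t i = 0; i < diff[n].size(); ++i)
                std::printf("t = %3zu ms  neuron %zu (group %d)  diff %6.1f spikes/s -> MIDI %d\n",
                            i * 10, n, group[n], diff[n][i], diffToMidi(diff[n][i]));
        return 0;
    }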


System Design

The system lets the user load neural spike train data along with information about each trial and then interact with a graphical interface to choose how to sonify and visualize the data. The software processes the data and then maps different features of the processed data to sound and visualization parameters. Some considerations when processing the data include whether and how to window the discrete time series of spikes, whether to normalize the firing rates of different neurons, and whether to assign groupings/categories to different types of neurons. The graphical display is rendered using OpenGL/GLUT; sound is synthesized using ChucK, with timing controlled by RtAudio.
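
To make the windowing and normalization choices concrete, here is a minimal sketch assuming 1 ms bins, a trailing boxcar window, and per-neuron peak normalization; the window length and normalization scheme are examples rather than the program's fixed settings.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Smooth a binary spike train (1 ms bins) with a trailing boxcar window, returning spikes/s.
    std::vector<double> boxcarRate(const std::vector<int>& spikeBins, int windowBins) {
        std::vector<double> rate(spikeBins.size(), 0.0);
        int count = 0;
        for (size_t i = 0; i < spikeBins.size(); ++i) {
            count += spikeBins[i];
            if (i >= static_cast<size_t>(windowBins)) count -= spikeBins[i - windowBins];
            int used = std::min<int>(windowBins, static_cast<int>(i) + 1);  // shorter window at trial start
            rate[i] = 1000.0 * count / used;                                 // spikes/s with 1 ms bins
        }
        return rate;
    }

    // Normalize one neuron's rate trace to its own peak so neurons share a common [0, 1] range.
    void normalizeToPeak(std::vector<double>& rate) {
        double peak = *std::max_element(rate.begin(), rate.end());
        if (peak > 0.0)
            for (double& r : rate) r /= peak;
    }

    int main() {
        std::vector<int> spikes = {0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1};  // toy 1 ms spike train
        std::vector<double> rate = boxcarRate(spikes, 5);                // 5 ms boxcar window
        normalizeToPeak(rate);
        for (size_t i = 0; i < rate.size(); ++i) std::printf("%2zu ms: %.2f\n", i, rate[i]);
        return 0;
    }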

Usage

Milestones

1. Create a sample data set from existing data; prototype potential sound schemes in Matlab/ChucK
2. Implement a minimal essential system that includes at least one sound and visualization mode for the example data set
3. Include interactive components for manipulating the data / switching modes