When you measure brain activity with an EEG machine, the result is an unreadable stream of parallel squiggles, at least to the untrained eye. Even for experts, it can be nearly impossible to tell which activity spikes are “normal” and which ones are a seizure.
By “visualizing” brain activity in a completely different way, a couple of Stanford scientists have just made it easier for non-scientists to see (or hear) brain seizures. Their tool, a “brain stethoscope,” translates neural activity into music to help caregivers understand what’s happening inside a patient’s head.
People have turned EEGs into music before, but never for practical care-based reasons. Researchers at the University of Electronic Science and Technology of China combined EEGs with fMRI readings to create “brain soundtracks,” which (while entertaining) weren’t useful for medical staff. The Stanford scientists, on the other hand, expressly set out to map the translated music to the various stages of patient seizures (preictal, ictal, and postictal).
The idea hit Josef Parvizi, a neurologist at Stanford Medical Center, while he was listening to a quartet based on spaceborne radio signals; he wondered what the myriad neural signals in the brain might sound like. Parvizi gave a willing patient’s EEG to Chris Chafe, a professor of music research and an expert in the “musicification” of natural signals, who translated the spikes of brain activity into human-sounding tones.
As Parvizi notes in the YouTube comments, the patient in the above recording is sitting quietly in bed and is not having a convulsive seizure, so spatial and temporal confusion are the only external indicators that a seizure is happening. After translating the EEG to audio, Parvizi was easily able to discern the patient’s seizure progression:
“Around 0:20, the patient’s seizure starts in the right hemisphere, and the patient is talking and acting normally. Around 1:50, the left hemisphere starts seizing while the right is in a postictal state. The patient is mute and confused. At 2:20 both hemispheres are in the postictal state. Patient is looking around, still confused, trying to pick at things, and get out of bed.”
In other words, it’s pretty easy to tell when the noises shift into different patterns, which could lead to a practical device for monitoring brain activity. Parvizi’s setup is somewhat invasive, with over 100 electrodes attached directly to the brain, whereas other electrode readers simply scan the scalp. Chafe then chose specific electrode-neuron pairings and keyed them to a female singer’s voice; increased neuronal activity raised the pitch and inflection of the voice.
The voices are carefully arranged spatially around the brain, so it’s easy to differentiate activity between the hemispheres, which is useful for tracking the origin of seizure activity. As a result, their “brain stethoscope” is a simple translation software package that subtly relies on a third processor, your brain, to determine when those voices are clearly out of whack.
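Chafe’s actual synthesis code hasn’t been published alongside this article, but the core mapping it describes, scaling each electrode’s activity level into a vocal pitch range and panning channels by hemisphere, can be sketched roughly. Everything below (function names, the pitch range, the windowing) is an illustrative assumption, not the real brain-stethoscope implementation:

```python
import math

def amplitude_envelope(signal, window=32):
    """Crude activity measure: RMS amplitude over fixed-size windows."""
    env = []
    for i in range(0, len(signal), window):
        chunk = signal[i:i + window]
        env.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    return env

def envelope_to_pitch(env, low_hz=220.0, high_hz=880.0):
    """Map normalized activity into a singer-like pitch range:
    more neuronal activity -> higher pitch, as the article describes."""
    peak = max(env) or 1.0
    return [low_hz + (e / peak) * (high_hz - low_hz) for e in env]

def pan_for_channel(hemisphere):
    """Stereo placement so a listener can localize the hemisphere:
    left-hemisphere channels to the left ear, right to the right."""
    return -1.0 if hemisphere == "left" else 1.0

# Toy example: a burst of activity mid-recording drives the pitch up.
signal = [0.1] * 64 + [1.0] * 64 + [0.1] * 64
pitches = envelope_to_pitch(amplitude_envelope(signal))
```

In a real system these pitch and pan values would drive a vocal synthesizer in real time, one voice per selected electrode; the sketch only shows how a raw signal could become a rising and falling melodic line.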
Chafe and Parvizi are now developing a real-time EEG-to-music device, funded in part by Stanford’s Bio-X Interdisciplinary Initiatives Program, and hope to have a version ready next year for display at Stanford’s Cantor Arts Center. The prototype will use a non-invasive electrode headset to transmit EEGs to a handheld device, which will instantly translate the brain activity into music.