The machines can now decode your brain activity to determine what music you are grooving to…
For the actual paper, and to hear the decoded sounds, please see this link.
Researchers hope brain implants will one day help people who have lost the ability to speak to get their voice back—and maybe even to sing. Now, for the first time, scientists have demonstrated that the brain’s electrical activity can be decoded and used to reconstruct music.
A new study analyzed data from 29 people who were already being monitored for epileptic seizures using postage-stamp-size arrays of electrodes placed directly on the surface of their brains. As the participants listened to Pink Floyd’s 1979 song “Another Brick in the Wall, Part 1,” the electrodes captured the electrical activity of several brain regions attuned to musical elements such as tone, rhythm, harmony and lyrics. Employing machine learning, the researchers reconstructed garbled but distinctive audio of what the participants were hearing. The study results were published on Tuesday in PLOS Biology.
Neuroscientists have worked for decades to decode what people are seeing, hearing or thinking from brain activity alone. In 2012 a team that included the new study’s senior author—cognitive neuroscientist Robert Knight of the University of California, Berkeley—became the first to successfully reconstruct audio recordings of words participants heard while wearing implanted electrodes. Others have since used similar techniques to reproduce recently viewed or imagined pictures from participants’ brain scans, including human faces and landscape photographs. But the recent PLOS Biology paper by Knight and his colleagues is the first to suggest that scientists can eavesdrop on the brain to synthesize music.
“These exciting findings build on previous work to reconstruct plain speech from brain activity,” says Shailee Jain, a neuroscientist at the University of California, San Francisco, who was not involved in the new study. “Now we’re able to really dig into the brain to unearth the sustenance of sound.”
To turn the brain activity data into musical sound, the researchers trained an artificial intelligence model to decipher data captured from thousands of electrodes that had been surgically attached to the participants, recording as they listened to the Pink Floyd song.
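For readers who like to see the nuts and bolts: the paper’s actual models are more elaborate, but this style of decoding is often framed as a regression from neural features to the song’s spectrogram. Below is a minimal, hypothetical sketch in Python, with random arrays standing in for the electrode recordings and the spectrogram, and ridge regression as one plausible decoder rather than the study’s exact method.

```python
# Toy sketch: decode a song spectrogram from electrode features.
# The arrays here are random stand-ins, not real data, and ridge
# regression is just one plausible decoder for this setup.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
neural_features = rng.standard_normal((3000, 128))  # time bins x electrodes
song_spectrogram = rng.standard_normal((3000, 32))  # time bins x mel bands

# Hold out the end of the song so reconstruction is scored on unheard audio.
X_train, X_test, y_train, y_test = train_test_split(
    neural_features, song_spectrogram, test_size=0.2, shuffle=False)

# One regularized linear map from brain activity to spectrogram energy.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
reconstructed = decoder.predict(X_test)

# Correlation between decoded and true spectrograms is a common score.
r = np.corrcoef(reconstructed.ravel(), y_test.ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```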
Why did the team choose Pink Floyd—and specifically “Another Brick in the Wall, Part 1”? “The scientific reason, which we mention in the paper, is that the song is very layered. It brings in complex chords, different instruments and diverse rhythms that make it interesting to analyze,” says Ludovic Bellier, a cognitive neuroscientist and the study’s lead author. “The less scientific reason might be that we just really like Pink Floyd.”
The AI model analyzed patterns in the brain’s response to various components of the song’s acoustic profile, picking apart changes in pitch, rhythm and tone. Then another AI model reassembled this disentangled composition to estimate the sounds that the patients heard. Once the brain data were fed through the model, the music returned. Its melody was roughly intact, and its lyrics were garbled but discernible if one knew what to listen for: “All in all, it was just a brick in the wall.”
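That last step, turning a decoded spectrogram back into something you can actually listen to, requires a phase-reconstruction or vocoding pass. Whether the authors used this exact method isn’t stated here, but Griffin–Lim inversion, as implemented in librosa, is one standard way to get a waveform out of a mel spectrogram:

```python
# Hypothetical final step: invert a decoded mel spectrogram to audio.
# Griffin-Lim is a standard phase-estimation method; the study's own
# resynthesis pipeline may differ.
import numpy as np
import librosa
import soundfile as sf

sr = 22050
# Random stand-in for a decoded mel spectrogram (mel bands x frames).
decoded_mel = np.abs(np.random.default_rng(0).standard_normal((64, 400)))

# Invert the mel filterbank, then estimate phase with Griffin-Lim.
waveform = librosa.feature.inverse.mel_to_audio(decoded_mel, sr=sr, n_iter=32)
sf.write("reconstruction.wav", waveform, sr)
```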
They could have chosen worse songs for the experiment…
Head music.