
    The 15-second audio clip sounds like a muddy version of a Pink Floyd song, as if someone were singing it underwater. There’s no denying that the words and rhythms stem from “Another Brick in the Wall, Part 1” — the first in a song trilogy on the band’s famous 1979 album “The Wall.”

    Except Pink Floyd didn’t perform any of the music in the clip. Instead, the track was crafted by a team of researchers at the University of California at Berkeley, who looked at the brain activity of more than two dozen people who listened to the song. That data was then decoded by a machine learning model and reconstructed into audio — marking the first time researchers have been able to re-create a song from neural signals.
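
    The article doesn’t spell out the model’s details, but the approach it describes is stimulus reconstruction: a regression model learns to predict the song’s audio features from recorded brain activity. A minimal sketch of that idea, assuming synthetic placeholder data, time-lagged electrode features and a plain ridge regression (none of which are confirmed specifics of the paper):

    ```python
    # Minimal stimulus-reconstruction sketch: predict each time-frequency bin of
    # the song's spectrogram from time-lagged electrode activity. All data and
    # shapes here are illustrative placeholders, not the study's.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_times, n_electrodes, n_freq_bins, n_lags = 5000, 92, 32, 10
    neural = rng.standard_normal((n_times, n_electrodes))      # e.g. high-gamma power
    spectrogram = rng.standard_normal((n_times, n_freq_bins))  # target audio features

    # Stack lagged copies of the neural data so each predicted frame "sees" a
    # short preceding window of activity (edge wrap-around ignored for brevity).
    lagged = np.concatenate(
        [np.roll(neural, lag, axis=0) for lag in range(n_lags)], axis=1
    )

    train, test = slice(0, 4000), slice(4000, None)
    model = Ridge(alpha=1.0).fit(lagged[train], spectrogram[train])
    predicted = model.predict(lagged[test])  # reconstructed spectrogram frames
    ```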

    Scientists attempted to reconstruct Pink Floyd’s “Another Brick in the Wall, Part 1” by recording the neural activity of 29 patients who listened to it. (Video: Bellier et al., 2023, PLOS Biology)

    By doing so, the scientists hope to someday use similar technology to help patients with speech impediments communicate with others — while also gathering more data about how the brain processes sound and music, according to a study published Tuesday in the scientific journal PLOS Biology.

    “Music adds the more emotional and melodic elements of speech,” said Robert Knight, a neuroscientist at UC-Berkeley and an author of the study. “So understanding how it’s processed in the brain and how we can decode it is really like the first brick in the wall.”

    The process that turned brain waves into an eerily similar version of a Pink Floyd song began in 2009 inside a hospital in Albany, N.Y. There, 29 patients who were undergoing epilepsy treatment — namely, having a net of electrodes implanted in their brains to identify the location of drug-resistant seizures — volunteered to have their brain activity recorded while “Another Brick in the Wall, Part 1” played.

    The seemingly mundane activity of listening to music is actually a complex process. Sounds, or rather vibrations moving through the air as waves, enter the inner ear, where they’re turned into an electrical signal that’s sent to the brain, an organ that runs on electricity. Once the sound reaches the brain, neurons fire across different regions to decode each lyric, melody and rhythm.

    The electrodes connected to the patients’ brains gave the scientists insight into that process, said Knight, who likened the wires to “piano keys.”

    “We’re trying to decode how that piano key is activated by the sound that comes in, in this case from Pink Floyd,” he said. “So now we have all these electrodes — about 92 per research subject — and we take all that data to know how each musical note or how the rhythm in the Pink Floyd song is affected in each of these electrodes.”
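
    Knight’s “piano keys” description corresponds to what is often called an encoding analysis: for each electrode, fit a model that predicts its activity from the song’s spectral features, then score how well the prediction tracks the real response. A hedged sketch along those lines, again with placeholder data and a simple ridge model rather than the study’s actual pipeline:

    ```python
    # Per-electrode "encoding" sketch: how well do the song's spectral features
    # predict each electrode's response? Placeholder data throughout; the study's
    # actual features, preprocessing and models are not described in the article.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_times, n_electrodes, n_freq_bins = 5000, 92, 32
    song_features = rng.standard_normal((n_times, n_freq_bins))  # spectrogram frames
    neural = rng.standard_normal((n_times, n_electrodes))        # electrode activity

    train, test = slice(0, 4000), slice(4000, None)
    scores = np.empty(n_electrodes)
    for e in range(n_electrodes):
        model = Ridge(alpha=1.0).fit(song_features[train], neural[train, e])
        scores[e], _ = pearsonr(model.predict(song_features[test]), neural[test, e])

    # Electrodes whose activity the music predicts best are the ones that track
    # the song most closely; ranking them points to the most informative sites.
    top_electrodes = np.argsort(scores)[::-1][:10]
    ```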

    The decision to use a Pink Floyd song was simple, Knight said: every patient in the study liked Pink Floyd. And they chose “Another Brick in the Wall, Part 1” — instead of the better-known Part 2 — because it’s “a little bit richer in vocals and harmonics,” he added.

    From 2009 to 2015, changes in the 29 patients’ brain activity were converted into a massive data set. But the project was put on hold for nearly a decade — until a new postdoctoral researcher who had played in a band joined Knight’s team and offered to decode it.

    Ludovic Bellier, a lifelong musician who is now a senior computational research scientist at the biotech company Inscopix, said he was eager to lead a project that merged his two passions. As he began analyzing the mounds of data, he said he found something “absolutely fascinating”: When the 16th notes of the song’s guitar rhythm played, specific parts of the patients’ temporal lobes fired up.

    “That had never been seen before,” Bellier said.

    They homed in on a region of the brain located just above and behind the ear that is largely responsible for processing sound, known as the superior temporal gyrus. That area — especially on the right side — was particularly active as patients listened to the Pink Floyd song, which could indicate that this region is responsible for the perception of rhythm, Bellier said.

    “If you have just a few electrodes, you should put them there,” he added. “That’s the most promising region to make sense of the musical information.”

    Then the numbers Bellier crunched were turned back into music using a machine-learning model that took into account how the brain responded to combinations of sound frequencies. The patterns were converted into a spectrogram, or a visual representation of a sound’s frequencies and how they change over time, and then into a sound file that turned out to be closely reminiscent of the original Pink Floyd song.
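
    The article doesn’t say how the spectrogram was converted back into audio. One standard technique for inverting a magnitude spectrogram is Griffin-Lim phase estimation, sketched here with librosa on a synthetic tone standing in for a model’s predicted spectrogram:

    ```python
    # One standard route from a (predicted) magnitude spectrogram back to audio:
    # iterative Griffin-Lim phase estimation. A synthetic tone stands in for the
    # model's output here; nothing below reflects the study's actual settings.
    import numpy as np
    import librosa

    sr = 22050
    t = np.arange(sr * 2) / sr
    y = np.sin(2 * np.pi * 220 * t)  # placeholder 2-second 220 Hz tone

    S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))   # magnitude spectrogram
    y_hat = librosa.griffinlim(S, n_iter=64, hop_length=256)  # estimate phase, invert

    # y_hat approximates the original waveform; run on a spectrogram predicted
    # from brain activity, this inversion step is what yields the audible,
    # "muddy" reconstruction described above.
    ```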

    The research could lead to new medical treatments for those who have lost their ability to communicate, Knight and Bellier said. While scientists have made strides in developing machines that translate brain signals into vocals, speech-generating devices often have a robotic sound to them — and the new study could help change that, Knight said.

    “Music, with its complex and strong emotional and rhythmic elements, would allow us to add that expressiveness,” he said.

    The study, Bellier added, also opens up possibilities of composing music through thought and paves the way for medical applications like “a keyboard for the mind” — or a machine that might help decode the words patients want to say.

    “There’s potentially many clinical applications of understanding music besides the fact that, you know, it is cool to do it,” Knight said.

    The song selection, he added, appeared to be the “right choice” given the interest the research has garnered since it was published.

    But maybe next time, the study should be re-created with a song by Taylor Swift, he joked.
