RESEARCH & ANALYSIS

Google’s Brain2Music AI decodes brain signals to recreate the music you were listening to

Google is developing an AI called ‘Brain2Music’ that uses brain imaging data to generate music. According to a research paper jointly authored by Google and Osaka University and posted on arXiv, the model can produce music that closely resembles segments of songs a person was listening to while their brain was being scanned.

The approach relies on functional magnetic resonance imaging (fMRI), a technique that tracks the flow of oxygen-rich blood in the brain to identify its most active regions. In the study, fMRI data were collected from five participants while they listened to 15-second music clips spanning genres such as classical, blues, disco, hip-hop, jazz, metal, pop, reggae, and rock.

Using the collected brain-activity data, the researchers trained a deep neural network to link patterns of brain activity to musical attributes such as rhythm and emotion. The emotional content was further categorized into moods such as tender, sad, exciting, angry, scary, and happy.
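One way to picture this decoding step is as a regularized regression from fMRI features to a music-embedding vector. The sketch below is a minimal illustration under assumed conditions, not the study’s actual pipeline: the fMRI responses are assumed to be pre-flattened into fixed-length vectors, the target is a hypothetical 128-dimensional embedding, and all array shapes and data are synthetic placeholders.

```python
# Minimal sketch: linear decoding of music embeddings from fMRI features.
# All shapes and data here are illustrative placeholders, not the study's.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_clips, n_voxels, embed_dim = 480, 6000, 128    # hypothetical sizes
X = rng.standard_normal((n_clips, n_voxels))     # fMRI response per clip
Y = rng.standard_normal((n_clips, embed_dim))    # target music embedding per clip

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Ridge regression is a common choice in the many-voxels, few-samples regime.
decoder = Ridge(alpha=1e3).fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)

# Evaluate per-dimension correlation between predicted and true embeddings.
corrs = [np.corrcoef(Y_pred[:, d], Y_test[:, d])[0, 1] for d in range(embed_dim)]
print(f"mean embedding correlation: {np.mean(corrs):.3f}")
```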

Brain2Music, which was tailored to each individual in the study, translated the brain data into a music-embedding representation that was then fed into Google’s MusicLM, an AI model that can also generate music from text descriptions; MusicLM produced the final clips. Notably, the study found a correlation between MusicLM’s internal representations and activity in specific brain regions when the AI and a human were exposed to the same music.
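The reported correspondence between MusicLM’s internal representations and brain activity is the kind of result a voxel-wise encoding analysis produces: predict each voxel’s response from the model’s embeddings, then correlate predictions with measurements on held-out clips. Again a hedged sketch with synthetic placeholder data and shapes, not the paper’s actual analysis:

```python
# Sketch of a voxel-wise encoding analysis: predict brain activity from
# model embeddings, then correlate predictions with measured activity.
# Shapes and data are illustrative, not taken from the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_train, n_test, embed_dim, n_voxels = 400, 80, 128, 6000
E_train = rng.standard_normal((n_train, embed_dim))   # MusicLM-style embeddings
E_test = rng.standard_normal((n_test, embed_dim))
V_train = rng.standard_normal((n_train, n_voxels))    # measured voxel responses
V_test = rng.standard_normal((n_test, n_voxels))

encoder = Ridge(alpha=1e2).fit(E_train, V_train)
V_pred = encoder.predict(E_test)

# Per-voxel Pearson correlation between predicted and measured responses.
V_pred_z = (V_pred - V_pred.mean(0)) / V_pred.std(0)
V_test_z = (V_test - V_test.mean(0)) / V_test.std(0)
voxel_corr = (V_pred_z * V_test_z).mean(0)
print(f"best voxel correlation: {voxel_corr.max():.3f}")
```

Voxels where this correlation is high are the regions whose activity the model’s representations track best.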

Yu Takagi, a professor of computational neuroscience and AI at Osaka University and a co-author of the study, told LiveScience that the project’s central goal was to understand how the brain processes music. The paper highlights a significant caveat, however: because each person’s brain is wired differently, a model trained on one individual may not work for another.

Practical applications of the technology remain distant, since recording fMRI signals requires lengthy sessions inside a scanner. Future work, however, may explore whether AI can reconstruct the music people merely imagine.
