The algorithm allows scientists to “read” your mind by deciphering brain scans and could help people who can’t speak communicate with the world.
- The system extracted data from three parts of the brain associated with natural language.
- The model reconstructs the stimuli a person hears or imagines as natural language.
- This allows the system to produce a rough text version of a person’s thoughts.
Scientists can now “read” your mind using an AI-powered model specifically designed to decipher brain scans.
The non-invasive breakthrough, developed at the University of Texas, could for the first time help those who cannot speak or type. Moreover, the method decodes language in real time.
The method works by feeding functional magnetic resonance imaging (fMRI) data into an algorithm, which reconstructs the stimuli a person hears or imagines as natural language.
For example, subjects listened to stories being told while scientists scanned areas of the brain associated with natural language and fed the scans to an AI-powered decoder that returned a summary of what the person was listening to.
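As a rough illustration of this kind of decoding (not the team's actual code: the embedding, the similarity measure, and the candidate texts below are all invented for the sketch), a decoder can compare the brain response *predicted* for each candidate text against the scan actually observed, and return whichever candidate matches best:

```python
import zlib
import numpy as np

def text_features(text, dim=64):
    # Toy stand-in for a language model's semantic embedding:
    # each word deterministically seeds a small random vector,
    # and the sentence is their normalized sum.
    vec = np.zeros(dim)
    for word in text.lower().split():
        rng = np.random.default_rng(zlib.crc32(word.encode()))
        vec += rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)

def predict_brain_response(weights, text):
    # Encoding model: map text features to a predicted fMRI pattern.
    return weights @ text_features(text)

def decode(observed_scan, candidates, weights):
    # Pick the candidate whose predicted brain response is most
    # similar (by cosine) to the observed scan.
    def score(c):
        pred = predict_brain_response(weights, c)
        return pred @ observed_scan / (
            np.linalg.norm(pred) * np.linalg.norm(observed_scan))
    return max(candidates, key=score)
```

The real system searches over continuous word sequences proposed by a language model rather than a fixed candidate list, but the comparison step — predicted brain activity versus measured brain activity — is the core idea.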
Until now, this process has been carried out only by implanting electrodes into the brain.
Rather than literally deciphering what a person is thinking, the new model produces a gist, or summary, of their thoughts by analyzing the scanned images.
This makes it the first non-invasive method for reading brain signals.
Our brains break complex thoughts down into smaller pieces, each corresponding to a different aspect of the whole thought, Popular Mechanics reports.
Thoughts can be as simple as a single word, like “dog”, or as complex as “I should go walk the dog”.
The brain also has its own alphabet, made up of 42 different elements, each referring to a particular concept such as size, color, or location; the brain combines them all to form our complex thoughts.
Each “letter” is processed by a separate part of the brain, so by combining readouts from all the different parts, you can read a person’s thoughts.
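One way to picture this “alphabet” idea (a hypothetical illustration only: the feature names and numbers below are invented, and only three of the article's 42 elements are shown for brevity) is as a feature vector per concept, with a complex thought formed by combining its concepts' vectors:

```python
import numpy as np

# Three stand-ins for the 42 semantic "letters" the article describes.
FEATURES = ["size", "color", "location"]

# Invented feature profiles for two concepts.
concepts = {
    "dog":  {"size": 0.4, "color": 0.7, "location": 0.0},
    "park": {"size": 0.9, "color": 0.3, "location": 1.0},
}

def thought_vector(words):
    # A "thought" combines the feature profiles of its concepts
    # by summing them element-wise.
    vec = np.zeros(len(FEATURES))
    for w in words:
        vec += np.array([concepts[w][f] for f in FEATURES])
    return vec

# The thought "dog in the park" becomes one combined feature profile.
profile = thought_vector(["dog", "park"])
```

In this picture, each brain region contributes the reading for one “letter”, and decoding means recovering the combined profile from all regions at once.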
Although the system cannot decipher verbatim what a person is thinking, it creates a representation of the thought.
The system can also describe what a person saw in pictures while in the MRI machine.
The team did this by recording fMRI data from three parts of the brain that are associated with natural language while a small group of people listened to 16 hours of podcasts.
Three areas of the brain were analyzed, New Scientist reports: the prefrontal network, the classical language network, and the parietal-temporal-occipital associative network.
The algorithm was then trained on the scans to match patterns in the audio with patterns in the recorded brain activity, The Scientist reports.
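A minimal sketch of what that training step could look like, assuming a simple ridge-regression encoding model on synthetic data (the real study's stimulus features and fitting procedure are far richer; every number here is invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: story/audio features and fMRI voxel responses.
n_samples, n_feat, n_voxels = 500, 32, 80
stim = rng.standard_normal((n_samples, n_feat))
true_w = rng.standard_normal((n_feat, n_voxels))
brain = stim @ true_w + 0.1 * rng.standard_normal((n_samples, n_voxels))

# Ridge regression: W = (X^T X + alpha*I)^-1 X^T Y maps stimulus
# features to brain activity, learning which patterns co-occur.
alpha = 1.0
w_hat = np.linalg.solve(stim.T @ stim + alpha * np.eye(n_feat),
                        stim.T @ brain)

# With enough data, the learned mapping recovers the true one.
corr = np.corrcoef(w_hat.ravel(), true_w.ravel())[0, 1]
```

Once such a stimulus-to-brain mapping is learned, it can be run "in reverse" at test time: propose texts, predict the brain activity each would evoke, and keep the text whose prediction best matches the scan.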
The system was then able to take a recorded scan and convert it into a story, which the team found matched the gist of the stories being told.
While the algorithm cannot make out every “word” in a person’s mind, it can decipher the gist of the story each person heard.
A preprint of the study posted on bioRxiv gives an example. The original story: “Look for a message from my wife saying she’s changed her mind and that she’ll be back.”
The algorithm decoded it like this: “When I saw her, for some reason I thought that maybe she would come up to me and say that she misses me.”
The system cannot reproduce verbatim what a person thinks, but it can convey the gist of their thoughts.