The result suggests that the A.I. decoder was capturing not just words but also meaning. “Language perception is an externally driven process, while imagination is an active internal process,” Dr. Nishimoto said. “And the authors showed that the brain uses common representations across these processes.”
Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the research, said that was “the high-level question.”
“Can we decode meaning from the brain?” she continued. “In some ways they show that, yes, we can.”
This language-decoding method had limitations, Dr. Huth and his colleagues noted. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals. When the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed, suggesting that every brain has unique ways of representing meaning.
Participants were also able to shield their internal monologues, throwing off the decoder by thinking of other things. A.I. might be able to read our minds, but for now it will have to read them one at a time, and with our permission.