
Anything useful in our heads?

Can artificial intelligence (AI) help us understand how the brain understands language? Or will studying how we perceive language push AI development forward?

Recent research by Alexander Huth and Shailee Jain from The University of Texas at Austin suggests it works both ways. These two geeks published a study in which they tested how accurately artificial intelligence algorithms can predict how our brains react to all kinds of crock of shit.

In a nutshell: when you hear something, your brain recognizes each word and connects it to an idea or concept, and that's how you understand the bullshit somebody's feeding you. But words are tricky, and many of them have multiple meanings. In those cases your brain doesn't listen to that word alone, but to everything that came before it: the context.

So, building on this fairly obvious idea, Huth and Jain used a recurrent neural network called long short-term memory (LSTM). This network calculates how each word relates to everything said before it, which is exactly how it picks up context. But first they needed some data to feed to this devilish machine.
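The article doesn't show the authors' code, but the core LSTM idea fits in a few lines of PyTorch. Everything below (vocabulary size, layer sizes, even the class name) is a made-up placeholder; the point is just that an LSTM hands each word a vector that also remembers what came before it.

```python
# Minimal sketch (not the authors' code): an LSTM that turns a word
# sequence into context-aware vectors, one per word.
import torch
import torch.nn as nn

VOCAB_SIZE = 10_000   # hypothetical vocabulary size
EMBED_DIM = 256       # hypothetical word-vector size
HIDDEN_DIM = 512      # hypothetical context-vector size

class ContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) integer word indices
        vectors = self.embed(word_ids)
        # Each output vector summarizes a word *and* everything before it.
        outputs, _ = self.lstm(vectors)
        return outputs  # shape: (batch, seq_len, HIDDEN_DIM)

encoder = ContextEncoder()
story = torch.randint(0, VOCAB_SIZE, (1, 20))  # a fake 20-word "story"
print(encoder(story).shape)  # torch.Size([1, 20, 512])
```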

To get that data, they put some people into an fMRI scanner (functional magnetic resonance imaging) and read a story to them. The device registered how different areas of the brain activated when different concepts were mentioned. Basically, they could see where language concepts are “placed” in the brain. Cool, huh?
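What does that recording actually look like? Roughly like this. The shapes, the scan interval, and the word timings below are all invented for illustration, not the study's real numbers:

```python
# Rough sketch of the kind of data an fMRI session produces: a
# time-by-voxel matrix of brain activity, plus the moment each word
# of the story was heard. All numbers here are made up.
import numpy as np

n_scans, n_voxels = 300, 50_000
bold = np.random.randn(n_scans, n_voxels)  # stand-in for real BOLD data

word_onsets = np.array([0.5, 0.9, 1.1, 1.4, 1.9])  # seconds into the story

# fMRI takes one snapshot every TR seconds, so match each word to the
# snapshot during which it was heard; that pairing is what lets you
# ask "which voxels care about which concepts?"
TR = 2.0  # a common scan interval, assumed here
word_to_scan = (word_onsets // TR).astype(int)
print(word_to_scan)  # [0 0 0 0 0]
```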

Those masses of data were then analyzed on powerful supercomputers at the Texas Advanced Computing Center (TACC). The aim was to use the collected data, together with the LSTM method, to train a language model. The model learned to predict which word should come next, just like your phone suggests “alone” when you type “forever”, but better.
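Next-word prediction is a standard objective, so here's a hedged toy version of one training step in PyTorch, with made-up sizes and random tokens standing in for a real text corpus: show the model words 1 through n, grade it on word n+1.

```python
# Toy next-word-prediction training step (placeholder sizes, random
# "text"; not the study's actual model or hyperparameters).
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 10_000, 256, 512

class NextWordModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, word_ids):
        out, _ = self.lstm(self.embed(word_ids))
        return self.head(out)  # a score for every candidate next word

model = NextWordModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, VOCAB_SIZE, (8, 21))   # fake training batch
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict the next word

loss = loss_fn(model(inputs).reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()   # this is where "forever" -> "alone" gets learned
optimizer.step()
```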

Using the scan data, the system could then predict how the brain would react to words while listening to a new story. The research showed that roughly 20 preceding words formed a solid basis for the prediction. So the language model learned to rely on context as much as our brains do: the more context, the more accurate the prediction.
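The glue between the language model and the scans is a linear "encoding model": a regression from the LSTM's context vectors to each voxel's response. The sketch below uses scikit-learn's ridge regression and random stand-in data; the study's actual pipeline is fancier, but the shape of the idea is the same.

```python
# Hedged sketch of an encoding model: fit a linear map from context
# vectors to fMRI voxel responses, then score it on a held-out story.
# All data here is random noise standing in for the real thing.
import numpy as np
from sklearn.linear_model import Ridge

n_train, n_test, n_features, n_voxels = 1000, 200, 512, 500
rng = np.random.default_rng(0)

X_train = rng.standard_normal((n_train, n_features))  # LSTM context vectors
Y_train = rng.standard_normal((n_train, n_voxels))    # matching fMRI scans
X_test = rng.standard_normal((n_test, n_features))    # new story, features
Y_test = rng.standard_normal((n_test, n_voxels))      # new story, scans

encoder = Ridge(alpha=100.0).fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)

# Score each voxel by correlating predicted and actual responses.
# Rebuilding X from longer contexts (up to ~20 words) and rerunning
# this is how "more context, better prediction" shows up.
scores = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
                   for v in range(n_voxels)])
print(f"mean voxel correlation: {scores.mean():.3f}")
```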

That alone is a good thing to know about how language prediction works. But those bastards went even further: they wanted to know which brain areas were more sensitive to the amount of context provided. God knows how this idea came to them, but the results were just as cool.

Huth and Jain discovered that simpler concepts, which don't need much supporting context, were localized in specific areas, while concepts that depend on lots of context activated brain areas responsible for higher-level thinking. Huth underlined that this shows a correspondence between the hierarchy of the artificial network and the hierarchy of the brain.
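If you wonder how "sensitive to context" gets measured, the usual move is to score every voxel twice, once with a short-context model and once with a long-context model, and see which voxels gain. A toy version with fabricated scores (the real analysis compared fitted encoding models like the one above):

```python
# Toy per-voxel context-sensitivity analysis: voxels whose prediction
# improves a lot when the model sees more context are the
# "context-hungry", higher-level areas. Scores here are made up.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500
score_short = rng.uniform(0.0, 0.4, n_voxels)                 # ~1-word context
score_long = score_short + rng.uniform(-0.05, 0.2, n_voxels)  # ~20-word context

context_gain = score_long - score_short
context_hungry = np.flatnonzero(context_gain > 0.1)
print(f"{context_hungry.size} of {n_voxels} voxels benefit strongly from context")
```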

The results made Huth and Jain think it might be easier to build a system that predicts brain responses directly, rather than going through a language-prediction model first. Investigating how the brain responds to language using AI can help us better understand how the brain works, and that knowledge can then feed back into developing more advanced artificial intelligence. It seems like human-like, thinking robots are coming our way.
