AI Neural Network Displays Human-Like Capacity in Language


In a scientific breakthrough, researchers have created an AI neural network that matches human performance at learning language.

In a discovery that’s nothing short of a breakthrough, researchers Brenden Lake (New York University) and Marco Baroni (ICREA) have published a remarkable study in the journal Nature. In their experiments, they created an artificial neural network that can add new words to its lexicon and generalize them with a human-like flexibility.

Background: Language is one of the most important tools humans have; it’s also one of the things that sets us apart from other animals. At the heart of our capacity for language is “systematic compositionality.” In simple terms, this is the brain’s ability to learn a new word and immediately use it in fresh combinations with the words we already know; it is how a language grows and spreads. It is also an important marker of intelligence, and artificial neural networks have famously lacked it, leaning on whatever they were trained on rather than “learning” the way a human mind does.

Here are the links to the paper and to the accompanying article Nature.com published about this study.

The approach they went with is called Meta-Learning for Compositionality, or MLC. Instead of training a network once on a fixed dataset, MLC optimizes it across a stream of small, ever-changing word-learning tasks, so the network practices the skill of generalizing from a few examples. MLC is the only approach, the researchers note, that achieves both the systematicity and the flexibility needed for machine-learning systems to make human-like generalizations.
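To make that concrete, here is a minimal sketch of what an MLC-style training episode can look like. The toy grammar, pseudowords, and function names below are illustrative assumptions for this post, not code or stimuli from the paper: the point is only that each episode pairs a few “study” examples with “query” questions, and the word meanings are reshuffled every episode, so the network is pushed to practice composing rather than memorizing.

```python
import random

# Toy setup for illustration only (not the paper's actual stimuli): primitive
# pseudowords map to colored symbols, and "thrice" repeats whatever precedes it.
PRIMITIVES = ["dax", "wif", "lug"]
SYMBOLS = ["RED", "GREEN", "BLUE"]

def interpret(phrase: str, lexicon: dict) -> str:
    """Interpret a phrase like 'dax thrice' under a tiny compositional grammar."""
    words = phrase.split()
    outputs = [lexicon[words[0]]]
    if len(words) > 1 and words[1] == "thrice":
        outputs *= 3
    return " ".join(outputs)

def make_episode():
    """Build one meta-learning episode: study examples plus held-out queries.

    Word meanings are reshuffled each episode, so a model trained across many
    episodes can only succeed by learning *how to compose* from the study
    examples it is shown, not by memorizing a fixed word-to-symbol mapping.
    """
    lexicon = dict(zip(PRIMITIVES, random.sample(SYMBOLS, k=len(SYMBOLS))))
    study = [(word, interpret(word, lexicon)) for word in PRIMITIVES]
    queries = [(f"{word} thrice", interpret(f"{word} thrice", lexicon))
               for word in PRIMITIVES]
    return study, queries

if __name__ == "__main__":
    study, queries = make_episode()
    print("Study examples:", study)    # e.g. [('dax', 'BLUE'), ...]
    print("Query targets: ", queries)  # e.g. [('dax thrice', 'BLUE BLUE BLUE'), ...]
```

In the paper itself, a standard sequence-to-sequence transformer receives the study examples together with the query as a single input and is trained to produce the answer; the sketch above only captures the episode structure that makes the training “meta.”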

The researchers then put machines and humans side by side on the same systematic generalization tests. They used instruction-learning tasks to examine how each learner picked up novel words, as well as how each behaved when faced with ambiguous probes, phrases whose meaning the study examples don’t fully pin down. Crucially, the network wasn’t retrained for each test item; like the human participants, it had to work out what the new words meant from just a handful of examples.
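As a rough illustration of what an “ambiguous probe” means here (the pseudowords below are invented for this sketch, not taken from the paper), consider an episode whose study examples don’t determine a unique answer:

```python
# Study examples shown to the learner (illustrative pseudowords only).
study = {
    "blip": "YELLOW",              # a primitive word producing one symbol
    "blip zonk": "YELLOW YELLOW",  # so 'zonk' looks like "repeat it"
}

# Ambiguous probe: 'florp' never appears in the study examples, so nothing
# strictly dictates what 'florp zonk' should produce.
probe = "florp zonk"

# People tend to answer using consistent inductive biases, e.g. assuming each
# new word carries its own stable meaning (a "one-to-one" assumption), which
# suggests some new symbol, repeated. The interesting result is that the
# MLC-trained network's choices, including its occasional slips, pattern
# like people's on probes of this kind.
human_style_answer = "<SOME-NEW-SYMBOL> <SOME-NEW-SYMBOL>"
print(probe, "->", human_style_answer)
```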

Over the course of these comparisons, the MLC-trained network achieved human-like generalization and even produced human-like patterns of errors.

While we’re obsessing over the next big AI chatbot and the big tech companies are courting the corporations of the world with custom AI models and tools, research into the heart of AI (or even AGI) is moving at a similarly breakneck pace. With this news of an artificial neural network matching human performance on what we’d largely consider a task of intelligence, or even consciousness, we are one step closer to a world where the line between human and self-learning machine is blurry enough to feel dystopian.

By Abhimanyu

Unwrapping the fast-evolving AI popular culture.