Can AI Learn Like a Child? Scientists Say Yes!

Unlike ChatGPT, which predicts words based on massive text datasets, a new AI approach enables machines to learn language like humans—through interaction, intention reading, and real-world experience, leading to more meaningful and efficient communication.

Research: Humans Learn Language from Situated Communicative Interactions. What about Machines?

"Children learn their native language by communicating with the people around them in their environment. As they play and experiment with language, they attempt to interpret the intentions of their conversation partners. In this way, they gradually learn to understand and use linguistic constructions."

This process relies on two key cognitive mechanisms: ‘intention reading’—understanding others’ communicative intent—and ‘pattern finding’—identifying linguistic structures from repeated exposure.

"This process, in which language is acquired through interaction and meaningful context, is at the core of human language acquisition," says Katrien Beuls.

"The current generation of large language models (LLMs), such as ChatGPT, learns language in a very different way," adds Paul Van Eecke. "By observing vast amounts of text and identifying which words frequently appear together, they generate texts that are often indistinguishable from human writing."

This approach follows the tradition of ‘distributional linguistics,’ which assumes that word meaning is derived from statistical co-occurrence rather than real-world grounding.
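The distributional idea can be illustrated with a toy sketch: represent each word purely by the counts of the other words it co-occurs with, and words that appear in similar contexts end up with similar vectors. This is a minimal illustration of the principle, not the training procedure of any actual LLM; the corpus and similarity measure are assumptions chosen for clarity.

```python
from collections import Counter
from itertools import combinations
import math

# Toy corpus: under the distributional assumption, a word's "meaning"
# is approximated by which other words it co-occurs with.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Count co-occurrences within each sentence (both directions).
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """A word's co-occurrence counts with every vocabulary word."""
    return [cooc[(word, other)] for other in vocab]

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

# "cat" and "dog" occur in similar contexts, so their vectors are closer
# than "cat" and "cheese" -- even though the model has never seen either.
print(cosine(vector("cat"), vector("dog")))
print(cosine(vector("cat"), vector("cheese")))
```

The sketch makes the article's point concrete: similarity here reflects only textual context, with no grounding in any real-world cat or dog.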

"This results in models that are extremely powerful in many forms of text generation—such as summarizing, translating, or answering questions—but that also exhibit inherent limitations. They are susceptible to hallucinations and biases, often struggle with human reasoning, and require enormous amounts of data and energy to build and operate."

The researchers propose an alternative model in which artificial agents learn language as humans do by engaging in meaningful communicative interactions within their environment. Through experiments, they demonstrate how artificial agents, equipped with sensory input, develop linguistic constructions based on direct interactions rather than relying solely on text-based learning. This leads to language models that:

  • Are less prone to hallucinations and biases, as their language comprehension is grounded in direct interaction with the world.
  • Use data and energy more efficiently, resulting in a smaller ecological footprint.
  • Are more deeply rooted in meaning and intention, enabling them to understand language and context in a more human-like manner.
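The alternative paradigm can be sketched with a classic "naming game": agents repeatedly interact about a shared, perceivable object, inventing words when they lack one and adopting their partner's word after a failed exchange, until a shared vocabulary emerges. This is a deliberately simplified illustration of situated, interaction-driven learning, not the researchers' actual experimental setup; the agent count, object names, and adoption rule are all assumptions.

```python
import random

random.seed(0)

OBJECTS = ["red-ball", "blue-cube", "green-cone"]

class Agent:
    """An agent that builds its lexicon only through situated interactions."""
    def __init__(self):
        self.lexicon = {}  # object -> word

    def word_for(self, obj):
        if obj not in self.lexicon:
            # No convention yet: invent a new word (e.g. "w42").
            self.lexicon[obj] = f"w{random.randint(0, 99)}"
        return self.lexicon[obj]

def interaction(speaker, hearer):
    """One communicative episode about an object both agents perceive."""
    obj = random.choice(OBJECTS)            # the shared situation
    word = speaker.word_for(obj)            # speaker names the object
    if hearer.lexicon.get(obj) == word:     # success: conventions align
        return True
    hearer.lexicon[obj] = word              # failure: hearer adopts the word
    return False

agents = [Agent() for _ in range(5)]
successes = []
for _ in range(300):
    speaker, hearer = random.sample(agents, 2)
    successes.append(interaction(speaker, hearer))

# Early games mostly fail; later games mostly succeed as a shared
# vocabulary emerges from grounded interaction alone.
print(sum(successes[:50]), "successes in the first 50 games")
print(sum(successes[-50:]), "successes in the last 50 games")
```

Even this bare-bones version shows the key contrast with text-only training: every word an agent knows was acquired in a communicative situation where its meaning was directly observable.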

"Integrating communicative and situated interactions into AI models is a crucial step in developing the next generation of language models." 

Unlike LLMs, which process language without understanding communicative intent, these new models enable AI to infer meaning pragmatically—similar to human reasoning.

"This research offers a promising path toward language technologies that more closely resemble how humans understand and use language," the researchers conclude.
