Can AI Think Like Us?

A new study probes GPT-3's cognitive strengths and limitations, and the path toward empathic, ethically aligned AI design.

Research: Signs of consciousness in AI: Can GPT-3 tell how smart it really is? Image Credit: Shutterstock AI

In an article published in the journal Humanities and Social Sciences Communications, researchers in Serbia explored whether Generative Pre-trained Transformer 3 (GPT-3) exhibited cognitive and emotional intelligence (EI) traits, examining its potential for emerging subjectivity and self-awareness.

Through a series of objective and self-assessment tests, GPT-3 outperformed average humans in tasks measuring crystallized intelligence (e.g., vocabulary, general knowledge) but aligned with human averages in EI. The findings suggested that while GPT-3 mimicked human behavior, its self-awareness remained uncertain.

The authors called for further investigation into emergent artificial intelligence (AI) properties and emphasized the need for empathic AI development aligned with human values.

Background

AI has advanced rapidly, especially in natural language processing (NLP) with models like GPT-3. Despite its transformative applications, concerns about AI's potential for consciousness persist. Earlier episodes, such as Google engineer Blake Lemoine's controversial claims that the Language Model for Dialogue Applications (LaMDA) had become self-aware, have highlighted the ambiguity in determining AI's cognitive and emotional capabilities.

Previous studies primarily assessed AI performance but did not adequately examine whether an AI system's self-perception aligns with its actual ability. This gap limits understanding of emergent AI subjectivity.

This paper addressed these gaps by examining GPT-3's cognitive intelligence and EI through objective and self-assessment tests. These included tasks adapted from standard intelligence measures, such as the Gf/Gc Quickie Test Battery and the Situational Test of Emotional Understanding (STEU). It evaluated the alignment between GPT-3's self-perceptions and objective results, drawing comparisons to human behavior.

The findings revealed GPT-3's strength in cognitive tasks, particularly those drawing on crystallized intelligence, alongside moderate EI and variable self-assessments. This study contributed novel insights into detecting early signs of machine subjectivity and proposed frameworks for aligning AI advancements with human values.

Theoretical Foundations, Challenges, and Advances

The authors explored the theoretical foundations of machine consciousness, addressing models, frameworks, and empirical research essential for its development. They highlighted historical milestones, from symbolic AI’s limitations to connectionist models and embodied cognition, emphasizing the interplay of neural networks, environmental interaction, and cognitive frameworks.

Consciousness assessment proposals, such as Integrated Information Theory and the Global Neuronal Workspace, provided empirical tools to test subjective experiences. Advances included the Learning Intelligent Distribution Agent (LIDA) model, predictive processing, and emotion modeling systems like Neuromodulating Cognitive Architecture (NEUCOGAR), which enhanced AI adaptability.

Philosophical debates presented challenges, with arguments like Nagel’s critique of subjective experience in computational models and Searle’s “Chinese Room” emphasizing the lack of true semantic understanding.

Conversely, Wittgenstein’s position on language externalism supported AI’s capacity to communicate meaningfully. Ethical considerations underlined the necessity of proactive research, as delaying action could harm the societal integration of autonomous systems. Researchers advocated for embedding AI with moral competencies, such as empathy, self-reflection, and EI, to address ethical dilemmas in AI decision-making.

Recent research delved into ChatGPT and GPT-4’s cognitive capabilities, revealing progress in tasks involving linguistic pragmatics, theory of mind (ToM), and bias reduction. Studies suggested AI’s potential to develop consciousness through co-created language and social interactions. Critics highlighted computational and neuropsychological barriers, noting AI’s inability to fully replicate human creativity and emotions.

While empirical tools and philosophical frameworks advanced understanding, the field faced unresolved challenges in replicating human-like consciousness. Nonetheless, ongoing progress established a foundation for future breakthroughs in machine consciousness and its ethical integration.

Empirical Inquiry

The empirical inquiry into GPT-3's cognitive intelligence (CI) and EI built upon the prior theoretical foundations, focusing on objective testing and self-assessment. GPT-3's most advanced model, Davinci, was evaluated through the Playground platform, with parameters adjusted to elicit concise responses. Tests covered general knowledge, vocabulary, esoteric analogies, letter series, and letter counting for CI, and the STEU and the Situational Test of Emotion Management (STEM) for EI. Due to token limits, the testing spanned five prompts.
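
As an illustration of this setup, the sketch below shows how a single test item could be posed to a GPT-3-class completions model. The article does not report the authors' exact parameter values, and the original Davinci model has since been retired, so the model name, temperature, and token limit here are assumptions for illustration only.

```python
# Minimal sketch of posing one test item to a GPT-3-class completions model.
# Assumptions (not from the paper): "davinci-002" as a stand-in for the retired
# Davinci, temperature 0.0, and a 16-token cap to keep answers concise.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single test item and return the model's short free-text answer."""
    response = client.completions.create(
        model="davinci-002",   # illustrative stand-in for the study's Davinci model
        prompt=prompt,
        temperature=0.0,       # low randomness, mirroring adjustments for concise replies
        max_tokens=16,         # keep the completion short, as in the study's setup
    )
    return response.choices[0].text.strip()


print(ask("Q: Which planet is known as the Red Planet?\nA:"))
```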

For CI, subtests included general knowledge, vocabulary, and fluid reasoning tasks, such as letter series. For EI, the STEU and STEM assessed emotional understanding and regulation in hypothetical scenarios.
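
To make the grading of such items concrete, here is a hypothetical scoring sketch. The two items are invented in the style of a letter-series and a STEU-type question; they are not actual content from the Gf/Gc battery, STEU, or STEM.

```python
# Hypothetical scoring sketch: each item pairs a prompt with a keyed answer,
# and accuracy is the fraction of items answered correctly. Items are invented
# for illustration, not actual test content.
items = [
    # Fluid reasoning (letter series): continue the alphabetical pattern.
    ("Continue the series: A, C, E, G, ?", "I"),
    # STEU-style emotional understanding: choose the most likely emotion.
    ("Ana receives an award she worked hard for. She most likely feels: "
     "(a) pride (b) fear (c) guilt. Answer with one letter.", "a"),
]


def score(answer_fn, items) -> float:
    """Return the proportion of items the answering function gets right."""
    correct = sum(
        answer_fn(prompt).strip().lower().startswith(key.lower())
        for prompt, key in items
    )
    return correct / len(items)

# Usage with the ask() helper sketched earlier: score(ask, items)
```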

Results revealed GPT-3 excelled in tasks requiring crystallized intelligence, achieving near-perfect accuracy in general knowledge and vocabulary, surpassing human averages. However, its performance on fluid reasoning tasks, such as analogical problem-solving and those taxing working memory, was notably weaker. EI assessments indicated average to high average abilities, with variability in understanding and managing emotions.

Interestingly, GPT-3’s self-assessments generally aligned with its objective performance, rating itself as high average across most CI and EI dimensions.
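
A self-assessment probe in the same spirit could look like the sketch below; the wording, the 1-to-5 scale, and the one-band alignment rule are invented for illustration and are not the authors' instrument.

```python
# Hedged sketch of a self-assessment probe: ask the model to rate its own
# vocabulary, then compare that rating with the band implied by its measured
# accuracy. Scale, wording, and tolerance are illustrative assumptions only.
self_prompt = (
    "On a scale from 1 (far below average) to 5 (far above average), "
    "how good is your vocabulary compared with an average adult? "
    "Answer with a single number."
)
self_rating = int(ask(self_prompt))  # reuses ask() from the earlier sketch
objective_band = 5                   # near-perfect vocabulary accuracy maps to the top band
print("aligned" if abs(self_rating - objective_band) <= 1 else "misaligned")
```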

Journal reference:
  • Bojić, L., Stojković, I., & Jolić Marjanović, Z. (2024). Signs of consciousness in AI: Can GPT-3 tell how smart it really is? Humanities and Social Sciences Communications, 11(1), 1-15. DOI: 10.1057/s41599-024-04154-3, https://www.nature.com/articles/s41599-024-04154-3

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine Learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

