In a paper published on the preprint server PsyArXiv, researchers explored whether artificial intelligence (AI) systems could be both intelligent and conscious, examining why some believe AI might develop consciousness and identifying the psychological biases that can lead such thinking astray. The paper asked what it would take for conscious AI to be a realistic prospect, challenging common assumptions such as the idea that computation alone provides a sufficient basis for consciousness.
Instead, the researchers argued that consciousness may depend on our nature as living organisms, a view known as biological naturalism. They concluded by discussing broader issues, including how to test for consciousness in AI and the ethical considerations raised by AI that either is, or convincingly seems to be, conscious.
Conscious AI
As AI systems advance, questions about their potential consciousness arise even though there is no consensus on the conditions required for consciousness. The idea that AI might become conscious is widespread, shaping both AI development and societal perceptions. The paper explores the prospects and pitfalls of creating conscious AI, addressing the psychological biases that inflate these prospects and challenging assumptions such as computational functionalism and substrate neutrality.
The article concludes by discussing broader issues, including testing for AI consciousness, the importance of embodiment and embeddedness, and ethical concerns about AI that appears conscious. Even if true artificial consciousness remains a remote possibility, significant ethical and societal issues arise from AI systems that convincingly mimic consciousness.
Consciousness and Computation
Computational functionalism posits that certain kinds of computations can instantiate consciousness, deriving from the broader philosophical notion of functionalism, which argues that consciousness depends on what a system does rather than its material composition. Functionalism posits that a system possesses a mind if it has the correct functional structure, where mental states are defined by their relationships with sensory inputs, motor outputs, and other mental states.
Applied to consciousness, computational functionalism claims that the relevant functional organization is computational, defined over computational states and their relations to inputs and outputs. This view is distinct from questions about whether consciousness has functions or plays useful roles within a system.
Substrate neutrality, closely tied to computational functionalism, posits that similar mental states can occur in systems with different physical substrates if they implement the relevant computations. The 'neural replacement' thought experiment supports this idea, proposing that replacing brain cells with silicon alternatives that preserve functional organization would not affect consciousness. However, critics argue that differences in internal processes and overall behavior would emerge, challenging the assumption that consciousness can be substrate-neutral.
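To make the notion of substrate neutrality more concrete, the sketch below (an illustration added here, not taken from the preprint) realizes one abstract functional organization in two different "substrates": a lookup table and branching code. The toy states and stimuli are hypothetical; the point is only that the two realizations are indistinguishable at the level of inputs and outputs, which is what multiple realizability claims.

```python
from typing import Tuple

# Illustrative sketch (not from the preprint): multiple realizability.
# One abstract functional organization -- states defined purely by how each
# (state, stimulus) pair maps to a next state and an output -- realized in
# two different "substrates". The toy states and stimuli are hypothetical.

# Realization A: a lookup table.
TRANSITIONS = {
    ("calm", "threat"):  ("alert", "withdraw"),
    ("alert", "threat"): ("alert", "withdraw"),
    ("calm", "food"):    ("calm", "approach"),
    ("alert", "food"):   ("alert", "approach"),
}

def step_table(state: str, stimulus: str) -> Tuple[str, str]:
    # Next state and output read directly from the table.
    return TRANSITIONS[(state, stimulus)]

# Realization B: branching code with the same input-output structure.
def step_branching(state: str, stimulus: str) -> Tuple[str, str]:
    if stimulus == "threat":
        return "alert", "withdraw"
    return state, "approach"

if __name__ == "__main__":
    # Both realizations agree on every (state, stimulus) pair.
    for state in ("calm", "alert"):
        for stimulus in ("threat", "food"):
            assert step_table(state, stimulus) == step_branching(state, stimulus)
    print("Both realizations share the same functional organization.")
```

On a functionalist reading, nothing distinguishes the two implementations that matters for mentality; the critics discussed above deny that this equivalence carries over to consciousness.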
Further, the concept of 'mortal computation' suggests that biological brains, which are energy-efficient, implement computations that cannot be separated from their physical substrate, limiting the feasibility of substrate-neutral conscious AI.
While these arguments do not disprove computational functionalism, they cast doubt on its plausibility and the feasibility of conscious AI. Brains exhibit complex, multiscale activity influenced by chemical diffusion, physical neural structures, and metabolic processes, challenging the metaphor of the brain as a computer. This complexity suggests alternative possibilities, such as non-computational functionalism, where consciousness depends on functional organization but not computations. This perspective aligns with dynamical systems theory, which models brain activity without assuming a computational stance, offering a different way to understand the relationship between consciousness and neural processes.
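As a rough illustration of the dynamical-systems alternative mentioned above (not an example from the preprint), the sketch below integrates a Wilson-Cowan-style pair of coupled differential equations for excitatory and inhibitory population activity. The parameter values are arbitrary; the point is that the system is described as continuously evolving state, without interpreting that evolution as the execution of a computation.

```python
import numpy as np

# Illustrative sketch (not from the preprint): a Wilson-Cowan-style neural
# mass model treats brain activity as coupled differential equations rather
# than as symbol-manipulating computation. Parameter values are arbitrary.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(steps=1000, dt=0.01, w_ee=12.0, w_ei=10.0, w_ie=9.0, w_ii=3.0,
             tau_e=1.0, tau_i=2.0, drive=1.5):
    E, I = 0.1, 0.1  # excitatory and inhibitory population activity
    trajectory = []
    for _ in range(steps):
        # Rate of change of each population given recurrent coupling and drive.
        dE = (-E + sigmoid(w_ee * E - w_ei * I + drive)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
        E += dt * dE  # simple Euler integration of the coupled equations
        I += dt * dI
        trajectory.append((E, I))
    return trajectory

if __name__ == "__main__":
    traj = simulate()
    print("final state (E, I):", traj[-1])
```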
Biological Consciousness
Biological naturalism posits a fundamental connection between life and consciousness, asserting that consciousness emerges from specific biological processes within living organisms. This perspective, distinguished from biopsychism, contends that consciousness is not ubiquitous among living systems but arises only in those with certain biological properties. John Searle's critique of computational functionalism, notably through his Chinese room argument, underscores the distinction between syntax and semantics, although it primarily targets understanding and intelligence rather than consciousness. In contrast, biological naturalism focuses on consciousness as a biological phenomenon rooted in neural, and possibly broader biological, activity within the brain and body.
The predictive processing framework provides a compelling account of how consciousness might manifest in biological systems. It posits that perceptual content is not passively received but actively inferred through predictive error minimization. This active inference involves the brain generating predictions about incoming sensory signals, continually updating them to reduce prediction errors.
Consciousness, on this view, emerges from the organism's continuous drive to sustain its physiological integrity and adapt to its environment. This process is closely intertwined with the organism's autopoietic functions of self-maintenance and regulation, and with the homeostatic imperatives that ensure stability amid external change. Thus, biological naturalism, supported by predictive processing theory, offers a coherent account of how certain biological characteristics underpin the emergence of consciousness in living beings.
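For a concrete picture of the prediction error minimization described above, here is a deliberately simplified sketch (an illustration added here, not drawn from the preprint). A hypothetical generative mapping g predicts a sensory signal from an internal estimate, and the estimate is revised by gradient descent on the squared prediction error, with the constant factor from the derivative absorbed into the learning rate.

```python
# Minimal illustrative sketch (not from the preprint): prediction error minimization.
# A hypothetical generative mapping g() predicts the sensory signal from an internal
# estimate `mu`; the estimate is revised by gradient descent on the squared prediction
# error, so predictions are continually updated to match the incoming signal.

def g(mu):
    # Hypothetical generative mapping from an inferred hidden cause to a predicted sensation.
    return 2.0 * mu

def infer(sensory_input, mu=0.0, learning_rate=0.05, steps=200):
    dg_dmu = 2.0  # derivative of the generative mapping, used in the gradient
    for _ in range(steps):
        prediction_error = sensory_input - g(mu)         # mismatch between signal and prediction
        mu += learning_rate * prediction_error * dg_dmu  # descend the squared-error gradient
    return mu

if __name__ == "__main__":
    estimate = infer(sensory_input=4.0)
    print("inferred cause:", round(estimate, 4), "predicted sensation:", round(g(estimate), 4))
```

In the fuller active inference picture sketched in the paper, such updates are driven not only by perceptual accuracy but by the organism's imperative to keep its physiological variables within viable bounds.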
Conclusion
In summary, the human tendency to create technologies in our own image, and to project ourselves into them, obscures clear thinking about consciousness and AI. Equating human experience with AI capabilities risks undervaluing humanity. Consciousness may well not emerge in conventional AI systems, consistent with biological naturalism's view that it arises from specific biological processes. Continuing technological advances, including in synthetic biology, will keep challenging our understanding of consciousness and where it can arise.
*Important notice: OSF publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.
Journal reference:
- Preliminary scientific report.
Seth, Anil. "Conscious Artificial Intelligence and Biological Naturalism." PsyArXiv, 30 June 2024. DOI: 10.31234/osf.io/tz6an, https://osf.io/preprints/psyarxiv/tz6an
Article Revisions
- Jul 4 2024 - Corrected the publication venue, which had been cited as a journal when it is in fact a preprint server hosting work prior to peer review.