A study published in the journal Nature Human Behaviour explores psychological factors driving consumer attitudes toward AI systems. As AI proliferates across products and services, understanding resistance is critical for the appropriate adoption of beneficial technologies. The authors organize common barriers into five categories - opacity, emotionlessness, rigidity, autonomy, and non-humanness - tracing each to fundamental aspects of human cognition.
Significantly, they distinguish between AI-related factors and user-related factors within these barriers. The paper reviews empirical evidence on how the barriers shape attitudes and intentions to use AI, along with potential interventions to improve acceptance and the risks those interventions carry.
A critical insight is that identical AI systems may be perceived differently depending on individual variability in cognition. For instance, people who tend to anthropomorphize non-human entities are more likely to accept an AI system framed as having emotions. The authors advocate targeted, context-specific interventions tailored to system capabilities and user inclinations. Suggested strategies include offering transparent explanations to combat opacity concerns, framing AI as flexible and feeling where that is accurate, allowing some user oversight to restore a sense of control without sacrificing performance, and even directly addressing implicit biases like speciesism.
Understanding the AI System
The inherent black-box nature of many AI systems, where the internal logic and workings are opaque, violates people’s desire for predictability and coherence; not grasping how outputs are generated breeds distrust. Even so, people will use opaque AI if it outperforms transparent versions or human alternatives. Effective interventions include offering explanations of the system’s rationale, especially contrastive ones, although their effectiveness depends on user expectations and must be matched to the complexity of the task. The required degree of transparency also varies by context: higher stakes demand more understanding. Asking people to explain human decisions as well can shrink the illusory gap between how well they think they understand humans versus AI.
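To make the idea of a contrastive explanation concrete, here is a minimal Python sketch, not drawn from the paper: a toy model’s decision is explained by finding the smallest single-feature change that would have flipped it. The loan features, thresholds, and data are hypothetical illustrations, and scikit-learn is assumed.

```python
# A minimal sketch (not from the paper) of a contrastive, "why this and
# not that?" explanation: find the smallest single-feature change that
# flips a model's decision. Dataset and feature names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy loan data: [income_k, debt_ratio]; label 1 = approve, 0 = reject.
X = np.array([[30, 0.6], [40, 0.5], [55, 0.35], [70, 0.2],
              [25, 0.7], [80, 0.15], [45, 0.45], [65, 0.25]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def contrastive_explanation(model, x, feature_names, steps=50):
    """Return the smallest single-feature tweak that flips the prediction."""
    base = model.predict([x])[0]
    best = None
    for i, name in enumerate(feature_names):
        for direction in (+1, -1):
            for k in range(1, steps + 1):
                x_alt = x.copy()
                x_alt[i] += direction * k * 0.05 * (abs(x[i]) + 1e-6)
                if model.predict([x_alt])[0] != base:
                    change = abs(x_alt[i] - x[i])
                    if best is None or change < best[2]:
                        best = (name, x_alt[i], change)
                    break
    if best is None:
        return "No single-feature change found that flips the decision."
    name, new_val, _ = best
    return (f"Decision: {'approve' if base else 'reject'}. "
            f"It would flip if {name} were about {new_val:.2f} instead.")

applicant = np.array([42.0, 0.5])
print(contrastive_explanation(model, applicant, ["income_k", "debt_ratio"]))
```

Framing the answer as "the decision would differ if X were Y" mirrors the contrastive, "why this rather than that" style of explanation people tend to find most satisfying.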
Transparent design is crucial for appropriately calibrating trust in AI systems, yet engineering priorities that maximize performance are often misaligned with explainability. As AI assistants, companions, and decision-support tools permeate daily life, users require sufficient comprehension of their functioning to ensure safe, ethical adoption. System creators thus face trade-offs between accuracy and interpretability. Regulations mandating transparency would enable oversight but risk impairing cutting-edge innovation unless balanced against utility. Educating users about what systems can actually do, and why they are opaque, also helps mitigate the opacity barrier.
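As an illustration of the accuracy-versus-interpretability trade-off, the following sketch (an assumption-laden example, not the authors’ analysis) compares a linear model whose coefficients can be read directly against a black-box ensemble on the same scikit-learn dataset.

```python
# A minimal sketch (not from the paper) of the accuracy/interpretability
# trade-off: a linear model whose coefficients can be inspected directly
# versus a black-box ensemble that is often more accurate but harder to explain.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("gradient boosting", black_box)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:20s} mean CV accuracy: {acc:.3f}")

# The linear model's standardized coefficients are a built-in explanation;
# the ensemble would need post-hoc tools such as feature importances.
interpretable.fit(X, y)
coefs = interpretable.named_steps["logisticregression"].coef_[0]
print("Largest-magnitude coefficient:", coefs[abs(coefs).argmax()].round(2))
```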
Framing AI Capabilities
Although people readily anthropomorphize non-human entities, the human-like attributes they ascribe to AI typically exclude emotional abilities. People view such systems as adept at rational, objective tasks but not at subjective ones that seem to require feelings. This drives greater skepticism and a lower willingness to rely on AI in social, creative, communicative, and interpersonal contexts than in logistical ones. However, properly designed systems already match or outperform humans on many emotional and subjective capabilities. Strategically framing tasks in objective terms and highlighting AI achievements in emotional domains alleviates resistance.
Explicit anthropomorphization, like assigning names and gendered voices, significantly improves user attitudes and trust. However, ascribing a broad range of human-like attributes risks fostering incorrect mental models of system abilities and exposure to uncontrolled edge cases. Moreover, in sensitive contexts where anonymity is preferred, like medical diagnoses, anthropomorphism may backfire by eroding the impersonal, non-judgmental quality that makes AI interactions appealing. Still, thoughtful humanization of intelligent systems can foster greater user receptivity.
Demonstrating Learning
The historical view of machines as rigidly executing predefined, unchanging instructions persists for modern AI. Despite the proliferation of self-improving algorithms, people doubt that systems can learn from experience the way humans do. Believing AI cannot correct its mistakes lowers perceived reliability and the willingness to delegate decisions. However, interventions that directly display dynamic learning, such as performance trajectories, boost adoption more than static accuracy statistics. Showcasing improvement on subjective tasks simultaneously counters the emotionlessness barrier.
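A minimal sketch of the "show the trajectory, not a single number" idea, assuming scikit-learn and a toy dataset rather than anything from the paper: the model is trained incrementally and its held-out accuracy is recorded after each batch, producing a curve that could be surfaced to users instead of one static figure.

```python
# A minimal sketch (not from the paper) of presenting a learning trajectory
# rather than a single static accuracy figure: train incrementally and
# record held-out accuracy after each batch of new data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = SGDClassifier(random_state=0)
classes = np.unique(y)
trajectory = []

# Feed the data in batches, as if the system kept learning after deployment.
for batch in np.array_split(np.arange(len(X_train)), 10):
    model.partial_fit(X_train[batch], y_train[batch], classes=classes)
    trajectory.append(round(model.score(X_test, y_test), 3))

print("Accuracy after each batch:", trajectory)  # an improving curve, not one number
```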
The capacity to learn and improve from data is a defining feature of modern AI, yet user perceptions trail well behind this reality. Many still think of AI as programmed rules rather than data-driven inference, interpreting its outputs as scripted rather than probabilistic. Updated mental models are imperative for appropriate trust calibration and productive human-AI collaboration. Scientists and developers should continue explicating system adaptability, and companies must communicate it clearly to consumers. Continued exposure to AI’s iterative growth will gradually familiarize the public with its learning abilities.
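To illustrate the scripted-versus-probabilistic distinction, here is a small, hypothetical contrast (not from the paper, and assuming scikit-learn): a hand-coded rule whose output never changes versus a learned classifier whose graded probabilities come from data and shift whenever it is retrained.

```python
# A minimal sketch (not from the paper) contrasting a scripted rule with a
# data-driven, probabilistic model. The spam-keyword rule and the tiny
# training set are hypothetical illustrations.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def scripted_rule(text: str) -> str:
    """A fixed, hand-coded rule: its output never changes unless a human edits it."""
    return "spam" if "free money" in text.lower() else "not spam"

# A learned model: its behaviour is inferred from examples, expressed as
# probabilities, and changes whenever it is retrained on new data.
texts = ["free money now", "win free money", "meeting at noon",
         "lunch tomorrow?", "claim your free prize", "project update attached"]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

message = "free lunch tomorrow"
print("Scripted rule says:", scripted_rule(message))
proba = dict(zip(model.classes_, model.predict_proba([message])[0].round(2)))
print("Learned model says:", proba)  # graded confidence, not a fixed verdict
```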
Emphasizing Oversight
People inherently seek predictability and control over their environments, so AI systems that independently set and adapt goals threaten personal agency. Making decisions without human oversight raises apprehensions about losing control and about the risks of unconstrained optimization. This explains the prevalent preference for augmented intelligence that assists people rather than replaces them. Restoring some oversight, such as letting users approve system plans, increases acceptance even at a cost to accuracy. Autonomous systems that move along predictable paths also reassure users.
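As a sketch of what "approving system plans" can look like in practice, the snippet below (purely illustrative; all names and the console prompt are hypothetical stand-ins) gates execution of an AI-proposed plan behind a human confirmation step.

```python
# A minimal sketch (not from the paper) of a human-approval step: the system
# proposes a plan, but nothing executes until a person confirms it.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Plan:
    description: str
    steps: List[str]

def propose_plan() -> Plan:
    # Stand-in for whatever the AI system would actually generate.
    return Plan("Reschedule overdue deliveries",
                ["Notify customers", "Rebook couriers", "Update tracking pages"])

def execute(plan: Plan) -> None:
    for step in plan.steps:
        print(f"Executing: {step}")

def run_with_oversight(propose: Callable[[], Plan],
                       approve: Callable[[Plan], bool]) -> None:
    plan = propose()
    if approve(plan):
        execute(plan)
    else:
        print(f"Plan '{plan.description}' rejected; nothing was executed.")

# In a real product the approval callback would be a UI prompt; here a
# simple console prompt stands in for the human reviewer.
if __name__ == "__main__":
    run_with_oversight(
        propose_plan,
        lambda p: input(f"Approve '{p.description}' ({len(p.steps)} steps)? [y/N] ").lower() == "y",
    )
```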
However, conceding too much control can significantly degrade performance, creating dilemmas for designers. Ideal configurations likely combine automation with human guardrails, and the appropriate balance depends on the use case and its risks: complete self-direction is unsuitable for medical diagnoses but feasible for mundane logistics. Responsible development requires understanding and respecting when users demand involvement. Investigating attitudes toward meaningful tasks, those whose manual completion provides psychological value, will better inform how much autonomy to allocate in AI products.
Future Outlook
This psychological framework offers considerable utility for improving societal adoption of transformative technologies, provided interventions balance objectives with ethics. Because attitudes vary across cultural settings and evolve as familiarity grows, continual tracking is needed, especially as systems gain increasingly general capabilities. Extending the investigation to social and collective applications would also be worthwhile, given AI’s expanding interpersonal roles. Further work should integrate these factors with long-standing models of technology acceptance for a consolidated understanding.
Overall, aligning human and artificial intelligence requires collaboration between technologists and behavioral scientists. Improving attitudes and behaviors means grappling jointly with engineering and human realities. Beyond building performant systems, it is important to elucidate cognition, address subjective perceptions, and craft interventions that allow calibrated reliance on and safe adoption of AI capabilities in everyday life. This perspective underscores the importance of further psychological research for realizing AI’s promise.
Journal reference:
- De Freitas, J., Agarwal, S., Schmitt, B., & Haslam, N. (2023). Psychological factors underlying attitudes toward AI tools. Nature Human Behaviour, 7(11), 1845–1854. https://doi.org/10.1038/s41562-023-01734-2