While some cultures welcome AI in decision-making roles, others fear its impact on justice and healthcare. A new study reveals that fear of AI judges and doctors stems from concerns about fairness, empathy, and trust, highlighting the need for culturally sensitive AI deployment strategies.
Research: Fears About Artificial Intelligence Across 20 Countries and Six Domains of Application.
How would you react to receiving a diagnosis from an AI doctor? Would you trust a courtroom verdict delivered by an AI judge? Would you rely on news stories written entirely by a machine? Would you feel motivated to work under an AI manager? These questions are at the heart of a recent study that examines widespread concerns about AI replacing human workers and reveals cultural differences in how people view AI's involvement in six key occupations: doctors, judges, managers, caregivers, religious leaders, and journalists.
Over 10,000 participants from 20 countries, including the United States, India, Saudi Arabia, Japan, and China, rated these six occupations on eight psychological traits: warmth, sincerity, tolerance, fairness, competence, determination, intelligence, and imagination. They also assessed how well AI could replicate each of these traits and reported how much they feared AI taking over each role. The findings suggest that when AI is introduced into a new job, people instinctively compare the human traits the job requires with AI's ability to imitate them; notably, the level of fear participants reported appeared directly linked to the size of this perceived mismatch.
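To make the mechanism concrete, the following is a minimal sketch of how such a trait-mismatch score could be computed. The eight traits come from the study, but the 0-10 rating scale, the scoring rule, and all example values are hypothetical illustrations, not the authors' actual analysis:

```python
# Hypothetical sketch of the trait-mismatch idea described above.
# The eight traits are from the study; the ratings and the scoring
# rule itself are illustrative assumptions.

TRAITS = ["warmth", "sincerity", "tolerance", "fairness",
          "competence", "determination", "intelligence", "imagination"]

def mismatch_score(required, ai_capability):
    """Average per-trait shortfall between what an occupation is seen
    to require and what AI is perceived to deliver (0 = no gap)."""
    gaps = [max(0.0, required[t] - ai_capability[t]) for t in TRAITS]
    return sum(gaps) / len(gaps)

# Hypothetical ratings for the role of judge on a 0-10 scale.
required_for_judge = {
    "warmth": 6, "sincerity": 9, "tolerance": 8, "fairness": 10,
    "competence": 9, "determination": 7, "intelligence": 9, "imagination": 4,
}
perceived_ai = {
    "warmth": 2, "sincerity": 3, "tolerance": 5, "fairness": 6,
    "competence": 8, "determination": 8, "intelligence": 9, "imagination": 5,
}

print(f"Judge mismatch score: {mismatch_score(required_for_judge, perceived_ai):.2f}")
# On the study's logic, a larger mismatch score should accompany
# greater reported fear of AI taking over the occupation.
```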
The researchers found substantial differences in fear levels between countries. India, Saudi Arabia, and the United States reported the highest average fear levels, particularly regarding AI in roles such as judges and doctors. Conversely, Turkey, Japan, and China displayed the lowest fear levels, suggesting that cultural factors, such as historical experiences with technology, media narratives, and AI policies, significantly shape attitudes. AI-related fears in Germany were moderate, falling between the higher and lower levels observed elsewhere, a middle ground that points to cautious optimism about integrating AI into society.
The researchers also found occupation-specific differences in fear. Judges consistently ranked as the most feared AI occupation in nearly all countries, reflecting concerns about fairness, transparency, and moral judgment. Conversely, AI-driven journalists were the least feared, likely because people retain autonomy over how they engage with journalists' output, whereas judicial decisions leave little room for personal discretion. Other roles, such as AI doctors and AI care workers, elicited strong fears in some countries over AI's perceived lack of empathy and emotional understanding.
This aligns with the findings of an earlier study on AI managers, in which the researchers found initial indications that people react particularly negatively to AI managers compared with AI co-workers or AI tools that assist with work. The adverse reaction was especially strong in areas of management that call for human abilities such as empathetic listening or respectful behavior (Dong, Bonnefon, & Rahwan, 2024).
"Adverse effects can follow whenever AI is deployed in a new occupation. An important task is to find a way to minimize adverse effects, maximize positive effects, and reach a state where the balance of effects is ethically acceptable", says first author Mengchen Dong, a research scientist at the Center for Human and Machines at the Max Planck Institute for Human Development. The study identifies a critical link between fear and the mismatch between occupational expectations and AI's perceived capabilities, offering a framework to guide culturally sensitive AI development.
By understanding what people value in human-centric roles, developers and policymakers can create and communicate about AI technologies in ways that build trust and acceptance. "A one-size-fits-all approach overlooks critical cultural and psychological factors, potentially adding barriers to the adoption of beneficial AI technologies across different societies and cultures," adds co-author Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development.
The study also highlights practical strategies for alleviating fears. For instance, concerns about AI doctors lacking sincerity might be addressed through increased transparency in decision-making and positioning AI as a support tool for human practitioners rather than a replacement. Similarly, fears about AI judges could be mitigated by focusing on fairness-enhancing algorithms and public education campaigns that demystify how AI systems operate.
Dong and her colleagues are continuing this work by exploring how utopian and dystopian visions of AI influence present-day attitudes in different countries. These ongoing efforts aim to deepen the understanding of human-AI interaction and guide the ethical and culturally informed deployment of AI systems worldwide.
In brief:
- The study, with over 10,000 participants in 20 countries, reveals significant cultural differences in public fears about AI replacing humans in six occupations: doctors, judges, managers, caregivers, religious leaders, and journalists.
- Fear arises when there is a discrepancy between the assumed capabilities of AI and the skills required for the role.
- Results show that countries like India, Saudi Arabia, and the U.S. have higher levels of fear, especially regarding AI in roles like doctors and judges, while Japan, China, and Turkey report lower fear levels, indicating that cultural factors influence attitudes.
- The research highlights the importance of designing AI systems that align with public expectations and offers strategies for reducing fears.
Journal reference:
- Dong, M., Bonnefon, J.-F., & Rahwan, I. (2025). Fears about artificial intelligence across 20 countries and six domains of application. American Psychologist. https://psycnet.apa.org/fulltext/2025-56995-001.html
- Dong, M., Bonnefon, J.-F., & Rahwan, I. (2024). Toward human-centered AI management: Methodological challenges and future directions. Technovation, 131, Article 102953. DOI: 10.1016/j.technovation.2024.102953