Can AI Give Moral Advice? People Remain Skeptical

Even when AI moral advisors provide the same ethical guidance as humans, people remain skeptical—especially when AI prioritizes outcomes over individual moral principles. This reluctance highlights a fundamental challenge in AI adoption for high-stakes ethical decision-making.

Research: People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Image Credit: Shutterstock AI

Psychologists warn that AI's perceived lack of human experience and genuine understanding may limit its acceptance as a source of guidance on higher-stakes moral decisions.

Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are beginning to be designed to assist humans in making moral decisions based on established ethical theories, principles, or guidelines. While prototypes are being developed, AMAs are not yet in use offering consistent, bias-free recommendations and rational moral advice. As machines powered by AI grow in technological capacity and move into the moral domain, it is critical that we understand how people think about such artificial moral advisors.

Research led by the University of Kent's School of Psychology explored how people would perceive these advisors and whether they would trust their judgment compared to human advisors. It found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions about moral dilemmas. 

Published in the journal Cognition, the research shows that people have a significant aversion to AMAs (vs. humans) giving moral advice, even when the advice given is identical. This aversion was especially pronounced when advisors, human and AI alike, gave advice based on utilitarian principles (actions that could positively impact the majority). Advisors who gave non-utilitarian advice (e.g., adhering to moral rules rather than maximizing outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors, human or AI, who align with principles prioritizing individuals over abstract outcomes.

Even when participants agreed with the AMA's decision, they still anticipated disagreeing with AI in the future, indicating inherent skepticism.

Dr Jim Everett led the research at Kent alongside Dr Simon Myers at the University of Warwick. 

Dr Everett said: 'Trust in moral AI isn't just about accuracy or consistency; it's about aligning with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs and for designing systems that people truly trust. As technology advances, we might see AMAs become more integrated into decision-making processes, from healthcare to legal systems. There is therefore a major need to understand how to bridge the gap between AI capabilities and human trust.'

The research paper, 'People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors', by Jim Everett (University of Kent) and Simon Myers (University of Warwick), was published in Cognition.
