Why People Trust AI for Music Picks but Not for Medical Decisions

As AI takes on more decision-making roles, a study reveals that people with better statistical literacy are less likely to trust it in high-stakes situations—raising questions about how AI should be integrated into critical fields like healthcare and employment.

Research: Factors influencing trust in algorithmic decision-making: an indirect scenario-based experiment. Image Credit: Stock-Asso / Shutterstock

Artificial intelligence (AI) adeptly serves content that matches our preferences and past behaviors, from tailored Netflix recommendations to personalized Facebook feeds. But while a restaurant tip or two is handy, how comfortable would you be if AI algorithms were in charge of choosing your medical specialist or your company's next hire?

A new study from the University of South Australia shows that people are more likely to trust AI in low-stakes situations, such as music suggestions, than in high-stakes situations, such as medical decisions.

However, those with poor statistical literacy or little familiarity with AI were just as likely to trust algorithms for trivial choices as they were for critical decisions.

Researchers assessed responses from nearly 2,000 participants across 20 countries and found that statistical literacy affects trust differently depending on the stakes. People who understand that AI algorithms work through pattern-based predictions, but also carry risks and biases, were more skeptical of AI in high-stakes situations yet less so in low-stakes ones.
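
The paper reports this pattern as an interaction between literacy and decision stakes. A minimal sketch of how such an interaction could be tested, using simulated data and hypothetical variable names rather than the study's actual dataset or model, might look like this:

```python
# Minimal sketch (not the study's actual analysis): testing whether the effect
# of statistical literacy on trust depends on decision stakes, via a logistic
# regression with an interaction term. All data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # roughly the study's sample size

# Simulate the reported pattern: literacy raises trust when stakes are low
# and lowers it when stakes are high.
literacy = rng.normal(0.0, 1.0, n)     # standardized literacy score
high_stakes = rng.integers(0, 2, n)    # 0 = music pick, 1 = medical decision
logit_p = 0.2 + 0.4 * literacy - 0.8 * high_stakes - 0.6 * literacy * high_stakes
trust = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

df = pd.DataFrame({"trust": trust, "literacy": literacy, "high_stakes": high_stakes})

# A negative literacy:high_stakes coefficient mirrors the reported pattern.
model = smf.logit("trust ~ literacy * high_stakes", data=df).fit(disp=0)
print(model.params)
```

In a model like this, the sign flip lives entirely in the interaction term: a positive literacy coefficient with a larger negative literacy-by-stakes coefficient yields more trust among the statistically literate for trivial choices and less for critical ones.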

They also found that older people and men were generally more cautious of algorithms, as were people in highly industrialized nations like Japan, the US, and the UK.

Understanding how and when people trust AI algorithms is essential, particularly as society continues to introduce and adopt machine-learning technologies.

Relationship between data (D), algorithms (A) and artificial intelligence (AI) (ADA for short). Big data is used to feed algorithms, which in turn form the core of AI agents. There are four important aspects to note: (i) big data revolves (in one way or another) around human-related states, processes and events, (ii) such data is the substance of any algorithm, (iii) algorithms are the drivers of AI agents, and (iv) algorithmic/AI behaviors and outputs have implications for how new data is built and how humans (H) relate to ADA technologies in general. H1 and H2 are a subset of humans with specialized skills relevant to ADA. Source: the authors [icons from Font Awesome Free 5.2.0 by @fontawesome–https://fontawesome.com (https://commons.wikimedia.org/wiki/File:Font_Awesome_5_solid_robot.svg) and Mozilla (https://commons.wikimedia.org/wiki/File:Fxemoji_u1F6BB.svg)].

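In code terms, the loop the figure describes could be sketched roughly as follows; every class and value here is invented for illustration and is not from the paper:

```python
# Illustrative toy model of the ADA loop from the figure: data (D) feeds an
# algorithm (A), the algorithm drives an AI agent, and the agent's outputs
# become part of the next round of data. All names here are invented.
from dataclasses import dataclass


@dataclass
class Algorithm:
    weight: float = 0.0

    def train(self, data: list) -> None:
        # (ii) data is the substance of the algorithm
        self.weight = sum(data) / len(data)

    def predict(self) -> float:
        return self.weight


@dataclass
class Agent:
    algorithm: Algorithm

    def act(self) -> float:
        # (iii) the algorithm drives the agent's behavior
        return self.algorithm.predict()


data = [0.2, 0.4, 0.6]  # (i) human-related states, processes, and events
for _ in range(3):
    algo = Algorithm()
    algo.train(data)
    data.append(Agent(algo).act())  # (iv) outputs feed back into the data
print(data)
```

The point of the sketch is the feedback arrow: each round's output changes what the next algorithm learns from, which is why the figure treats data, algorithms, and AI agents as one coupled system rather than three separate pieces.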

AI adoption rates have increased dramatically, with 72% of organizations now using AI in their business.

Lead author and human and artificial cognition expert Dr Fernando Marmolejo-Ramos says the speed at which smart technologies are being used to outsource decisions is outpacing our understanding of how to integrate them into society successfully.

"Algorithms are becoming increasingly influential in our lives, impacting everything from minor choices about music or food, to major decisions about finances, healthcare, and even justice," Dr Marmolejo-Ramos says.

"But the use of algorithms to help make decisions implies that there should be some confidence in their reliability. That's why it's so important to understand what influences people's trust in algorithmic decision-making.

"Our research found that in low-stakes scenarios, such as restaurant recommendations or music selection, people with higher levels of statistical literacy were more likely to trust algorithms.

"Yet, when the stakes were high, for things like health or employment, the opposite was true; those with better statistical understanding were less likely to place their faith in algorithms."

UniSA's Dr Florence Gabriel says there should be a concentrated effort to promote statistical and AI literacy among the general population so that people can better judge when to trust algorithmic decisions.

"An AI-generated algorithm is only as good as the data and coding that it's based on," Dr Gabriel says.

"We only need to look at the recent banning of DeepSeek to grasp how algorithms can produce biased or risky data depending on the content that it was built upon.

"On the flip side, when an algorithm has been developed through a trusted and transparent source, such as the custom-build EdChat chatbot for South Australian schools, it's more easily trusted.

"Learning these distinctions is important. People need to know more about how algorithms work, and we need to find ways to deliver this in clear, simple ways that are relevant to the user's needs and concerns.

"People care about what the algorithm does and how it affects them. We need clear, jargon-free explanations that align with the user's concerns and context. That way, we can help people engage responsibly with AI."

Journal reference:
  • Marrone, R., Korolkiewicz, M., Gabriel, F., Siemens, G., Joksimovic, S., Yamada, Y., Mori, Y., Rahwan, T., Sahakyan, M., Sonna, B., Meirmanov, A., Bolatov, A., Som, B., Ndukaihe, I., Arinze, N. C., Kundrát, J., Skanderová, L., Ngo, V., Nguyen, G., . . . Tejada, J. (2025). Factors influencing trust in algorithmic decision-making: An indirect scenario-based experiment. Frontiers in Artificial Intelligence, 7, 1465605. DOI: 10.3389/frai.2024.1465605, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1465605/full
