Public Preferences for Protective Features in AI Decision-Making: Insights from the Adult Danish Population

In an article published in the journal PLOS ONE, researchers examined the preferences of the adult Danish population regarding protective features of artificial intelligence (AI) systems across diverse decision-making scenarios in the public and commercial sectors. With AI being implemented in an increasing number of domains, understanding public preferences becomes crucial for responsible AI deployment.

Study: Public Preferences for Protective Features in AI Decision-Making: Insights from the Adult Danish Population. Image credit: Summit Art Creations/Shutterstock

Background

AI systems, underpinned by deep learning architectures, are increasingly pivotal as decision-support mechanisms across diverse sectors, prompting a surge in practical implementations. Existing research, predominantly focused on AI in medical contexts, reveals that patients and the public exhibit discernible preferences for specific features concerning AI systems. These preferences revolve around factors like explainability, accuracy, and the role of human decision-makers within the decision chain. Notably, patients often express concerns related to ethics, privacy, data security, bias, accuracy, and the potential detachment of physicians from healthcare processes when AI is involved.

A systematic review of AI ethics in medicine underscores the moderate support for medical AI among patients and their families. Ethical concerns are identified as significant barriers, emphasizing worries about responsibility, privacy, data security, bias, accuracy, and the perceived diminishment of human interactions in healthcare. Patients generally prefer physician involvement in diagnosis, decision-making, and clinical communication.

Crucially, the identified features preferred by patients can be viewed as 'protective,' as they safeguard patient interests by ensuring transparency in AI usage, optimal performance, non-discriminatory outcomes, and a continued role for human professionals in decision-making. While these preferences have been extensively studied in the medical AI domain, this research expanded the inquiry to encompass a broader spectrum of decision-making scenarios in both public and commercial sectors.

By exploring public preferences across different use cases and demographic factors such as age, gender, and education, the study aimed to discern variations in protective feature preferences. Additionally, it investigated the influence of respondents' general attitudes towards AI, trust in human decision-makers, and self-assessed knowledge about AI. This cross-sectional survey, conducted with a representative sample of the adult Danish population, sought to characterize public preferences in detail and thereby inform the responsible development and deployment of AI technologies in diverse societal contexts.

Materials and Methods

An e-questionnaire was designed to explore preferences for AI decision support across eight distinct contexts, encompassing both public and commercial spheres. The decision scenarios included medical diagnostics, early retirement pension, consumer loan approval, police investigation of home burglary, determination of car insurance premiums, ambulance dispatch, issuance of parking tickets, and allocation of a place in a children's nursery. Participants, drawn from Kantar's representative panel of the adult Danish population, received invitations and reminders in December 2021.

The questionnaire commenced with an introduction on AI systems and their decision-making role, followed by context-specific scenarios. Respondents rated the importance of five protective features for each context: knowledge about AI involvement, human responsibility, non-discrimination, human explainability, and system performance comparable to a human decision-maker. Trust in human decision-makers for each context was also assessed.

To mitigate sequencing effects, contexts were presented in a randomized order. The questionnaire also included scales gauging general expectations about the societal effects of AI and self-assessed AI knowledge, and it collected demographic information on gender, age, region, and education level. Statistical analysis was performed in IBM SPSS 29; non-parametric tests were chosen because the response distributions were left-skewed. Mann-Whitney U tests, Kruskal-Wallis tests, Friedman's ANOVA, and Spearman rank correlations were applied, with Bonferroni correction for multiple significance tests.
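For readers who want a concrete sense of what such an analysis involves, the sketch below reproduces the general shape of this non-parametric workflow. It is a minimal, hypothetical example in Python (the study itself used IBM SPSS 29), run on synthetic ratings; the context names, groupings, scale coding, and response distributions are illustrative assumptions, not the study's data.

```python
# Minimal, hypothetical sketch of the non-parametric pipeline described above.
# The study's analysis was run in IBM SPSS 29; this Python version uses
# synthetic Likert-style ratings purely for illustration.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(seed=42)
n = 643  # number of completed responses reported in the study

# Assumed layout: each respondent rates one protective feature (e.g. human
# responsibility) per decision context on a 1-5 scale, 1 = most important.
contexts = ["medical_diagnosis", "consumer_loan", "car_insurance", "ambulance_dispatch"]
df = pd.DataFrame({
    c: rng.choice([1, 2, 3, 4, 5], size=n, p=[0.50, 0.25, 0.15, 0.07, 0.03])
    for c in contexts
})
df["gender"] = rng.choice(["man", "woman"], size=n)  # illustrative grouping variable
df["trust_human"] = rng.integers(1, 6, size=n)       # trust in the human decision-maker, 1-5

# Mann-Whitney U: do ratings for one context differ between two independent groups?
u_stat, p_u = stats.mannwhitneyu(
    df.loc[df.gender == "man", "medical_diagnosis"],
    df.loc[df.gender == "woman", "medical_diagnosis"],
)

# Kruskal-Wallis: differences across more than two independent groups
# (here, respondents grouped by their level of trust in the human decision-maker)
trust_groups = [g["consumer_loan"].to_numpy() for _, g in df.groupby("trust_human")]
h_stat, p_kw = stats.kruskal(*trust_groups)

# Friedman's ANOVA: repeated measures, the same respondents rating every context
chi2, p_fr = stats.friedmanchisquare(*[df[c] for c in contexts])

# Spearman rank correlation: importance rating vs. trust in the human decision-maker
rho, p_sp = stats.spearmanr(df["car_insurance"], df["trust_human"])

# Bonferroni correction across this family of four tests
p_values = np.array([p_u, p_kw, p_fr, p_sp])
p_corrected = np.minimum(p_values * len(p_values), 1.0)
print(dict(zip(["mann_whitney", "kruskal_wallis", "friedman", "spearman"], p_corrected.round(4))))
```

Multiplying each p-value by the number of tests in the family (and capping the result at 1) is the standard Bonferroni adjustment the paper refers to; it keeps the family-wise error rate at the nominal significance level at the cost of reduced statistical power.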

Results

The e-questionnaire, distributed to 900 Kantar panel members, achieved a 71.4% response rate, with 643 respondents completing it. Sample efficiency analysis indicated 90.92% concordance with the desired sample characteristics. The sample was nearly balanced by gender (50.2% men, 49.8% women) and exhibited diverse age and education distributions.

Despite low self-reported AI knowledge (29.8% reporting they knew 'Nothing' and 26.5% 'A little'), respondents consistently emphasized the importance of protective AI features across diverse decision-making contexts. While ratings varied across contexts, mean scores consistently fell below 2 on the rating scale, where lower scores indicate stronger agreement that these features are crucial. Statistically significant differences emerged relative to the 'medical diagnosis' base case, and variations were noted between genders, age groups, and education levels.

Analysis of the general expectation scales revealed a mix of positive and negative sentiments about AI's future societal effects. Negative expectations correlated significantly with stronger endorsement of protective features, indicating a nuanced perspective. Positive correlations were also observed between the summative importance scales and trust in the human decision-maker, which, given the scale coding, is consistent with protective features gaining importance as that trust declined. Greater importance was also associated with lower self-rated AI knowledge in specific contexts (consumer loans, car insurance, and ambulance dispatch).

These findings underscored the nuanced interplay between societal expectations, trust, and personal AI knowledge in shaping preferences for protective features across diverse decision contexts. The study provided valuable insights for tailoring AI implementations to public preferences and for fostering responsible AI development.

Discussion

The high response rate and sample efficiency enhanced the study's credibility, indicating a reasonable approximation to the Danish adult population. While the reliance on hypothetical scenarios is a potential methodological weakness, the contexts presented aligned with respondents' personal or mediated experiences.

The study acknowledged a potential limitation in not exploring AI in contexts with low perceived impact, suggesting a need for further research in such areas. Differences in the perceived importance of protective features across decision contexts were confirmed: the features were rated as consistently important but gained prominence in high-stakes scenarios, notably medical diagnostics.

Demographic trends aligned with expectations: women, older respondents, and those with higher education levels, lower self-assessed AI knowledge, or more negative expectations about AI rated protective features as more important. In Denmark's high-trust society, the importance of protective features increased as trust in the human decision-maker decreased, underscoring the nuanced relationship between trust, societal expectations, and preferences for AI safeguards, a relationship that may be even more pronounced in low-trust societies.

Conclusion

This study revealed a consistent importance assigned to the protective features of AI across diverse decision contexts. While variations existed, especially in high-stakes scenarios such as medical diagnostics, the overall pattern underscored public concern for responsible AI implementation. Demographic influences, trust dynamics, and societal expectations further shaped preferences. These findings provided valuable insights for policymakers and developers seeking to align AI systems with public expectations in nuanced decision-making contexts.

Journal reference:

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

