Expertise Framing Effects on User Behavior: A Study on Algorithm Aversion in AI Advisory

In a paper published in AISeL, researchers examined how labeling affects user behavior when distinguishing between artificial intelligence (AI) and traditional statistical models. They addressed the issue of "AI washing," in which software providers market simplistic statistical models as AI systems.


Through experiments on regression tasks, they found that labeling human advisors with terms suggesting higher expertise increased advice uptake. In contrast, similar labels for algorithmic advisors did not prompt a comparable effect, challenging the assumed benefit of framing systems as AI-based. These results carry significant implications for both research methodology and practice.

Background

The rise of AI as a universal term for intelligent systems mirrors concerns about 'AI washing,' reminiscent of apprehensions surrounding 'greenwashing.' At the same time, 'algorithm aversion' describes a human preference for human advice over potentially superior algorithmic guidance. Conflicting reports on trust in algorithms and inconsistent use of technical labels prompted this study's exploration of how positive or negative expertise-based labels affect user behavior.

Through online experiments that framed both human and algorithmic advisors, the research found no framing effect for algorithms. It did, however, reveal increased advice utilization for experts over novices, suggesting that providers should prioritize functionality over buzzwords like 'AI' when positioning their products.

Exploring Algorithm Aversion: Two Studies

Researchers conducted two distinct experiments to explore different aspects of how framing shapes algorithm aversion. Study One targeted participants with a particular interest in algorithms, employing a 2 x 2 within-subject design whose four conditions manipulated advisor type (human vs. algorithm) and the framing of advisor expertise (high vs. low). This study focused on the nuances of labeling in the context of algorithmic advice, drawing on prior literature reviews to select appropriate labels for both algorithmic and human advisors.

The sample for Study One comprised 127 first-year bachelor's students studying Information Systems at a technology-focused university. A series of regression tasks involving car price estimation formed the experiment's core. Participants acted as judges and received guidance from advisors labeled artificial intelligence, industry experts, statistical models, or students. These labels were intended to elicit varying perceptions of advisor expertise, influencing how participants adjusted their initial estimations. The experimental setup included control measures such as attention checks, exclusion of incomplete data and constant estimations, and controlling for participants' affinity for technology interaction (ATI) as a potential influencing factor.
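To make the design concrete, the minimal sketch below enumerates the four within-subject conditions as the crossing of advisor type and framed expertise, using the advisor labels reported in the study; the code itself is illustrative and not part of the authors' materials.

```python
from itertools import product

# 2 x 2 within-subject design: advisor type crossed with framed expertise.
advisor_types = ("human", "algorithm")
expertise_levels = ("high", "low")

# Labels reported in the study for each cell of the design.
labels = {
    ("human", "high"): "industry expert",
    ("human", "low"): "student",
    ("algorithm", "high"): "artificial intelligence",
    ("algorithm", "low"): "statistical model",
}

for advisor, expertise in product(advisor_types, expertise_levels):
    print(f"{advisor} advisor, {expertise} expertise -> labeled '{labels[advisor, expertise]}'")
```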

The procedure incorporated a judge-advisor system paradigm, allowing participants to adjust their initial estimations based on advisor advice. Each participant encountered different advisor types in a randomized order, with the advice remaining constant across conditions. Real-world data on used cars from an online marketplace formed the basis for these tasks.

The study also assessed participants' perceived expertise of the advisors through Likert-scale ratings. The analysis focused on the weight of advice (WOA), the weight participants assigned to the advisor's recommendation, calculated from the adjustments they made to their initial estimations in response to advisor guidance and analyzed using a linear mixed-effects regression approach.
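The sketch below illustrates this kind of analysis under stated assumptions: it applies the conventional judge-advisor formula, WOA = (final estimate - initial estimate) / (advice - initial estimate), to simulated data standing in for the study's actual responses, and fits a mixed-effects model with a random intercept per participant via statsmodels. The column names, simulated effect sizes, and exact model specification are assumptions rather than the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate hypothetical trial-level data: 40 judges x 4 within-subject conditions.
rows = []
for pid in range(40):
    propensity = rng.normal(0.4, 0.1)  # per-participant advice-taking tendency
    for advisor in ("human", "algorithm"):
        for expertise in ("high", "low"):
            initial = rng.uniform(15000, 25000)          # judge's initial car-price estimate
            advice = initial + rng.uniform(1000, 4000)   # advisor's recommendation
            # Assumed pattern mirroring the reported result: expertise framing
            # raises advice-taking for human advisors only.
            w = propensity + (0.15 if advisor == "human" and expertise == "high" else 0.0)
            final = initial + float(np.clip(w + rng.normal(0, 0.05), 0, 1)) * (advice - initial)
            rows.append(dict(participant=pid, advisor=advisor, expertise=expertise,
                             initial=initial, advice=advice, final=final))
df = pd.DataFrame(rows)

# Conventional judge-advisor weight of advice:
# WOA = (final - initial) / (advice - initial); 0 = advice ignored, 1 = fully adopted.
df["WOA"] = (df["final"] - df["initial"]) / (df["advice"] - df["initial"])

# Linear mixed-effects regression: fixed effects for advisor type, framed
# expertise, and their interaction; random intercept per participant.
model = smf.mixedlm("WOA ~ advisor * expertise", data=df, groups=df["participant"])
print(model.fit().summary())
```

In such a model, the advisor-by-expertise interaction term captures whether the expertise-framing effect differs between human and algorithmic advisors, which is the contrast at the heart of the study.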

Study Two aimed to broaden the investigation into algorithm aversion by incorporating a more diverse population. This decision stemmed from prior research indicating potential variations in algorithm aversion across social groups. Previous studies highlighted how prior experiences with algorithms could significantly impact individuals' evaluations of them.

Similarly, prior studies reported correlations between self-reported prior experience with algorithms and decision-making tendencies when choosing between human and algorithmic advisors. These insights suggest that different social groups exhibit varying inclinations toward seeking advice from algorithms. Study Two therefore explored potential differences in algorithm aversion across diverse social groups while maintaining the same core procedures as Study One, thereby enhancing the generalizability of the findings.

Framing Impact on Advice Utilization

The results revealed consistent patterns across Study One and Study Two. Study One showed a noticeable trend: higher advice utilization (WOA) in the expert-framed conditions (AI and industry experts) than in the non-expert conditions (statistical models and students). Study Two exhibited similar median WOA values for artificial intelligence, statistical models, and industry experts, indicating higher advice utilization, while the student condition showed a lower median. Participants also appeared more likely to rely solely on the advice in Study Two than in Study One.

The regression analyses in both studies provided insights into the influence of framing on advice-taking behavior. In Study One, for human advisors, higher expertise (the industry expert label) was associated with significantly greater advice utilization than lower expertise (the student label).

For algorithmic advisors, by contrast, there was a numerical difference favoring higher expertise (AI over statistical models), but it lacked statistical significance. Study Two echoed the human expert effect, demonstrating a significant increase in WOA for industry experts. For algorithmic advisors, however, there was again no statistically significant difference between high- and low-expertise labels. These results collectively suggest a nuanced pattern: expertise framing influenced advice utilization for human advisors, but no comparable expertise effect was evident for algorithmic advisors in these experimental settings.

Conclusion

To sum up, the experiments highlighted a stark contrast in the impact of expertise framing on advice utilization between human and algorithmic advisors. While framing higher expertise increased advice-taking from human advisors, no comparable effect emerged for algorithmic ones.

Although the AI label showed a numerical advantage over "statistical models," the difference was not statistically significant, suggesting a nuanced response to framed expertise. These findings underscore the intricate nature of how individuals perceive and act on expertise cues in decision-making, and they signal a need for deeper exploration of responses to expertise framing across different advisory systems.


Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.


