Impact of Social Bot Exposure on Perceptions and Policy Preferences

A study published in the journal Scientific Reports investigates how exposure to social bots shapes people's perceptions of these automated accounts and their preferences for regulating them on social media platforms.

Study: Impact of Social Bot Exposure on Perceptions and Policy Preferences. Image credit: sdecoret/Shutterstock

Social bots are automated social media accounts controlled by algorithms that impersonate and interact with real users. They have become deeply entrenched across popular platforms such as Twitter, Facebook, and Instagram, and are deployed for various purposes, including spreading propaganda, running misinformation campaigns, astroturfing (manufacturing the appearance of grassroots support) for politicians and brands, artificially inflating follower counts, and online harassment.

While estimates vary, previous studies suggest that 9-15% of Twitter accounts could be bots. Their capacity to deceive people, influence opinions at scale, and manipulate trending algorithms threatens the integrity of digital public spheres. Heightened public skepticism and demands for regulation have emerged amid rising bot activity. However, little empirical research has examined how exposure to bots may distort social media users' perceptions of, and policy preferences regarding, these influential automated accounts.

About the Study

The study conducted experiments to investigate three key questions about social bots:

  • How accurately can people estimate bot prevalence in social media populations?
  • How reliably can people assess their ability to recognize bot accounts?
  • Do people believe bots have a more substantial influence on others than themselves?

Additionally, it explored how bot exposure may alter opinions about regulating bot propagation. Researchers designed bot recognition tasks displaying Twitter profiles with varying levels of ambiguity between authentic users and bots. Over 900 participants labeled profiles as bots or humans, and surveys administered before and after this exposure captured their perceptions of bot prevalence, self-efficacy, vulnerability, and policy stances.

Study Details

The study used computational Botometer scores together with manual expert annotations to curate profile datasets spanning low to high bot-human ambiguity. Two experiments utilized different combinations of political and apolitical, unambiguous and ambiguous profiles.
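The paper's curation pipeline is not reproduced here, but the underlying idea can be sketched in a few lines of Python. In the sketch below, the handles, bot-likelihood scores, and the 0.35 cutoff are all hypothetical illustrations; it assumes only that each profile carries a bot score on a 0-1 scale, as Botometer provides, with scores near 0.5 marking the most ambiguous accounts.

```python
# Minimal sketch of score-based ambiguity bucketing (hypothetical
# handles, scores, and threshold; not the authors' actual pipeline).

def ambiguity(score: float) -> str:
    """Classify a bot-likelihood score by its distance from 0.5.

    Scores near 0 (clearly human) or near 1 (clearly bot) make a
    profile unambiguous; scores near 0.5 make it ambiguous.
    """
    return "low" if abs(score - 0.5) > 0.35 else "high"

# Hypothetical profiles with precomputed bot-likelihood scores
# (e.g., from Botometer, which rates accounts on a 0-1 scale).
profiles = {
    "@clearly_a_bot": 0.93,
    "@ordinary_user": 0.07,
    "@borderline_account": 0.52,
}

for handle, score in profiles.items():
    print(f"{handle}: score={score:.2f} -> {ambiguity(score)} ambiguity")
```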

Participants labeled profiles as bots or humans without receiving any feedback on their accuracy. Surveys administered before and after the task measured prevalence estimates, self-assessed skill in recognizing bots, the third-person gap between perceived bot influence on self versus others, and regulation preferences.

The pooled analysis compared perceptions and preferences across three conditions:

  • low-ambiguity mixed profiles
  • high-ambiguity mixed profiles
  • high-ambiguity political profiles

These comparisons shed light on how ambiguity levels and profile partisanship shape the effects of exposure.

Key Findings

Participants substantially overestimated bot prevalence before exposure, averaging almost 32% compared to expert figures of around 10-15%. This inflated perception significantly increased to 38% after exposure. These results suggest that prevalence biases are easily triggered and resemble a "mean-world" distrust of social media.

Initially, participants expressed high confidence in recognizing bots, rating their skill around 4.8 out of 7. Their self-assessed skill significantly declined to 4.2 following exposure, even though no accuracy feedback was provided. While pre-exposure efficacy weakly predicted labeling performance, post-exposure self-ratings became uncorrelated with accuracy. Ironically, the most overconfident participants performed worst, exhibiting the Dunning-Kruger effect.
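This decoupling can be made concrete with a short, self-contained sketch: correlate self-rated skill with labeling accuracy before and after exposure. The ratings and accuracies below are invented to mimic the pattern the study reports (a positive pre-exposure relation that breaks down afterward), not the study's actual data.

```python
# Illustration of the reported decoupling between confidence and
# accuracy. All numbers below are hypothetical, chosen only to mimic
# the qualitative pattern the study describes.
from statistics import correlation  # Python 3.10+

# Self-rated skill (1-7) and labeling accuracy (fraction correct).
pre_confidence = [5, 6, 4, 7, 5, 6, 3, 5]
pre_accuracy = [0.70, 0.75, 0.60, 0.80, 0.65, 0.70, 0.55, 0.70]

post_confidence = [4, 5, 3, 6, 4, 5, 3, 4]
post_accuracy = [0.70, 0.50, 0.65, 0.45, 0.60, 0.55, 0.70, 0.50]

print(f"pre-exposure  r = {correlation(pre_confidence, pre_accuracy):.2f}")
print(f"post-exposure r = {correlation(post_confidence, post_accuracy):.2f}")
```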

Before exposure, participants believed bots influenced others significantly more than themselves, indicative of a typical "third-person effect" bias. Exposure strengthened perceived influence on both self and others while further widening the self-other gap. Feelings of personal immunity weakened, while perceptions of others' vulnerability heightened after interactions with bots.
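As an illustration, the third-person gap reduces to simple arithmetic over survey responses: perceived influence on others minus perceived influence on oneself, averaged across participants. The 1-7 ratings below are hypothetical.

```python
# Sketch of the third-person effect metric: perceived bot influence
# on others minus perceived influence on oneself, averaged across
# participants. The 1-7 ratings below are hypothetical.
ratings = [
    {"self": 3, "others": 5},
    {"self": 2, "others": 6},
    {"self": 4, "others": 5},
]

gaps = [r["others"] - r["self"] for r in ratings]
mean_gap = sum(gaps) / len(gaps)
# A positive gap means others are seen as more influenced than oneself.
print(f"mean third-person gap = {mean_gap:.2f}")
```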

Preferences for strict bot regulation increased markedly from 27% to 39% following exposure. This shift was associated with declining self-efficacy in bot identification and escalating perceptions of bots' influence on people in general. The study suggests that exposure stokes reactive policy sentiments among social media users.

Implications

The experiments reveal that even minimal bot exposure severely distorts perceptions of bots' prevalence and agency. The ease with which these biases were triggered highlights the risks of excessive automation in communication ecosystems. As social media penetrates everyday life, people's susceptibility to bot manipulation is closely tied to their attitudes toward algorithm governance and oversight. The study cautions that regulation preferences exhibit partly irrational dynamics, stemming more from uncertainty and group-attribution biases than from rational assessments of technological risks and ethical tradeoffs.

Conclusion

This research underscores how the pervasive infiltration of social bots routinely triggers multiple perceptual distortions rooted in common cognitive limitations and egocentric predispositions. It highlights an urgent need to improve public awareness, literacy, and objectivity regarding the growing role of automated actors that occupy influential network positions and target vulnerabilities in human psychology.

Addressing the complex social bot phenomenon requires multidimensional initiatives spanning technological countermeasures, policy deterrence of malicious deployments, user empowerment through education, and ongoing investigation of societal impacts as human-bot interactions intensify amid the accelerating infusion of AI into social technologies.


Written by

Aryaman Pattnayak

Aryaman Pattnayak is a Tech writer based in Bhubaneswar, India. His academic background is in Computer Science and Engineering. Aryaman is passionate about leveraging technology for innovation and has a keen interest in Artificial Intelligence, Machine Learning, and Data Science.

