A study published in the journal Scientific Reports investigates how exposure to social bots shapes people's perceptions of, and policy preferences regarding, the automated accounts proliferating on social media platforms.
Social bots are automated social media accounts controlled by algorithms that impersonate and interact with real users. They have become deeply entrenched across popular platforms such as Twitter, Facebook, and Instagram, and are deployed for a range of purposes: spreading propaganda and misinformation, astroturfing manufactured support for politicians and brands, artificially inflating follower counts, and conducting online harassment.
While estimates vary, previous studies suggest that 9-15% of Twitter accounts could be bots. Their capacity to deceive people, influence opinions at scale, and manipulate trending algorithms threatens the integrity of digital public spheres. Heightened public skepticism and demands for regulation have emerged amid rising bot activity. However, little empirical research has examined how exposure to bots may distort social media users' perceptions of, and policy preferences regarding, these influential automated accounts.
About the Study
The study conducted experiments to investigate three key questions about social bots:
- How accurately can people estimate bot prevalence in social media populations?
- How reliably can people assess their ability to recognize bot accounts?
- Do people believe bots have a more substantial influence on others than themselves?
Additionally, it explored how bot exposure may alter opinions about regulating their propagation. Researchers designed bot recognition tasks displaying Twitter profiles with varying ambiguity between authentic users and bots. Over 900 participants labeled profiles as bots or humans, and surveys administered before and after this exposure captured their perceptions of bot prevalence, self-efficacy, vulnerability, and policy stances.
Study Details
The study used computational Botometer scores and manual expert annotations to curate low-to-high bot-human ambiguity profile datasets. Two experiments utilized different combinations of political and apolitical, unambiguous and ambiguous profiles.
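The paper details its own curation pipeline; purely as an illustrative sketch (with hypothetical profile records, a placeholder bot_score standing in for a Botometer-style score in [0, 1], and arbitrary thresholds), low- and high-ambiguity pools might be separated along these lines:

```python
# Illustrative sketch only -- not the authors' curation pipeline.
# Each profile is a hypothetical record with a `bot_score` in [0, 1]
# (standing in for a Botometer-style score) and an expert `label`.

def split_by_ambiguity(profiles, low=0.2, high=0.8):
    """Partition profiles into unambiguous and ambiguous pools.

    Scores near 0 or 1 are treated as clearly human or clearly bot;
    scores in between are ambiguous. Thresholds are arbitrary placeholders.
    """
    unambiguous, ambiguous = [], []
    for profile in profiles:
        if profile["bot_score"] <= low or profile["bot_score"] >= high:
            unambiguous.append(profile)
        else:
            ambiguous.append(profile)
    return unambiguous, ambiguous


profiles = [
    {"handle": "@local_news_desk", "bot_score": 0.05, "label": "human"},
    {"handle": "@promo_blast_4711", "bot_score": 0.93, "label": "bot"},
    {"handle": "@borderline_account", "bot_score": 0.55, "label": "bot"},
]
clear, unclear = split_by_ambiguity(profiles)
print(len(clear), "unambiguous,", len(unclear), "ambiguous")
```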
Participants were assigned to label profiles as bots or humans with no feedback on accuracy. Surveys before and after measured prevalence estimates, self-assessed skill in recognizing bots, the third-person gap between bot influence on self versus others, and regulation preferences.
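Because no feedback was given during the task, accuracy could only be scored afterwards against the expert labels. A minimal sketch of such scoring, using made-up trial records rather than the study's data:

```python
# Minimal scoring sketch with fabricated trials -- not the study's analysis code.
# Each trial pairs a participant's guess with the expert ground-truth label.

def labeling_accuracy(trials):
    """Fraction of profiles classified correctly."""
    correct = sum(1 for t in trials if t["guess"] == t["label"])
    return correct / len(trials)


trials = [
    {"guess": "bot", "label": "bot"},
    {"guess": "human", "label": "bot"},    # missed bot
    {"guess": "human", "label": "human"},
    {"guess": "bot", "label": "human"},    # false alarm
]
print(f"accuracy = {labeling_accuracy(trials):.2f}")  # 0.50
```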
The pooled analysis compared perceptions and preferences across three conditions:
- low ambiguity mixed profiles
- high ambiguity mixed profiles
- high ambiguity political profiles
These comparisons shed light on how profile ambiguity and partisanship shape the effects of exposure.
Key Findings
Participants substantially overestimated bot prevalence before exposure, averaging almost 32% compared to expert figures of around 10-15%. This inflated perception significantly increased to 38% after exposure. These results suggest that prevalence biases are easily triggered and resemble a "mean-world" distrust of social media.
Initially, participants expressed high confidence in recognizing bots, rating themselves around 4.8 out of 7. Their self-assessed skill declined significantly to 4.2 following exposure, during which no feedback was provided. While pre-exposure efficacy weakly predicted labeling performance, post-exposure self-ratings became uncorrelated with accuracy. Ironically, the most overconfident participants performed worst, a pattern consistent with the Dunning-Kruger effect.
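To illustrate the reported decoupling of confidence from performance, one could correlate self-ratings with labeling accuracy before and after exposure. The sketch below uses fabricated numbers (not the study's data) in which the post-exposure ratings are deliberately unrelated to accuracy:

```python
# Fabricated illustration of the confidence-accuracy correlation, not study data.
import numpy as np

accuracy = np.array([0.55, 0.60, 0.70, 0.50, 0.72, 0.58])    # labeling accuracy
pre_confidence = np.array([4.5, 5.0, 5.5, 4.0, 6.0, 4.8])    # self-rating, 1-7
post_confidence = np.array([4.4, 3.9, 4.0, 4.5, 3.8, 4.6])   # unrelated to accuracy

r_pre = np.corrcoef(pre_confidence, accuracy)[0, 1]
r_post = np.corrcoef(post_confidence, accuracy)[0, 1]
print(f"pre-exposure  confidence-accuracy r = {r_pre:+.2f}")
print(f"post-exposure confidence-accuracy r = {r_post:+.2f}")
```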
Before exposure, participants believed bots influenced others significantly more than themselves, indicative of a typical "third-person effect" bias. Exposure strengthened perceptions of bot influence on both self and others, while further widening the self-other gap. Feelings of personal immunity weakened, while perceptions of others' vulnerability heightened, after interactions with bots.
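The third-person gap itself is simply perceived influence on others minus perceived influence on oneself; a toy illustration with hypothetical 7-point ratings (not the study's data):

```python
# Toy illustration of the third-person gap, not the study's data.
# Hypothetical 1-7 ratings of how much bots influence "you" vs. "other people",
# collected before and after the labeling task.

def third_person_gap(self_rating, others_rating):
    """Positive values mean others are seen as more influenced than oneself."""
    return others_rating - self_rating


pre = {"self": 3.1, "others": 5.0}
post = {"self": 3.6, "others": 5.9}   # both ratings rise, but the gap widens

print(f"pre-exposure gap:  {third_person_gap(pre['self'], pre['others']):.1f}")   # 1.9
print(f"post-exposure gap: {third_person_gap(post['self'], post['others']):.1f}")  # 2.3
```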
Preferences for strict bot regulation increased markedly from 27% to 39% following exposure. This shift was associated with declining self-efficacy in bot identification and escalating perceptions of bots' influence on people in general. The study suggests that exposure stokes reactive policy sentiments among social media users.
Implications
The experiments reveal that even minimal bot exposure severely distorts perceptions of bot prevalence and influence. These readily triggered biases highlight the risks of excessive automation in communication ecosystems. As social media penetrates everyday life, people's susceptibility to bot manipulation becomes closely tied to their attitudes toward algorithm governance and oversight. The study cautions that regulation preferences exhibit partly irrational dynamics, stemming more from uncertainty and group attribution biases than from rational assessments of technological risks and ethical tradeoffs.
Conclusion
This research underscores how the pervasive infiltration of social bots routinely triggers multiple perceptual distortions rooted in common cognitive limitations and self-serving predispositions. It highlights an urgent need to improve public awareness, literacy, and objectivity regarding the growing role of automated actors that occupy influential network positions and target vulnerabilities in human psychology.
Addressing the complex social bot phenomenon will require multidimensional initiatives: technological countermeasures, policies that deter malicious deployments, user empowerment through education, and continued investigation of societal impacts as human-bot interactions intensify alongside the accelerating infusion of AI into social technologies.
Journal reference:
- Yan, H. Y., Yang, K.-C., Shanahan, J., & Menczer, F. (2023). Exposure to social bots amplifies perceptual biases and regulation propensity. Scientific Reports, 13(1), 20707. https://doi.org/10.1038/s41598-023-46630-x, https://www.nature.com/articles/s41598-023-46630-x