Propaganda's New Face: AI's Persuasive Power

In a paper published in the journal PNAS Nexus, researchers conducted a preregistered survey experiment with United States (US) respondents to explore the potential of artificial intelligence (AI), specifically large language models, to generate persuasive propaganda. The study compared the persuasiveness of news articles authored by foreign propagandists with content generated by generative pre-trained transformer 3 (GPT-3) Davinci, one such large language model.


Surprisingly, the findings revealed that GPT-3 could produce highly persuasive text, as indicated by participants' agreement with the propaganda's central arguments. The researchers also examined whether modest human intervention, requiring little more than fluency in English, could enhance the persuasiveness of the propaganda. They found that editing the prompt provided to GPT-3 and curating its output significantly boosted persuasiveness, sometimes to levels comparable with the original propaganda. These results underscore the potential for propagandists to leverage AI to craft compelling content with minimal effort.

Related Work

Past work has highlighted the prevalence of covert online propaganda campaigns run by governments and other actors. These campaigns persist across platforms despite efforts to combat them, and researchers have raised concerns that new AI tools, particularly advanced language models capable of generating text, could amplify such propaganda. While few studies have directly assessed the risks of AI-generated propaganda, previous research has explored related aspects such as the credibility and recognizability of AI-generated content.

AI-Generated Propaganda Study

Researchers began by selecting six articles, each 151 to 308 words long, that had been identified as part of covert propaganda campaigns from Iran or Russia. These articles served as benchmarks. They then tasked GPT-3 with generating articles on the same topics, providing the model with sentences from the original articles and examples of unrelated propaganda articles to guide the style and structure for each topic. GPT-3 generated three articles per topic to avoid reliance on any single output, and researchers discarded outputs that fell below or exceeded set character limits. After this process, they compared the persuasiveness of the original propaganda with that of the GPT-3-generated articles.
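The generation pipeline can be pictured roughly as follows. This is a minimal sketch, assuming a hypothetical generate_article wrapper around a GPT-3 Davinci completion call; the prompt template and character limits are placeholders, since the article does not give those details.

```python
# Illustrative sketch of the generation-and-filtering workflow described above.
# The prompt template, character limits, and model wrapper are assumptions,
# not details taken from the study.

MIN_CHARS, MAX_CHARS = 800, 2500   # hypothetical length bounds
OUTPUTS_PER_TOPIC = 3              # the study generated three candidates per topic

def generate_article(prompt: str) -> str:
    """Placeholder for a call to GPT-3 Davinci (e.g., a text-completion endpoint)."""
    raise NotImplementedError

def build_prompt(seed_sentences: list[str], style_examples: list[str]) -> str:
    # Seed sentences come from the original propaganda article on the topic;
    # unrelated propaganda articles are included to set tone and structure.
    examples = "\n\n".join(style_examples)
    seeds = " ".join(seed_sentences)
    return f"{examples}\n\nWrite a news article that develops the following points:\n{seeds}\n\nArticle:"

def candidates_for_topic(seed_sentences: list[str], style_examples: list[str]) -> list[str]:
    prompt = build_prompt(seed_sentences, style_examples)
    outputs = [generate_article(prompt) for _ in range(OUTPUTS_PER_TOPIC)]
    # Discard outputs outside the character limits, as the researchers did.
    return [text for text in outputs if MIN_CHARS <= len(text) <= MAX_CHARS]
```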

To measure persuasiveness, the researchers distilled the main point of each original propaganda article into a clear thesis statement and asked respondents to rate their agreement with it. A control group rated agreement with thesis statements on randomly selected topics without reading any articles. The remaining respondents read original propaganda on two topics and a GPT-3-generated article on a third, then rated their agreement with the corresponding thesis statements.
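To make the two outcome measures concrete, the sketch below shows one plausible coding, assuming a five-point agreement scale; the actual response options and coding used in the study are not specified in the article.

```python
# Hypothetical illustration of the two agreement measures. The response scale
# and coding are assumptions, not details from the study.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neither": 3,
          "agree": 4, "strongly agree": 5}

def percent_agreement(responses: list[str]) -> float:
    """Share of respondents who agree or strongly agree with the thesis statement."""
    agrees = sum(1 for r in responses if LIKERT[r] >= 4)
    return 100 * agrees / len(responses)

def scaled_agreement(responses: list[str]) -> float:
    """Mean position on the agreement scale, rescaled to the 0-1 range."""
    scores = [(LIKERT[r] - 1) / 4 for r in responses]
    return sum(scores) / len(scores)

control = ["disagree", "neither", "agree", "disagree"]
treated = ["agree", "strongly agree", "agree", "neither"]
print(percent_agreement(control), percent_agreement(treated))
print(scaled_agreement(control), scaled_agreement(treated))
```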

The study surveyed 8,221 US adults via the Lucid platform, ensuring geographic and demographic representativeness. Researchers excluded respondents who failed attention checks or completed the survey in under three minutes. The analysis focused on two outcome measures, percent agreement and scaled agreement, each regressed on treatment indicators together with a comprehensive set of indicator variables for each issue and article.
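That regression setup might look roughly like the sketch below, assuming an ordinary least squares fit with indicator terms for treatment condition and issue; the column names, toy data, and exact specification are illustrative assumptions rather than the study's actual code.

```python
# Rough approximation of the described analysis: agreement regressed on
# treatment condition with indicator (fixed-effect) terms for each issue.
import pandas as pd
import statsmodels.formula.api as smf

# Toy data for illustration only.
df = pd.DataFrame({
    "agree": [1, 0, 1, 1, 0, 1],                       # 1 = agrees with thesis statement
    "condition": ["control", "control", "original",
                  "original", "gpt3", "gpt3"],          # which article, if any, was read
    "issue": ["drones", "drones", "sanctions",
              "sanctions", "drones", "sanctions"],      # topic of the thesis statement
})

model = smf.ols("agree ~ C(condition, Treatment('control')) + C(issue)", data=df).fit()
print(model.params)  # treatment coefficients relative to the no-article control group
```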

Ethical considerations were paramount, with the study approved by Stanford University's Institutional Review Board and vetted by an AI-specific Ethics Review Board. Participants provided informed consent, and researchers took precautions to mitigate the risk of respondents believing falsehoods propagated by the articles. The study aimed to assess the potential risks posed by AI-generated propaganda, recognizing that the societal benefit of understanding these risks outweighed the possibility of inadvertently providing propagandists with new tactics.

Persuasiveness and Strategies

To establish a baseline, the researchers compared the effect of reading original propaganda with reading no article on the same topic (the control condition). The original propaganda proved highly persuasive, nearly doubling agreement with the thesis statement relative to the control group. GPT-3-generated propaganda was similarly persuasive, significantly increasing agreement with the thesis statement over the control group. GPT-3 output was slightly less compelling than the original propaganda, though still highly persuasive across social groups and demographics.

Propagandists could improve efficiency through human-machine teaming, in which human curators review GPT-3 output and select high-quality articles that align with the intended message. When researchers removed GPT-3 outputs that did not advance the intended claim, agreement rose enough to make GPT-3 as persuasive as the original propaganda. A second strategy, having humans edit the prompts given to GPT-3, also produced persuasiveness comparable to the original propaganda. Combining curation with prompt editing improved the persuasiveness of GPT-3-generated propaganda further, surpassing the original propaganda in some cases.

To address concerns that the persuasiveness measure might favor GPT-3, the researchers also compared it with the original propaganda on credibility and writing style, where GPT-3 performed as well as, if not better than, the originals. GPT-3-generated content could therefore blend seamlessly into online information environments, potentially surpassing the quality of existing foreign covert propaganda campaigns. Moreover, with language models continually improving, future AI-generated propaganda may perform even better.

Conclusion

In summary, the experiment demonstrated that language models could produce text nearly as persuasive as real-world propaganda for US audiences, with human-machine teaming strategies further enhancing persuasiveness. These findings represent a conservative estimate as newer language models continue to improve.

Future research avenues include exploring the effects of AI-generated propaganda across various issues and developing strategies to counteract its potential misuse, such as improving detection methods and implementing behavioral interventions. Additionally, investigating the impact of labeling AI-generated content on user engagement and credibility perception remains an essential area for further study.

Journal reference:
Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3(2), pgae034.

Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

