In an article published in the journal PNAS Nexus, researchers investigated the potential misuse of generative artificial intelligence (AI), exemplified by ChatGPT (Chat Generative Pre-trained Transformer), in creating a highly scalable "manipulation machine" for political microtargeting.
Four studies demonstrated that personalized political ads, tailored to individuals' personalities, are more effective than non-personalized ads, and that such ads can be automatically generated and validated at scale. The findings underscored the ethical and policy concerns surrounding the use of AI and microtargeting to tailor political messages to individuals' personality traits.
Background
The study was motivated by pivotal events in 2016, including the Brexit referendum and Donald Trump's election, and by the controversial role of Cambridge Analytica in global political campaigns. Cambridge Analytica claimed success in influencing voters by exploiting psychological vulnerabilities, extracting hidden psychological attributes from online behavior and personal data to deliver personalized messages. However, the actual impact of microtargeting remains uncertain, with conflicting evidence on its success.
The research responded to a changing landscape in which the emergence of generative AI, such as ChatGPT, has raised concerns that large language models (LLMs) could amplify microtargeting. The objective was to evaluate the effectiveness of a "manipulation machine" that uses generative AI to automate the creation of political ads personalized to individual personalities. Acknowledging divergent views on microtargeting's efficacy, the authors conducted four empirical studies to ascertain whether personality-congruent political ads outperform generic ones, and also demonstrated the automated generation and validation of such ads. By examining the intersection of generative AI, personality inference, and political microtargeting, the researchers contributed to understanding the implications of rapidly advancing technology for political persuasion strategies.
Methods and materials
The authors recruited participants from a United Kingdom-based sample for four experiments: studies one-a, one-b, two-a, and two-b. Each study lasted approximately six minutes, and compensation varied (£1.80 for one-a; £0.90 for one-b, two-a, and two-b). Each participant evaluated 10 Facebook-format ads, rating their persuasiveness on a one-to-five Likert scale using six adapted items. Concerns about self-reporting bias were addressed by focusing on attitudes and ideologies.
Studies two-a and two-b featured low- and high-openness ad variations, randomly assigned. For these studies, GPT-3 (two-a) and ChatGPT (two-b) were prompted with a definition of openness to experience and instructed to rephrase each ad for individuals high or low in openness. The structured prompt comprised the openness definition, the instruction, and the original ad; the ChatGPT prompt additionally specified that the model should not oversell but make the ad slightly more persuasive for high- or low-openness individuals. This prompting strategy systematically created two versions of each ad, allowing the authors to assess the impact of personality congruence on political ad effectiveness; a rough reconstruction of such a pipeline is sketched below.
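The following Python sketch illustrates how a prompting pipeline of this kind might be implemented with the OpenAI API. The prompt wording, openness definition, model name, and example ad are illustrative assumptions that paraphrase the study's description; they do not reproduce the paper's exact prompt.

```python
# Illustrative reconstruction of the study's prompting strategy; the exact
# wording used in the paper may differ. Requires the `openai` package and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Assumed paraphrase of a standard Big Five definition, not the paper's text.
OPENNESS_DEFINITION = (
    "Openness to experience is a personality trait characterized by "
    "curiosity, imagination, and a preference for novelty and variety; "
    "people low in openness prefer the familiar, conventional, and concrete."
)

def rephrase_ad(ad_text: str, level: str) -> str:
    """Ask the model to rewrite an ad for a high- or low-openness audience."""
    prompt = (
        f"{OPENNESS_DEFINITION}\n\n"
        f"Rephrase the following political ad so it is slightly more "
        f"persuasive for a person {level} in openness to experience. "
        f"Do not oversell.\n\nAd: {ad_text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT version in the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Each original ad is passed through twice, once per openness level,
# yielding the two variations evaluated in the experiments.
high_version = rephrase_ad("Join us in building a fairer future.", "high")
low_version = rephrase_ad("Join us in building a fairer future.", "low")
```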
Results
The study aimed to investigate the impact of personality-congruent political messages on perceived persuasion using real political ads. In study one, 10 ads were selected from 1,552 political ads published on Facebook between December 2019 and December 2021, and participants (440 in one-a and 804 in one-b) rated their persuasiveness. A predictive model generated an "openness score" for each ad, and a matching score was calculated between each participant's openness and each ad's openness score. A linear mixed model revealed a significant matching effect, indicating that deviations from personality matching reduced perceived message persuasiveness.
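A minimal Python sketch of this style of matching analysis is shown below, using synthetic stand-in data. The column names, data-generating values, and the simple model specification are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: does persuasiveness drop as the gap between a participant's
# openness and an ad's openness score grows? Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_ads = 50, 10
p_open = rng.uniform(1, 5, n_participants)  # participants' openness scores
ad_open = rng.uniform(1, 5, n_ads)          # ads' model-derived openness scores

rows = []
for p in range(n_participants):
    for a in range(n_ads):
        mismatch = abs(p_open[p] - ad_open[a])
        rows.append({
            "participant_id": p,
            "ad_id": a,
            "mismatch": mismatch,
            # Built-in negative matching effect plus noise, for illustration.
            "persuasiveness": 3.5 - 0.3 * mismatch + rng.normal(0, 0.5),
        })
df = pd.DataFrame(rows)

# Linear mixed model with random intercepts per participant.
model = smf.mixedlm("persuasiveness ~ mismatch", data=df,
                    groups=df["participant_id"]).fit()
print(model.summary())
```

A significantly negative mismatch coefficient corresponds to the reported matching effect: the further an ad's openness score sits from a participant's own openness, the lower the perceived persuasiveness.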
Study two explored the automation of this process using generative AI, specifically GPT-3 and ChatGPT. The models were prompted to rephrase ads for individuals high or low in openness. The generated ads were evaluated using the same approach, and results showed a significant matching effect in study two-b (ChatGPT), indicating that automated generative models could effectively produce personality-congruent ads at scale.
The study demonstrated that both human-rated and AI-generated ads aligned with personality traits, supporting the idea that generative AI could facilitate manipulative microtargeting on a large scale. Despite a marginally non-significant result in study two-a (GPT-3), the overall findings suggested the potential for AI-driven, automated generation of persuasive political messages tailored to individuals' personality traits. The results emphasized the need for ethical considerations and regulation in the use of such technology for political advertising.
Discussion
The study revealed that political microtargeting, automated through generative AI such as ChatGPT, was effective, showing consistent albeit small effect sizes across the four studies. Despite the modest impact at the individual level, the potential influence at scale is significant, especially in elections. The closed-source nature of OpenAI's products, including ChatGPT, raised concerns about transparency; however, an algorithmic validation using an open-source model yielded similar results.
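The article does not detail that validation, but one plausible way to score ads with an open-source model is zero-shot classification, sketched below. The model choice, candidate labels, and example ad are illustrative assumptions, not the study's actual method.

```python
# Hypothetical open-source check: estimate whether an ad's framing targets
# high or low openness using zero-shot NLI classification. This illustrates
# the kind of validation described, not the paper's exact procedure.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

ad = "Discover bold new ideas for our city's future."
result = classifier(
    ad,
    candidate_labels=[
        "appeals to curiosity and novelty (high openness)",
        "appeals to tradition and familiarity (low openness)",
    ],
)
# The top-ranked label and its score indicate the ad's inferred targeting.
print(result["labels"][0], result["scores"][0])
```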
The study focused on openness as a personality factor but suggested that AI-generated microtargeting messages could extend to other traits and even to moral reframing of political texts. Acknowledging the democratization of this capability, the study warned of potential misuse and the need for safeguards. While openly discussing manipulative strategies raises concerns, the authors argued that providing evidence is crucial for informed decision-making. They suggested interventions to enhance user awareness and promote a fair digital landscape that prioritizes transparency and empowerment over corporate profit.
Conclusion
The research, examining the effectiveness and ethical implications of AI-driven political microtargeting, found that political ads personalized to personality traits were more persuasive than generic ones. Four studies, using both real and AI-generated ads, demonstrated consistent albeit small effects. The closed-source nature of the AI models raised transparency concerns, emphasizing the need for ethical considerations and regulation. The findings underscored the potential misuse of generative AI in political advertising and urged interventions to enhance user awareness and promote a fair digital landscape prioritizing transparency over corporate interests.