AI’s Hidden Bias: Study Finds ChatGPT Skews Left in Text and Images

A new study uncovers how ChatGPT systematically aligns with left-wing perspectives while selectively refusing right-leaning viewpoints, raising questions about AI transparency, fairness, and its role in shaping public discourse.

Research: Assessing political bias and value misalignment in generative artificial intelligence. Image Credit: Lightspring / Shutterstock

Generative AI, a technology developing at breakneck speed, raises concerns about risks to public trust and democratic values, according to a study led by the University of East Anglia (UEA).

Conducted in collaboration with researchers from the Getulio Vargas Foundation (FGV) and Insper, both in Brazil, the study showed that ChatGPT exhibits biases in its text and image outputs, leaning toward left-wing political values on most themes while occasionally aligning with conservative viewpoints. This raises questions about fairness and accountability in its design.

The study revealed that ChatGPT was more likely to produce left-leaning content while occasionally limiting engagement with certain right-leaning themes. This imbalance across political perspectives underscores how such systems can distort public discourse and exacerbate societal divides.

Dr. Fabio Motoki, a Lecturer in Accounting at UEA's Norwich Business School, is the lead researcher on the paper Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence, published today in the Journal of Economic Behavior & Organization.

Dr. Motoki said, "Our findings suggest that generative AI tools are far from neutral. They reflect biases that could unintentionally shape perceptions and policies."

As AI becomes an integral part of journalism, education, and policymaking, the study calls for transparency and regulatory safeguards to ensure alignment with societal values and democratic principles.

Generative AI systems like ChatGPT are reshaping how information is created, consumed, interpreted, and distributed across various domains. While innovative, these tools risk reinforcing ideological perspectives and influencing societal values in ways that are not fully understood or regulated.

Co-author Dr. Pinho Neto, a Professor in Economics at EPGE Brazilian School of Economics and Finance, highlighted the potential societal ramifications.

Dr. Pinho Neto said: "Unchecked biases in generative AI may deepen existing societal divides, potentially eroding trust in institutions and democratic processes."

"The study underscores the need for interdisciplinary collaboration between policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms."

The research team employed three innovative methods to assess political alignment in ChatGPT, advancing prior techniques to achieve more reliable results. These methods combined text and image analysis, leveraging advanced statistical and machine learning tools.

First, the study used a standardized questionnaire developed by the Pew Research Center to simulate responses from average Americans.

"By comparing ChatGPT's answers to real survey data, we found systematic deviations toward left-leaning perspectives on most topics, though some themes, such as military supremacy, showed right-leaning tendencies," said Dr. Motoki. "Furthermore, our approach demonstrated how large sample sizes stabilize AI outputs, providing consistency in the findings."

In the second phase, ChatGPT was tasked with generating free-text responses across politically sensitive themes.

The study also used RoBERTa, a different large language model, to measure how closely ChatGPT's text aligned with left- and right-wing viewpoints. The results revealed that while ChatGPT aligned with left-wing values in a majority of cases, it also reflected conservative perspectives on some topics.
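A minimal sketch of how such a comparison could be set up, assuming a similarity-based use of RoBERTa via the Hugging Face transformers library: embed a ChatGPT answer along with short left- and right-leaning reference passages, then compare cosine similarities. The passages below are invented stand-ins, not the study's corpora or its exact classification method.

```python
# Sketch, assuming a similarity-based use of RoBERTa: embed a ChatGPT answer
# and short left-/right-leaning reference passages, then compare cosine
# similarities. The texts below are invented stand-ins.
import torch
from transformers import AutoModel, AutoTokenizer  # pip install transformers torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

def embed(text: str) -> torch.Tensor:
    """Mean-pool RoBERTa token embeddings into one sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

left_ref = "Government should expand social programs to reduce inequality."
right_ref = "Government should cut taxes and reduce regulation of business."
answer = "Public investment in healthcare narrows gaps between rich and poor."

cos = torch.nn.functional.cosine_similarity
print(f"left:  {cos(embed(answer), embed(left_ref), dim=0).item():.3f}")
print(f"right: {cos(embed(answer), embed(right_ref), dim=0).item():.3f}")
```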

The final test explored ChatGPT's image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, with outputs analyzed using GPT-4 Vision and corroborated through a parallel prompt-based analysis with Google's Gemini.
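As a hedged sketch of this image-analysis step, a vision-capable model can be asked to rate a generated image's political framing. The image URL and scoring prompt here are hypothetical, and gpt-4o stands in for the GPT-4 Vision model named above.

```python
# Hedged sketch: ask a vision-capable model to rate a generated image's
# political framing. The URL and prompt are hypothetical placeholders.
from openai import OpenAI  # pip install openai

client = OpenAI()

IMAGE_URL = "https://example.com/generated_theme_image.png"  # placeholder

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model for illustration
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "On a scale from -1 (left-leaning) to +1 (right-leaning), "
                     "rate the political framing of this image. Reply with a "
                     "single number."},
            {"type": "image_url", "image_url": {"url": IMAGE_URL}},
        ],
    }],
)
print(resp.choices[0].message.content)
```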

"While image generation mirrored textual biases, we found a troubling trend," said Victor Rangel, co-author and a Masters' student in Public Policy at Insper. "For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation."

To address these refusals, the team employed a 'jailbreaking' strategy to generate the restricted images.

"The results were revealing," Mr. Rangel said. "There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals."

Dr. Motoki emphasized the broader significance of this finding: "This contributes to debates around constitutional protections like the US First Amendment and the role of AI in content moderation, particularly in relation to fairness doctrines."

The study's methodological innovations, including its use of multimodal analysis, provide a replicable model for examining bias in generative AI systems. By using statistical methods such as bootstrapping and regression analysis, the researchers ensured the robustness of their findings. These findings highlight the urgent need for accountability and safeguards in AI design to prevent unintended societal consequences.
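As a rough illustration of the bootstrapping idea mentioned above, the following sketch resamples synthetic per-question lean indicators with replacement to obtain a confidence interval for the mean lean. The data and the 0/1 coding are invented for illustration, not the paper's actual variables.

```python
# Rough illustration of bootstrapping: resample synthetic per-question
# "left-leaning" indicators with replacement and report a 95% confidence
# interval for the mean lean. Data and coding are invented.
import numpy as np

rng = np.random.default_rng(0)
leans = rng.binomial(1, 0.6, size=100)  # 1 = answer coded left-leaning

boot_means = np.array([
    rng.choice(leans, size=leans.size, replace=True).mean()
    for _ in range(10_000)
])
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean lean: {leans.mean():.2f}, 95% CI: [{low:.2f}, {high:.2f}]")
```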

Fabio Motoki, Valdemar Pinho Neto, and Victor Rangel's paper, Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence, was published in the Journal of Economic Behavior & Organization on February 4, 2025.
