AI Rewrites Social Media Posts but Subtly Changes Their Emotional Tone

New research shows that AI-driven text rephrasing doesn’t just change wording—it mutes emotion, shifting online conversations in ways that could reshape how we analyze public sentiment.

Research: Echoes of authenticity: Reclaiming human sentiment in the large language model era. Image Credit: Shutterstock AI

Ask a large language model (LLM) such as ChatGPT to summarize what people are saying about a topic, and it may capture the facts efficiently while systematically altering the sentiment of the original text. LLMs play an increasingly prominent role in research, but rather than offering a transparent window into the world, they can present and summarize content with a different tone and emphasis than the original data, potentially skewing research results.

Yi Ding and colleagues at the University of Warwick compared 18,896,054 tweets mentioning "climate change," posted between January 2019 and December 2021, with LLM-rephrased versions of the same tweets. Their analysis applied statistical tests, including sentiment scoring with the VADER tool, to measure sentiment shifts. The authors found that the LLM-rephrased tweets tended to display a more neutral sentiment than the original texts, with both positive and negative sentiment scores significantly reduced. Notably, negative sentiment was dampened more than positive sentiment, creating a slight shift toward positivity in the overall sentiment distribution. This blunting effect occurred irrespective of the prompts employed or the sophistication of the LLMs; even when explicitly instructed to preserve sentiment, the models still reduced emotional intensity.
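To make the measurement concrete, the sketch below scores paired texts with the vaderSentiment package, the same tool named in the study. The tweets and their rephrasings here are invented for illustration, not drawn from the actual dataset.

```python
# Minimal sketch of the VADER-based comparison described above.
# Requires: pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

pairs = [
    # (original tweet, hypothetical LLM rephrasing) -- invented examples
    ("Climate change is destroying everything we love. This is a catastrophe!",
     "Climate change is having significant negative effects on the environment."),
    ("So inspiring to see young activists fighting for our planet!!",
     "Young activists are engaged in efforts to address climate issues."),
]

for original, rephrased in pairs:
    s_orig = analyzer.polarity_scores(original)
    s_llm = analyzer.polarity_scores(rephrased)
    # 'compound' is VADER's normalized overall score in [-1, 1];
    # 'pos'/'neg' are the proportions of positive/negative tokens.
    print(f"original : compound={s_orig['compound']:+.3f} "
          f"pos={s_orig['pos']:.2f} neg={s_orig['neg']:.2f}")
    print(f"rephrased: compound={s_llm['compound']:+.3f} "
          f"pos={s_llm['pos']:.2f} neg={s_llm['neg']:.2f}")
```

Run on pairs like these, the rephrased texts typically score closer to zero on the compound scale, which is the blunting pattern the study quantified at scale.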

A similar effect occurred when LLMs were asked to rephrase Amazon reviews. Using a separate dataset of 10,000 customer reviews, the researchers found that LLM-modified reviews exhibited a measurable reduction in sentiment extremes, reinforcing the generalizability of the effect beyond social media posts.

Possible mitigation strategies include using predictive models to retroactively adjust sentiment levels. The study tested three regression-based predictive models—Linear Regression, Neural Network Regression, and Random Forest Regression—to estimate original sentiment from LLM-altered text. The results showed that these models could successfully recover much of the original human sentiment, particularly when applied to text known to have been modified by an LLM.
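The sketch below illustrates this recovery step with the three model families the study names, using scikit-learn. Because the paper's exact feature set is not described here, the inputs are a synthetic, deliberately "blunted" stand-in for sentiment scores of rephrased text, so the example runs end to end.

```python
# Sketch of the recovery approach: regress original human sentiment on
# features of the LLM-rephrased text. Real inputs would be scores of
# rephrased documents; synthetic data stands in here (an assumption).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000

# y: original compound sentiment scores in [-1, 1].
y = rng.uniform(-1.0, 1.0, size=n)

# X: features of the rephrased text, simulated as damped, noisy versions
# of the original signal (mimicking the blunting effect the study found).
X = np.column_stack([
    0.5 * y + rng.normal(0.0, 0.1, n),                     # damped compound
    0.6 * np.clip(y, 0, None) + rng.normal(0.0, 0.05, n),  # damped positive share
    0.5 * np.clip(-y, 0, None) + rng.normal(0.0, 0.05, n), # damped negative share
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Neural Network Regression": MLPRegressor(hidden_layer_sizes=(64, 32),
                                              max_iter=2000, random_state=0),
    "Random Forest Regression": RandomForestRegressor(n_estimators=300,
                                                      random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: held-out R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```

The key assumption, consistent with the study's finding, is that blunting is systematic rather than random: if LLMs dampen sentiment in a predictable way, a regression model can learn the mapping and invert it.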

According to the authors, if it is not known whether a text was written by a human or an LLM, it would be more useful to work with an LLM that has been fine-tuned not to blunt the emotional content it is summarizing. The study tested this approach using both OpenAI’s fine-tuning framework and Meta’s LLAMA2 model. The fine-tuned OpenAI model outperformed LLAMA2, producing text that more closely matched original human sentiment and reducing unintended sentiment shifts.
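One plausible shape for such fine-tuning data, following OpenAI's chat fine-tuning format, is sketched below. The system prompt, example pair, and model name are illustrative assumptions, not the paper's published setup.

```python
# Sketch: build a JSONL file teaching a model to rephrase text without
# dampening its sentiment. The training pair is invented for illustration.
import json

examples = [
    {
        "original": "This is an absolute disaster for coastal communities!",
        # Target output: reworded but equally intense (assumed training signal).
        "target": "This is a devastating blow for coastal communities!",
    },
]

with open("sentiment_preserving.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system",
                 "content": "Rephrase the user's text, preserving its emotional intensity."},
                {"role": "user", "content": ex["original"]},
                {"role": "assistant", "content": ex["target"]},
            ]
        }
        f.write(json.dumps(record) + "\n")

# Uploading and launching the job with the OpenAI Python SDK (v1.x) would
# look roughly like this; model availability and names change over time:
#   from openai import OpenAI
#   client = OpenAI()
#   file = client.files.create(file=open("sentiment_preserving.jsonl", "rb"),
#                              purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-3.5-turbo")
```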

The findings raise concerns about the reliability of sentiment-based research in the post-LLM era. The authors highlight that altered sentiment can affect not just academic studies but also business decisions, public opinion analysis, and policy-making, all of which rely on sentiment data. As LLMs become more integrated into these workflows, researchers and analysts must account for the systematic changes they introduce.

Source:
  • PNAS Nexus

Journal reference:
  • Ding, Y., et al. Echoes of authenticity: Reclaiming human sentiment in the large language model era. PNAS Nexus.