ChatGPT Sparks Optimism and Concern Among Journalists, Reveals Large-Scale Twitter Study

As ChatGPT transforms newsrooms, journalists express optimism about its potential to ease workloads, but underlying concerns about AI’s long-term impact on their profession persist.

Image Credit: PabloLagarto / Shutterstock

In an article recently posted on the arXiv preprint* server, researchers investigated how journalists emotionally responded to the release of the Chat Generative Pre-trained Transformer (ChatGPT), a generative artificial intelligence (AI) tool. They analyzed approximately one million tweets (X posts) from journalists at major U.S. news outlets to track changes in sentiment and emotional tone before and after ChatGPT's launch.

*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.

Background

Generative AI, including large language models (LLMs) like ChatGPT, Claude, Gemini, Llama, Mixtral, DALL·E, Sora, and Imagen, represents a significant technological advancement. These models can quickly generate new content, such as text, images, audio, video, or code, on a large scale.

ChatGPT, developed by OpenAI, became highly popular after its public release in November 2022. Within a week, it reached one million users and had about 100 million active users by January 2023, making it one of the fastest-growing consumer applications ever. This rapid adoption highlights the transformative potential of generative AI across various industries, including journalism, where it can automate routine tasks, boost creativity, and enhance efficiency.

About the Research

In this paper, the authors aimed to understand how journalists, as key interpreters of technological innovations, emotionally responded to the rise of generative AI. They collected tweets from 4,071 journalists at 18 major U.S. news outlets, covering two months before and after ChatGPT’s launch, resulting in a dataset of 959,380 tweets.

Using the Linguistic Inquiry and Word Count (LIWC) tool, the study analyzed the emotional content of these tweets. LIWC is a well-established text analysis program that categorizes words into social, psychological, and grammatical categories, allowing for a detailed examination of emotional expression.
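LIWC itself is proprietary software, but the underlying idea, counting how many of a text's words fall into curated emotion categories, is straightforward. The Python sketch below illustrates that dictionary-based approach with a tiny, invented word list; it is not the LIWC lexicon or the study's actual pipeline.

```python
# Illustrative, dictionary-based emotion scoring in the spirit of LIWC.
# The word lists here are invented stand-ins, not the proprietary LIWC lexicon.
import re

EMOTION_LEXICON = {
    "positive": {"great", "hope", "excited", "useful", "impressive"},
    "negative": {"worry", "fear", "threat", "risk", "skeptical"},
}

def emotion_rates(text: str) -> dict:
    """Return each category's share of total words, as a percentage."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {category: 0.0 for category in EMOTION_LEXICON}
    return {
        category: 100 * sum(word in vocabulary for word in words) / len(words)
        for category, vocabulary in EMOTION_LEXICON.items()
    }

print(emotion_rates("Excited about ChatGPT, but I worry about the risk to jobs"))
```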

The researchers focused on specific positive and negative emotions and overall sentiment or tone. They hypothesized that ChatGPT's introduction would generate significant emotional responses from journalists due to its potential impact on their profession. The study aimed to determine whether journalists’ emotions were more optimistic, concerned, or skeptical about the new technology.
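In practice, testing such a hypothesis amounts to comparing emotion scores in tweets posted before and after the launch date. The sketch below shows one way that comparison could be set up, using a toy DataFrame and Welch's t-test; the column names and values are illustrative assumptions, not the study's data.

```python
# A minimal sketch of a pre/post comparison around ChatGPT's release date.
# The DataFrame and column names are illustrative, not the study's data.
import pandas as pd
from scipy import stats

LAUNCH_DATE = pd.Timestamp("2022-11-30")  # ChatGPT's public release

tweets = pd.DataFrame({
    "created_at": pd.to_datetime(["2022-10-15", "2022-11-20", "2022-12-05", "2023-01-10"]),
    "positive_emotion": [1.2, 0.8, 2.1, 1.9],  # e.g., LIWC-style percentage scores
})

pre = tweets.loc[tweets["created_at"] < LAUNCH_DATE, "positive_emotion"]
post = tweets.loc[tweets["created_at"] >= LAUNCH_DATE, "positive_emotion"]

# Welch's t-test for a difference in mean positive emotion before vs. after launch
t_stat, p_value = stats.ttest_ind(post, pre, equal_var=False)
print(f"pre mean={pre.mean():.2f}, post mean={post.mean():.2f}, t={t_stat:.2f}, p={p_value:.3f}")
```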

Key Outcomes

The analysis revealed several key findings. First, journalists showed a notable increase in positive emotions following the launch of ChatGPT. This suggests that many viewed the technology as beneficial because it can automate repetitive tasks and allow journalists to focus on more creative aspects of their work.

However, the researchers also noted that these positive emotions may be driven by initial excitement, as they reflect the broader utopian narrative that often accompanies new technologies.

Additionally, the overall tone of the tweets became more positive after the launch, reflecting a general sense of optimism and acceptance toward ChatGPT. Yet, this positivity was not universal.

In contrast, negative emotions remained stable before and after the launch, indicating that concerns about ChatGPT were present but not dominant. Some journalists voiced worries about potential long-term impacts, including job displacement and the ethical implications of AI, highlighting the presence of dystopian views.

The negative tone of tweets also decreased after the launch, suggesting that worries about the technology's risks were less prominent in journalists' discussions and further underscoring ChatGPT's positive reception. Still, the researchers acknowledged that negative emotions, such as fear of automation and skepticism about AI's potential to disrupt journalism, were not entirely absent and could resurface as the technology evolves.

To ensure the validity of these findings, the authors accounted for factors such as seasonal mood changes and public interest in ChatGPT, using Google Trends data and LIWC’s religion category as proxies. Even after adjusting for these factors, the positive trends in emotion and tone remained significant. Nevertheless, the researchers urged caution in interpreting these results, as longer-term impacts on journalists' emotions and work processes may differ as AI technologies become more integrated into newsrooms.
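As a rough illustration of that kind of adjustment, the sketch below fits a simple regression in which the post-launch shift in tone is estimated while controlling for a public-interest proxy. The simulated data and variable names are assumptions for illustration; the paper's actual model may differ.

```python
# A hedged sketch of estimating the post-launch shift in tone while adjusting
# for a confounder such as public interest (e.g., a Google Trends series).
# The simulated data and variable names are illustrative, not the paper's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "post_launch": np.repeat([0, 1], n // 2),        # 1 = tweet posted after the release
    "trends_interest": rng.uniform(0, 100, size=n),  # stand-in for a Google Trends proxy
})
df["tone"] = 1.0 + 0.4 * df["post_launch"] + 0.01 * df["trends_interest"] + rng.normal(0, 1, size=n)

# The post_launch coefficient is the shift in tone after adjusting for public interest
model = smf.ols("tone ~ post_launch + trends_interest", data=df).fit()
print(model.params)
```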

Applications

This research has several important implications. The positive reaction to ChatGPT suggests that journalists are open to using generative AI in their work, which could make news production more efficient. With AI handling routine tasks, journalists can focus on investigative and creative reporting.

Additionally, journalists shape public narratives, so their favorable response to ChatGPT could encourage wider acceptance of AI in various sectors. However, the presence of underlying concerns and skepticism indicates that not all journalists view this technology purely in a positive light.

Understanding journalists’ reactions to AI also helps predict how society might respond to new technologies. Positive emotions among journalists could indicate a wider acceptance and use of AI in different fields. Conversely, the stable presence of negative emotions suggests that journalists may also play a role in raising awareness about the risks and ethical challenges AI presents.

Furthermore, this research sets the groundwork for future studies on the long-term effects of generative AI on journalism and other industries, highlighting the need to track how these technologies evolve and impact professional practices and public discussions. The researchers emphasize that while optimism is prevalent now, the real effects on journalism and the broader media industry may take time to fully emerge.

Conclusion

In summary, the paper provided valuable insights into journalists' emotional reactions to ChatGPT's introduction. The findings showed an initial wave of optimism and positive sentiment, indicating that journalists saw potential benefits in generative AI, such as automating routine tasks and enhancing creative work.

At the same time, the authors acknowledged that negative emotions related to the risks of generative AI were present and might grow as the technology becomes more embedded in the journalism industry. The overall reaction also highlights the important role journalists play in shaping public views on new technologies.

The authors further emphasized the need for ongoing research to understand the long-term impacts of generative AI. As these technologies develop, monitoring their effects on journalism and other fields will be crucial to maximizing benefits and addressing any emerging challenges.

Ultimately, the researchers argue that while the initial response has been optimistic, the narrative surrounding AI is far from settled. Both utopian and dystopian outcomes remain possible, and journalists will continue to play a critical role in shaping these discussions.


Journal reference:
  • Preliminary scientific report. Lewis, S. C., et al. Journalists, Emotions, and the Introduction of Generative AI Chatbots: A Large-Scale Analysis of Tweets Before and After the Launch of ChatGPT. arXiv, 2024. DOI: 10.48550/arXiv.2409.08761, https://arxiv.org/abs/2409.08761

Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.

