AI and Social Media: Impact on Information and Discourse

Artificial intelligence (AI) has become a powerful tool for managing and creating social media (SM) content. SM platforms have integrated AI into their algorithms to optimize the user experience by offering personalized content. Yet, AI algorithms can also exacerbate filter bubbles and propagate misinformation. This article discusses the positive and negative impacts of AI on information and discourse in the context of SM.

Image Credit: khunkornStudio/Shutterstock

An Overview of AI and SM

SM networks currently play a major role in shaping society, serving as critical tools for brand promotion, marketing, and communication. In recent years, AI has become increasingly prevalent in SM, with platforms such as Instagram, Twitter, and Facebook adopting the technology.

The utilization of AI is transforming the SM landscape by changing how content is created, distributed, and consumed. AI algorithms analyze user data and behavior to offer personalized content and advertisements, moderate content, and improve search results. AI thus exerts a significant influence on the information that users encounter on SM platforms.
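
As a minimal illustration of this kind of personalization, the sketch below ranks candidate posts by their cosine similarity to posts a user previously engaged with; the data and features are toy examples, not any platform's real signals or algorithm.

```python
# Minimal sketch of content-based personalization: rank candidate posts
# by cosine similarity to posts the user previously engaged with. The
# data and features are toy examples, not any platform's real signals.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

engaged_posts = [
    "new phone camera review and sample photos",
    "smartphone battery life comparison test",
]
candidate_posts = [
    "best hiking trails for the weekend",
    "flagship phone camera shootout results",
    "local election results and analysis",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(engaged_posts + candidate_posts)

# Average the engaged posts into a single interest-profile vector.
profile = np.asarray(matrix[: len(engaged_posts)].mean(axis=0))
scores = cosine_similarity(profile, matrix[len(engaged_posts):])[0]

# Higher score = more similar to what the user already consumes.
for post, score in sorted(zip(candidate_posts, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {post}")
```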

The impact of AI on SM content has crucial implications for society, individuals, and democratic processes. For example, the formation of filter bubbles and dissemination of misinformation could erode individuals' capacity to engage in public discourse and make well-informed decisions.

Filter bubbles refer to the phenomenon in which users are exposed only to content that validates or reinforces their existing beliefs, resulting in a lack of diversity and a polarization of viewpoints. Similarly, misinformation can spread quickly on SM, and the use of AI to target users with tailored content can exacerbate this issue by accelerating the dissemination of false information.
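
To make the feedback loop concrete, the toy simulation below (illustrative only, not any platform's actual recommender) shows how a system that favors previously clicked topics can drift a feed toward a single viewpoint.

```python
# Toy simulation of a filter-bubble feedback loop: the recommender shows
# topics in proportion to past clicks, and the user clicks agreeable
# content more often, so exposure drifts toward one viewpoint.
import random

random.seed(0)
click_counts = {"viewpoint_a": 1, "viewpoint_b": 1}  # uniform starting point
user_preference = "viewpoint_a"                      # mild initial leaning

for _ in range(500):
    topic = random.choices(list(click_counts),
                           weights=list(click_counts.values()))[0]
    # The user clicks agreeable content more often (0.7 vs. 0.3).
    p_click = 0.7 if topic == user_preference else 0.3
    if random.random() < p_click:
        click_counts[topic] += 1

share = click_counts["viewpoint_a"] / sum(click_counts.values())
print(f"Share of the feed matching the user's viewpoint: {share:.2f}")
```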

Additionally, concerns have been raised about the impact of AI on creativity and originality, and about the effect of errors and biases within the algorithms on information and discourse. Algorithmic bias presents one of the biggest risks, as AI algorithms can inadvertently amplify and perpetuate biases within the generated content.

For instance, AI algorithms can recommend content on SM platforms that excludes specific demographic groups or reinforces stereotypes. This risk underlines the importance of ethical considerations during the development and deployment of AI in SM.

AI-powered social bots can act, think, and sense on SM platforms in ways that resemble human behavior. However, these bots can also perform harmful actions, such as spreading false information and perpetrating scams, which poses a significant challenge.

AI-powered Social Bots

Digital technologies, particularly AI, are increasingly becoming key to achieving a competitive edge in business. However, firms also face diverse challenges with these technologies, including malicious social bots, which use SM platforms to create and spread content.

The proliferation of AI-powered social bots has thus led to both beneficial and harmful outcomes. For instance, in emergencies such as the coronavirus disease 2019 (COVID-19) crisis, social bots can provide crucial information for protecting society. Automated bots are also useful for merging data from many sources for further analysis.

However, malicious bots are among the primary sources of disinformation on SM platforms such as Twitter. Hackers and rogue agents can use them to generate panic and anxiety during emergencies like COVID-19, spread fake news and rumors, and sway political opinions. Studies have shown that social bots play a major role in the rapid dissemination of fake news on SM; a small number of accounts, many of them malicious social bots, are responsible for a substantial share of the traffic carrying misinformation.
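
As a hedged illustration of how such accounts might be flagged, the sketch below trains a classifier on simple behavioral features (posting rate, follower/following ratio, account age). The features, values, and labels are hypothetical toy data, not a production bot detector.

```python
# Illustrative sketch: flag likely bot accounts from simple behavioral
# features. The features and toy data below are hypothetical assumptions,
# not a real bot-detection system.
from sklearn.ensemble import RandomForestClassifier

# Each row: [posts_per_day, followers/following ratio, account_age_days]
X = [[300, 0.01, 10],    # hyperactive new account, few followers -> bot
     [2, 1.50, 900],     # occasional poster, balanced network -> human
     [150, 0.05, 30],    # high-volume young account -> bot
     [5, 0.90, 1200]]    # long-lived moderate account -> human
y = [1, 0, 1, 0]         # 1 = bot, 0 = human (toy labels)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[200, 0.02, 15]]))  # resembles the bot rows -> [1]
```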

AI-based Fake News Detection

Fake news detection is typically framed as a classification problem, and AI techniques are widely used for this purpose. Machine learning (ML) techniques, including support vector machines (SVM), naïve Bayes (NB), and logistic regression (LR); deep learning (DL) methods, including long short-term memory (LSTM) networks, recurrent neural networks (RNN), and convolutional neural networks (CNN); and natural language processing (NLP) techniques, including term frequency-inverse document frequency (TF-IDF) and count vectorizers, have all been adopted for fake news detection.
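
A minimal sketch of this classical setup, assuming scikit-learn and toy labels, pairs a TF-IDF vectorizer with logistic regression; NB or SVM classifiers drop into the same pipeline.

```python
# Minimal sketch: TF-IDF features feeding logistic regression. Swapping in
# MultinomialNB or LinearSVC gives the NB/SVM variants named above.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "peer reviewed study on vaccine safety published",
    "miracle cure the government is hiding from you",
    "central bank announces interest rate decision",
    "shocking secret celebrity video doctors hate",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (toy labels)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)
print(model.predict(["you will not believe this secret cure"]))  # expect [1]
```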

Most detection approaches combine several AI techniques rather than relying on a single solution. For instance, an ensemble of ML approaches, including a decision tree, random forest, and extra-trees classifier, was developed for effective feature extraction to classify fake news.
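
The following sketch shows one way such an ensemble could be wired up with scikit-learn's VotingClassifier; the estimators match those named above, but the hyperparameters and toy data are illustrative assumptions.

```python
# Hedged sketch of the described ensemble: decision tree, random forest,
# and extra-trees combined by majority vote over TF-IDF features.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              VotingClassifier)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",  # simple majority vote across the three estimators
)

model = Pipeline([("tfidf", TfidfVectorizer()), ("vote", ensemble)])

texts = ["miracle cure exposed", "rate decision announced",
         "secret video doctors hate", "annual budget report released"]
labels = [1, 0, 1, 0]  # toy labels
model.fit(texts, labels)
print(model.predict(["shocking miracle video"]))
```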

Similarly, a hybrid approach combining k-nearest neighbors (KNN) with artificial bee colony (ABC) optimization was introduced to identify and segregate buzz on Twitter and to analyze user-generated content for useful information. A multimodal approach combining visual and textual analysis of online news stories was also developed to detect fake news automatically, using predictive analysis to identify features strongly associated with fake news.
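
A simplified sketch of the KNN side of that hybrid approach appears below; the ABC optimization step (which tunes the feature subset) is omitted, and k is simply fixed by hand on toy data.

```python
# Simplified sketch of the KNN component only; the artificial bee colony
# step that optimizes the feature subset is omitted, and k is fixed by hand.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

tweets = ["breaking huge scandal share this now",
          "city council meets on budget",
          "unbelievable trick banks do not want known",
          "university posts annual report"]
buzz = [1, 0, 1, 0]  # 1 = buzz/viral-style content (toy labels)

knn = Pipeline([("tfidf", TfidfVectorizer()),
                ("knn", KNeighborsClassifier(n_neighbors=3))])
knn.fit(tweets, buzz)
print(knn.predict(["share this unbelievable scandal now"]))
```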

This approach utilized LR, KNN, NB, SVM, random forest analysis, linear discriminant analysis, quadratic discriminant analysis, and classification and regression trees. In addition, an explainable multimodal content-based fake news detection system has been developed using latent Dirichlet allocation (LDA) topic modeling and local interpretable model-agnostic explanations (LIME).
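
To illustrate the explainability component, the hedged sketch below uses the third-party lime package to show which words push a simple TF-IDF plus logistic-regression classifier toward the "fake" label; the model and data are toy stand-ins, not the published system.

```python
# Sketch of the explainability piece using the third-party `lime` package:
# LIME reports which words pushed a TF-IDF + logistic-regression classifier
# toward the "fake" label. Model and data are toy stand-ins.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["peer reviewed vaccine study published",
         "miracle cure they are hiding from you",
         "interest rate decision announced today",
         "shocking secret video doctors hate"]
labels = [0, 1, 0, 1]  # toy labels

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipe.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["real", "fake"])
explanation = explainer.explain_instance(
    "shocking secret miracle cure video",
    pipe.predict_proba,
    num_features=4,
)
print(explanation.as_list())  # (word, weight) pairs for the predicted class
```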

DL-based approaches have also been developed for automatic and early fake news detection, for example by classifying a story's propagation path to mine local and global changes in user characteristics along the diffusion path. A deep diffusive network model was built on a set of explicit features derived from textual information to simultaneously infer the credibility of news articles, subjects, and creators.

An automated approach based on both DL and ML techniques, including CNN, LSTM, and LR, has been developed to distinguish different types of fake news, such as propaganda, hoaxes, and irony, while classifying and assessing news articles and claims based on linguistic cues, user credibility, and news dissemination patterns in SM.
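
As a rough, toy-scale sketch of the DL side, the following Keras model classifies short texts with an LSTM; the layer sizes, vocabulary handling, and data are illustrative assumptions rather than any published architecture.

```python
# Toy-scale sketch of an LSTM text classifier; layer sizes and data are
# illustrative assumptions, not a published architecture.
import numpy as np
import tensorflow as tf

texts = np.array(["miracle cure exposed today", "rate decision announced",
                  "secret video doctors hate", "annual report released"])
labels = np.array([1, 0, 1, 0])  # 1 = fake, 0 = real (toy labels)

# Map raw strings to fixed-length integer sequences.
vectorize = tf.keras.layers.TextVectorization(output_sequence_length=8)
vectorize.adapt(texts)
X = vectorize(texts)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vectorize.vocabulary_size(), 16),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)

print(model.predict(vectorize(np.array(["secret miracle video"]))))
```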

A method known as FNDNet has been introduced, combining a deep CNN with GloVe, an unsupervised word-embedding algorithm, for fake news detection. Furthermore, techniques combining DL/ML with NLP have also been devised to identify fake news.
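
In the spirit of FNDNet, the sketch below swaps the LSTM above for a 1-D convolutional classifier. FNDNet initializes its embedding layer with pretrained GloVe vectors; this toy version uses a randomly initialized embedding for brevity, and the rest of the architecture is an assumption.

```python
# Sketch in the spirit of FNDNet: a 1-D convolutional text classifier.
# FNDNet initializes its embedding layer with pretrained GloVe vectors;
# this toy version uses a randomly initialized embedding instead.
import numpy as np
import tensorflow as tf

texts = np.array(["miracle cure exposed today", "rate decision announced",
                  "secret video doctors hate", "annual report released"])
labels = np.array([1, 0, 1, 0])  # toy labels

vectorize = tf.keras.layers.TextVectorization(output_sequence_length=16)
vectorize.adapt(texts)
X = vectorize(texts)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vectorize.vocabulary_size(), 32),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)
print(model.predict(vectorize(np.array(["secret miracle video"]))))
```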

For instance, computational linguistics analysis, sociocultural textual analysis, and text classification have been performed using NLP, with DL models distinguishing fake from real news to address the disinformation problem. Similarly, sentiment and frequency analyses were performed using both NLP and ML to compare the basic text characteristics of fake and real news articles.
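
A minimal sketch of such a sentiment-and-frequency comparison, assuming NLTK's VADER analyzer and toy headlines, is shown below.

```python
# Sketch of a sentiment-and-frequency comparison, assuming NLTK's VADER
# analyzer (run nltk.download("vader_lexicon") once) and toy headlines.
from collections import Counter
from nltk.sentiment import SentimentIntensityAnalyzer

samples = {
    "fake": ["SHOCKING miracle cure doctors hate",
             "you will not believe this scandal"],
    "real": ["central bank holds interest rates steady",
             "city council approves annual budget"],
}

sia = SentimentIntensityAnalyzer()
for label, docs in samples.items():
    # Mean absolute compound score: fake news often reads more emotive.
    intensity = sum(abs(sia.polarity_scores(d)["compound"])
                    for d in docs) / len(docs)
    words = Counter(w.lower() for d in docs for w in d.split())
    print(label, f"mean |sentiment| = {intensity:.2f}", words.most_common(3))
```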

In another NLP- and ML-based approach to identifying fake news, different text features were first extracted through text processing and then fed into a classifier.

AI for Content Moderation

AI is almost indispensable for content moderation on SM, given the scale of online content that must be moderated and the technology's ability to detect coordinated inauthentic behavior. However, over-reliance on automated tools has clear drawbacks, as AI-driven tools are poor at evaluating context.

Specifically, the context of usage that confers meaning, including the social or political situation, cultural particularities, and physical signals, is lost during AI-based content moderation. Algorithmic identification is often inaccurate because it cannot account for the contextual cues necessary to distinguish extremist speech from documentary footage, parody, or legitimate protest. As a result, AI-based moderation can remove content that is intended to lampoon or challenge hate speech.

Content moderation decisions made by humans are often highly subjective, and AI is unlikely to resolve this issue. Additionally, AI can reproduce bias against historically disadvantaged populations, as it often relies on large datasets that may contain information generated through biased methods.

To overcome these challenges, SM companies must design an iterative method that integrates human and AI content moderation, with both kept fully updated and informed by a sufficiently large team of trained employees who understand the relevant cultural, social, and political context and history. A hedged sketch of such a hybrid workflow follows.
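
In the sketch below, the model auto-actions only high-confidence cases and routes uncertain ones to trained human reviewers; the thresholds and the stand-in classifier are hypothetical.

```python
# Hedged sketch of a hybrid moderation workflow: auto-action only
# high-confidence cases and route uncertain ones to human reviewers.
# The thresholds and the stand-in classifier below are hypothetical.
def route_post(post_text, violation_probability,
               auto_threshold=0.95, review_threshold=0.60):
    """Return a moderation decision for one post."""
    p = violation_probability(post_text)
    if p >= auto_threshold:
        return "auto-remove"    # model is confident enough to act alone
    if p >= review_threshold:
        return "human review"   # context judged by trained staff
    return "keep"

# Toy stand-in for a real policy-violation classifier.
toy_model = lambda text: (0.97 if "slur" in text
                          else 0.62 if "fight" in text else 0.10)

print(route_post("post containing a slur", toy_model))        # auto-remove
print(route_post("satirical post about a fight", toy_model))  # human review
print(route_post("weekend hiking photos", toy_model))         # keep
```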

Overall, while AI algorithms improve the user experience by providing personalized content, several concerns remain with the deployment of AI by SM companies, including the spread of misinformation such as fake news, the creation of filter bubbles, and malicious bots. ML, DL, and NLP techniques can be used to identify fake news on SM, and other negative effects can be mitigated by promoting media literacy, human moderation, and transparency to ensure that SM platforms remain informative, diverse, and accurate.

References and Further Reading

Mohamed, E. A. S., Osman, M. E., Mohamed, B. A. (2024). The Impact of Artificial Intelligence on Social Media Content. Journal of Social Sciences, 20(1), 12-16. https://doi.org/10.3844/jssp.2024.12.16

Hajli, N., Saeed, U., Tajvidi, M., Shirazi, F. (2022). Social bots and the spread of disinformation in social media: the challenges of artificial intelligence. British Journal of Management, 33(3), 1238-1253. https://doi.org/10.1111/1467-8551.12554

Aïmeur, E., Amri, S., Brassard, G. (2023). Fake news, disinformation and misinformation in social media: a review. Social Network Analysis and Mining, 13(1), 30. https://doi.org/10.1007/s13278-023-01028-5

Wilson, R. A., Land, M. K. (2020). Hate speech on social media: Content moderation in context. Connecticut Law Review, 52, 1029. https://heinonline.org/HOL/LandingPage?handle=hein.journals/conlr52&div=28&id=&page

Last Updated: May 28, 2024

Written by Samudrapom Dam

Samudrapom Dam is a freelance scientific and business writer based in Kolkata, India. He has been writing articles related to business and scientific topics for more than one and a half years. He has extensive experience in writing about advanced technologies, information technology, machinery, metals and metal products, clean technologies, finance and banking, automotive, household products, and the aerospace industry. He is passionate about the latest developments in advanced technologies, the ways these developments can be implemented in a real-world situation, and how these developments can positively impact common people.

