Tackling Digital Ageism in AI: A Call to Action

In a review published in Humanities and Social Sciences Communications, researchers analyzed the scale and implications of age-related bias in artificial intelligence (AI) systems. The aim was to understand how AI technologies may encode, produce, amplify, or reinforce ageism across society, a phenomenon known as digital ageism.

Study: Tackling Digital Ageism in AI: A Call to Action. Image credit: Vitalii Vodolazskyi/Shutterstock

Ageism and Digital Ageism

Ageism refers to the prejudicial attitudes, discriminatory practices, and institutional policies that foster negative stereotypes and perceptions of older adults. Digital ageism is the manifestation of ageism in the design, development, deployment, and use of AI systems. With populations aging rapidly worldwide, ensuring that AI does not further discriminate against or marginalize older people is an ethical imperative. However, despite the importance of the issue, research explicitly examining the intersection of age bias and AI remains limited.

Scope, Methodology, and Databases

To address this knowledge gap, the researchers performed a scoping review, systematically searching academic databases and grey literature across disciplines. In total, 74 studies were included: 49 academic and 25 grey literature sources spanning computer science, social science, law, and ethics. A framework outlining 15 potential biases across the machine-learning pipeline guided the analysis.

Key Findings

The findings showed that age bias frequently arises during AI data curation. Many biases stem from the underrepresentation and misrepresentation of older adults in the datasets used to train AI models. For instance, facial analysis benchmarks such as the MORPH and FG-Net databases significantly underrepresent older age groups.
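
To make this kind of underrepresentation visible, a dataset's age distribution can be audited before training. The minimal sketch below tallies the share of each age group in a hypothetical metadata table; the file name metadata.csv and the age column are illustrative assumptions, not part of the MORPH or FG-Net releases.

```python
# Minimal sketch: audit the age distribution of a hypothetical face-image
# dataset to check whether older age groups are underrepresented.
import pandas as pd

# Assumed metadata file with one row per image and an integer "age" column.
df = pd.read_csv("metadata.csv")

# Bucket ages into groups, keeping "60+" separate so the thin tail of older
# adults is easy to spot.
bins = [0, 20, 30, 40, 50, 60, 120]
labels = ["<20", "20-29", "30-39", "40-49", "50-59", "60+"]
df["age_group"] = pd.cut(df["age"], bins=bins, labels=labels, right=False)

# Share of the dataset contributed by each age group.
shares = df["age_group"].value_counts(normalize=True).sort_index()
print(shares.round(3))
```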

Additionally, some datasets aggregate older adults into broad categories such as “60+”, while younger groups receive narrower, more specific labels. This treats older adults as a homogeneous bloc rather than the diverse group they are. Without adequate representation of older adults’ heterogeneous data, AI systems fail to model them accurately and effectively, producing substantially higher error rates than for younger groups and potentially leading to real-world harm.
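
One straightforward way to surface the resulting performance gap is to compare error rates by age group. The sketch below does this on invented evaluation data; the labels, predictions, and ages are hypothetical, and the 60-year cutoff simply mirrors the “60+” grouping mentioned above.

```python
# Minimal sketch: compare error rates across age groups to reveal whether a
# model performs worse for older adults.
import numpy as np

# Hypothetical evaluation data: true labels, model predictions, subject ages.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
ages   = np.array([25, 31, 45, 67, 72, 68, 29, 75])

# Error rate within each age group.
for name, mask in {"under 60": ages < 60, "60 and over": ages >= 60}.items():
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    print(f"{name}: error rate = {error_rate:.2f}")
```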

Risks in AI System Deployment

The review also highlighted biases that emerge when AI systems are deployed in the real world. Language analysis models exhibited negative connotations and prejudicial associations for words such as “elderly” and “aging,” compared with more positive associations for words such as “young” and “youthfulness.” Because AI systems learn from data produced by ageist societies, they risk amplifying and accelerating systemic ageism through technology.
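
Such associations are often probed by comparing the similarity of age-related words to pleasant versus unpleasant attribute words in a pretrained embedding space, in the spirit of word-embedding association tests. The sketch below is one illustrative way to do this, not the authors' method; it assumes a word2vec-format embedding file (embeddings.bin) is available locally, and the attribute word lists are arbitrary examples.

```python
# Minimal sketch: probe whether age-related words sit closer to unpleasant
# than to pleasant attribute words in a pretrained word-embedding space.
import numpy as np
from gensim.models import KeyedVectors

# Assumed: a word2vec-format embedding file at a hypothetical local path.
wv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

pleasant = ["wonderful", "love", "joy"]       # example positive attribute words
unpleasant = ["terrible", "hate", "failure"]  # example negative attribute words

def association(word):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = np.mean([wv.similarity(word, a) for a in pleasant])
    neg = np.mean([wv.similarity(word, a) for a in unpleasant])
    return pos - neg

# A negative score suggests the word leans toward unpleasant associations.
for word in ["elderly", "aging", "young", "youthful"]:
    print(word, round(association(word), 3))
```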

Mitigating Representation Bias

Some studies reviewed by the authors attempted to address representation bias by balancing the age distributions of training datasets. These preliminary efforts suggest that data balancing alone may be insufficient: even with balanced data, algorithmic biases can persist, underscoring the multifaceted nature of digital ageism. The researchers therefore concluded that coordinated interdisciplinary efforts spanning academia, industry, government, and civil society are needed to develop effective, scalable solutions.
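
As a rough illustration of that kind of mitigation, the sketch below upsamples underrepresented age groups until all groups have equal counts; the DataFrame and its age_group column are hypothetical, and, as the review notes, balancing the data does not by itself remove downstream algorithmic bias.

```python
# Minimal sketch: balance a training set by age group via random upsampling.
# Note: balancing the data does not by itself remove algorithmic bias.
import pandas as pd

def balance_by_age_group(df, group_col="age_group", seed=0):
    """Upsample every age group to the size of the largest group."""
    target = df[group_col].value_counts().max()
    parts = [
        group.sample(n=target, replace=True, random_state=seed)
        for _, group in df.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)

# Hypothetical training table in which older adults are underrepresented.
train = pd.DataFrame({
    "feature": range(10),
    "age_group": ["<60"] * 8 + ["60+"] * 2,
})
balanced = balance_by_age_group(train)
print(balanced["age_group"].value_counts())
```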

Absence of Older Adult Perspectives

Notably absent from the literature were older adults’ perspectives, values, and preferences. Inclusive, participatory research is essential to understand the real-world impacts of AI ageism across diverse groups of older people. Moreover, solutions must confront bias throughout the machine-learning pipeline while respecting and meaningfully incorporating older people’s voices.

The Way Forward

As AI becomes further entrenched across society, preventing the marginalization of growing older populations is a pressing ethical obligation. The review underscores the risks of digital ageism and motivates action across technology ethics, policy, and interdisciplinary research. Harnessing AI’s potential to benefit society will require concerted efforts grounded in principles of justice, diversity, and inclusion.

Eliminating age bias will require sustained collaboration among academics, developers, civil society groups, government bodies, and, most importantly, older adults themselves to embed inclusiveness, fairness, accountability, and transparency throughout AI design, development, deployment, and governance. The review makes clear that this undertaking is essential, both to uphold ethics and human rights and to enable AI systems to serve populations equally across the lifespan. A sustained, large-scale effort will be needed to eliminate digital ageism and develop AI that benefits people of all ages.

Written by

Aryaman Pattnayak

Aryaman Pattnayak is a Tech writer based in Bhubaneswar, India. His academic background is in Computer Science and Engineering. Aryaman is passionate about leveraging technology for innovation and has a keen interest in Artificial Intelligence, Machine Learning, and Data Science.


