AI Must Adapt to Cultures to Deliver Fair Public Services

Groundbreaking book compares AI use in social assessments across nine countries, uncovering why standardized systems fall short—and how context-aware, inclusive AI can better serve diverse societies.

Book: Participatory Artificial Intelligence in Public Social Services. Image Credit: KoSSSmoSSS / Shutterstock

Artificial intelligence (AI) is increasingly used worldwide to deliver public social services: assisting entitlement decisions on state-paid pensions and unemployment benefits, assessing asylum applications, and assigning kindergarten places.

AI technology is intended to help apply fairness criteria in allocating this kind of support to individuals and to assess potential beneficiaries accordingly. However, fairness criteria vary from country to country. In India, for example, the caste system influences the distribution of social benefits.

In China, access to social services is determined by a "good citizenship" score. Even within Europe, concepts of fairness regarding access to limited public resources vary considerably. These are major findings from the participatory research undertaken by members of the AI FORA (Artificial Intelligence for Assessment) project, which was recently published in an online open-access volume.

The project, coordinated by Johannes Gutenberg University Mainz (JGU), lasted three and a half years. Other participants included the German Research Center for Artificial Intelligence in Kaiserslautern, the University of Augsburg, and the University of Surrey in the UK. The Volkswagen Foundation provided about EUR 1.5 million to finance the project, which was completed in December 2024.

Comparison of AI-based social assessment in nine countries across four continents

The open-access volume, which extends to some 300 pages, compares the status quo and the desired scenarios of AI-supported social evaluations in nine countries across four continents: Germany, Spain, Estonia, Ukraine, the USA, Nigeria, Iran, India, and China.

"The case studies make apparent the extent to which criteria for being granted state services are determined by culture- and context-related factors. Even within societies, there are very different perspectives that are subject to constant reconsideration and negotiation. This must be reflected in the production of AI technology. Therefore, it is not sufficient to develop a single, standardized AI system for social assessment in public service provision and deploy it worldwide. We need flexible, dynamic, and adaptive systems. Their development requires the participation of all social stakeholders, including vulnerable groups, to design participatory, context-sensitive, and fair AI," emphasized Professor Petra Ahrweiler of JGU's Institute of Sociology, who coordinated the AI FORA project.

The AI FORA researchers are currently preparing another publication in which they will outline the policy-relevant modeling and simulation results of the project. They aim to demonstrate how artificial intelligence can be improved to address fairness and discrimination issues in the allocation of public social services.
