New AI Framework Reduces Bias and Boosts Fairness in Critical Decisions

A novel AI methodology provides rigorous confidence guarantees for decision-making models and reduces discrimination in areas such as healthcare, justice, and education, paving the way for more ethical and transparent artificial intelligence.

Research: Fair prediction sets through multi-objective hyperparameter optimization. Image Credit: Konstantin Faraktinov / Shutterstock

Researchers from the Data Science and Artificial Intelligence Institute (DATAI) of the University of Navarra (Spain) have published an innovative methodology that improves the fairness and reliability of artificial intelligence models used in critical decision-making. These decisions significantly impact people's lives and the operations of organizations in areas such as health, education, justice, and human resources.

The team, formed by researchers Alberto García Galindo, Marcos López De Castro, and Rubén Armañanzas Arnedillo, has developed a new theoretical framework that optimizes the hyperparameters of reliable machine learning models. These models are AI algorithms that make predictions transparently, with guaranteed confidence levels. In this contribution, the researchers propose a methodology for reducing inequalities related to sensitive attributes such as race, gender, or socioeconomic status.

The study appears in Machine Learning, one of the leading scientific journals in artificial intelligence and machine learning. It combines advanced prediction techniques (conformal prediction) with algorithms inspired by natural evolution (evolutionary learning). The derived algorithms offer rigorous confidence levels and ensure equitable coverage among different social and demographic groups. This new AI framework thus provides the same reliability level regardless of individuals' characteristics, supporting fair and unbiased results.
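The conformal-prediction side of the framework can be illustrated with a minimal sketch. The code below is not the authors' implementation: it runs plain split conformal prediction on synthetic data (all names and numbers are illustrative) and then measures how often the resulting prediction sets cover the true label within each group of a hypothetical sensitive attribute, which is the "equitable coverage" quantity the framework optimizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: the model's score for the true class,
# plus a binary sensitive attribute (group 0 vs group 1).
n_cal, n_test = 500, 500
cal_scores = rng.uniform(0.3, 1.0, n_cal)    # calibration-set confidence in true label
test_scores = rng.uniform(0.3, 1.0, n_test)  # test-set confidence in true label
test_group = rng.integers(0, 2, n_test)      # sensitive attribute of each test point

alpha = 0.1  # target miscoverage: sets should contain the truth ~90% of the time

# Split conformal prediction: nonconformity = 1 - score of the true class.
# The threshold qhat is the finite-sample-corrected (1 - alpha) quantile.
nonconformity = 1.0 - cal_scores
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(nonconformity, q_level, method="higher")

# A test point's prediction set contains the true label whenever its
# nonconformity (1 - score) falls below the calibrated threshold qhat.
covered = (1.0 - test_scores) <= qhat

print(f"overall coverage: {covered.mean():.3f}")
for g in (0, 1):
    print(f"group {g} coverage: {covered[test_group == g].mean():.3f}")
```

Because the synthetic scores are drawn identically for both groups, coverage is close to 90% in each; on real data, per-group coverage can diverge, and closing that gap is what the multi-objective search targets.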

"The widespread use of artificial intelligence in sensitive fields has raised ethical concerns due to possible algorithmic discriminations," explains Armañanzas Arnedillo, principal investigator of DATAI at the University of Navarra. "Our approach enables businesses and public policymakers to choose models that balance efficiency and fairness according to their needs, or responding to emerging regulations. This breakthrough is part of the University of Navarra's commitment to fostering a responsible AI culture and promoting ethical and transparent use of this technology."

Application in real scenarios

The researchers tested the method on four benchmark datasets with different characteristics from real-world domains: economic income, criminal recidivism, hospital readmission, and school applications. The results showed that the new prediction algorithms significantly reduced inequalities without compromising the accuracy of the predictions. "In our analysis, we found, for example, striking biases in the prediction of school admissions, revealing a significant lack of fairness based on family financial status," notes Alberto García Galindo, DATAI predoctoral researcher at the University of Navarra and first author of the paper. "In turn, these experiments demonstrated that, on many occasions, our methodology manages to reduce such biases without compromising the model's predictive ability. Specifically, with our model, we found solutions in which discrimination was practically eliminated while prediction accuracy was maintained." The methodology offers a 'Pareto front' of optimal algorithms, "which allows us to visualize the best available options according to priorities and to understand, for each case, how algorithmic fairness and accuracy are related."
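The notion of a Pareto front can be made concrete with a small sketch. The snippet below is illustrative only (the candidate models and their scores are invented): it keeps exactly those candidates that no other candidate beats on both objectives at once, here higher accuracy and a lower fairness gap between groups.

```python
# Hypothetical candidate models, each described by an accuracy and a
# fairness gap (e.g. coverage difference between groups). All values invented.
candidates = [
    {"name": "m1", "accuracy": 0.91, "fairness_gap": 0.08},
    {"name": "m2", "accuracy": 0.89, "fairness_gap": 0.02},
    {"name": "m3", "accuracy": 0.86, "fairness_gap": 0.01},
    {"name": "m4", "accuracy": 0.88, "fairness_gap": 0.05},  # dominated by m2
]

def dominates(a, b):
    """a dominates b if it is at least as good on both objectives
    (higher accuracy, lower gap) and strictly better on at least one."""
    return (a["accuracy"] >= b["accuracy"]
            and a["fairness_gap"] <= b["fairness_gap"]
            and (a["accuracy"] > b["accuracy"]
                 or a["fairness_gap"] < b["fairness_gap"]))

# The Pareto front: candidates not dominated by any other candidate.
pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]

print([c["name"] for c in pareto])  # → ['m1', 'm2', 'm3']
```

Each point on the front is a defensible choice: moving along it trades accuracy for fairness, and a decision-maker picks the point matching their priorities or regulatory constraints.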

According to the researchers, this innovation has vast potential in sectors where AI must support reliable and ethical critical decision-making. García Galindo points out that their method "not only contributes to fairness but also enables a deeper understanding of how the configuration of models influences the results, which could guide future research in the regulation of AI algorithms." The researchers have made the code and data from the study publicly available to encourage further research applications and transparency in this emerging field.
