In an article recently published in the journal AI, researchers explored the role of artificial intelligence (AI) in transforming the mortgage market to promote homeownership inclusivity for marginalized communities, emphasizing ethical and unbiased implementation.
Background
AI and machine learning (ML) are revolutionizing mortgage processes, with lenders employing advanced digital marketing techniques, AI-driven customer interactions, and data-driven credit assessments that allow computers to emulate human decision-making. While these advancements offer many benefits, concerns about bias and discrimination remain prevalent.
To address these concerns, the authors of the present study provide a framework to analyze the efficacy of AI technologies in the mortgage industry by exploring the role of AI in revolutionizing the mortgage market, with a focus on addressing historical barriers to homeownership for Black, Brown, and lower-income communities. They outline criteria encompassing societal, ethical, legal, and practical considerations for developing and implementing AI models in this context.
The SCALE Framework
This paper highlights several AI applications in the mortgage market, such as digital marketing, the integration of non-traditional data in credit scoring, AI-based property valuation, and loan underwriting models. The SCALE framework presented by the authors aims to evaluate the fairness and equity of AI and other technologies within the context of mortgage lending. It consists of five key criteria:
- Societal values: AI models in mortgage lending should align with prevailing legal and ethical paradigms, taking into consideration societal priorities, such as racial equity and social justice.
- Contextual integrity: The appropriateness of AI tools depends on whether they conform to contextual norms, ensuring they are suitable for the mortgage lending domain.
- Accuracy: AI models should be reliable, error-free, and accessible across all demographic groups. Accuracy also entails the absence of bias, including representation, historical, omitted variable, selection, aggregation, and measurement bias.
- Legality: It is crucial to assess whether adopting AI applications leads to negative, disparate impacts on protected classes. Lenders must provide legitimate justifications and explore less discriminatory alternatives if disparate impacts occur.
- Expanded opportunity: AI solutions should not only enhance efficiency but significantly increase access to credit, particularly for underserved or "credit-invisible" households.
The authors argue that the SCALE framework offers a comprehensive approach to addressing concerns about bias and discrimination in AI applications within the mortgage industry. It is adaptable to the dynamic nature of AI models and aligns with ethical and socially responsible AI practices.
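The article does not specify how the legality criterion's disparate-impact check would be computed in practice. One common way such checks are operationalized in fair-lending analysis is the "four-fifths rule" of thumb: if a protected group's approval rate falls below 80% of the reference group's rate, the disparity is flagged for justification or mitigation. The sketch below illustrates that calculation with hypothetical approval counts; the function name and the numbers are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of a disparate-impact screen (the "four-fifths rule"),
# one way the SCALE legality criterion might be operationalized.
# All approval counts below are hypothetical, for illustration only.

def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of group A's approval rate to group B's (reference) rate."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

# Hypothetical outcomes from an AI underwriting model:
#   protected group: 300 approvals out of 500 applications (60%)
#   reference group: 400 approvals out of 500 applications (80%)
ratio = adverse_impact_ratio(300, 500, 400, 500)
print(f"Adverse impact ratio: {ratio:.2f}")  # prints "Adverse impact ratio: 0.75"

# Under the four-fifths rule of thumb, a ratio below 0.80 flags a
# potential disparate impact that the lender would need to justify
# or address with a less discriminatory alternative.
if ratio < 0.80:
    print("Potential disparate impact: review model for alternatives")
```

A screen like this is only a first pass; as the authors note, a flagged disparity obliges the lender to provide a legitimate business justification and to search for less discriminatory alternatives, not merely to report the number.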
The authors also discuss automated valuation models (AVMs), which aim to enhance efficiency and reduce bias in property appraisals. The study further introduces the notion of "algorithmic reparation," advocating for AI techniques designed to mitigate historical disadvantages rather than merely removing bias from existing algorithms.
Realizing Social Equity in Homeownership Using SCALE
The integration of AI into mortgage origination and servicing represents a potentially transformative force in the housing industry, offering increased efficiency, cost reduction, and improved user experience. However, it carries both promise and pitfalls. One primary concern revolves around the data used to build AI models, particularly in assessing credit risk. The SCALE typology provides a framework for evaluating the suitability of various data types, guiding the industry, regulators, and policymakers in shaping regulations and privacy laws to prevent data misuse, such as using social media profiles for mortgage credit decisions.
Additionally, the use of AI models has raised significant issues regarding racial equity. Examples from other domains, such as Microsoft's chatbot learning to produce racist language and Twitter's image-cropping algorithm deprioritizing Black faces, demonstrate the potential for AI to perpetuate biases. In the mortgage context, AI could inadvertently create biased feedback loops: models that incorporate historical disparities can reproduce them, leading to ongoing unequal treatment of marginalized communities.
The SCALE framework could be a cornerstone for stakeholders in the mortgage industry, helping them design, adapt, and monitor AI tools. This approach aligns with proposed legislation, spearheaded by Senators Wyden and Booker and Representative Yvette Clarke, that aims to expand the Federal Trade Commission's enforcement powers in overseeing AI applications in housing and financial services. By addressing these issues and applying the SCALE framework, the industry can harness AI's potential while striving to uphold fair lending practices and social equity in homeownership.
Conclusion
The rapid integration of digitalization and AI in the mortgage market offers both promise and peril in addressing systemic barriers to mortgage credit. While the proposed Algorithmic Accountability Act and a reevaluation of existing regulations may help address bias and discrimination, the dynamic nature of AI model development and the resource-intensive nature of government audits raise concerns about practicality. Industry self-regulation may present a more viable alternative to direct regulatory oversight.
The goal should be to use AI to increase efficiency and eradicate discrimination and bias, all while expanding homeownership opportunities. To achieve this, AI models should be designed to account for racial effects from the outset and ensure fair and equitable treatment. Defining success metrics and monitoring tools for fairness will be crucial in advancing these goals, particularly for historically disadvantaged groups seeking homeownership.