Rethinking AI Governance: A Bold Plan for Equity and Accountability

A new AI governance framework prioritizes equity from the start, ensuring fairness and accountability in AI development. By tackling bias and promoting inclusivity, this approach seeks to protect marginalized communities while fostering responsible AI innovation.

Determinants of Socially Responsible AI Governance. Image Credit: Aree_S / Shutterstock

Approaches to regulating artificial intelligence (AI) vary internationally, from creation to deployment and use in practice. In an article published on Jan. 27 in the Duke Technology Law Review, Daryl Lim, Penn State Dickinson Law associate dean for research and innovation, H. Laddie Montague Jr. Chair in Law, and co-hire of the Penn State Institute for Computational and Data Sciences (ICDS), has proposed an "equity by design" framework to better govern the technology and protect marginalized communities from potential harm.

According to Lim, responsibly governing AI is crucial to maximizing the benefits of these systems and minimizing their potential harms, which disproportionately impact underrepresented individuals. Governance frameworks help align AI development with societal values and ethical standards within specific regions while also assisting with regulatory compliance and promoting standardization across the industry.

Lim, who is also a consultative member of the United Nations Secretary-General's High-Level Advisory Body on Artificial Intelligence, addressed this need and how socially responsible AI governance may impact marginalized communities in the published article.

Lim spoke about AI governance and his proposed framework in the following Q&A. 

Q: What does socially responsible AI mean? Why is it important? 

Lim: Being socially responsible with AI means developing, deploying, and using AI technologies in ethical, transparent, and beneficial ways. This ensures that AI systems respect human rights, uphold fairness, and do not perpetuate biases or discrimination. This responsibility extends to accountability, privacy protection, inclusivity, and environmental considerations. It's important because AI has a significant impact on individuals and communities. By prioritizing social responsibility, we can mitigate risks such as discrimination, bias, and privacy invasions, build public trust, and ensure that AI technologies contribute positively to the world. By incorporating social responsibility into AI governance, we can foster innovation while protecting the rights and interests of all stakeholders.

Q: How would you explain the "equity by design" approach to AI governance? 

Lim: Equity by design means embedding equity principles throughout the AI lifecycle in the context of justice and how AI affects marginalized communities. AI has the potential to improve access to justice, particularly for marginalized groups. Suppose someone who does not speak English is seeking assistance and has access to a smartphone with chatbot functionality. They can input questions in their native language and get the generalized information they need to get started.

There are also risks, such as perpetuating biases and increasing inequality, which I call the algorithmic divide. In this case, the algorithmic divide refers to disparities in access to AI technologies and education about these tools. This includes differences between individuals, organizations, or countries in their ability to develop, implement, and benefit from AI advancements. We also need to be aware of biases that can be introduced, even unintentionally, by the data these systems are trained on or by the people training the systems.

Q: What is the goal of this approach to AI governance? 

Lim: The overarching goal of this work is to shift the focus from reactive to proactive governance by proposing an equity-centered approach that includes transparency and tailored regulation. The article seeks to address the structural biases in AI systems and the limitations of existing frameworks, advocating for a comprehensive strategy that balances innovation with robust safeguards. The research explores how AI can both improve access to justice and entrench biases. This approach aims to provide a roadmap for policymakers and legal scholars to navigate the complexities of AI while ensuring that technological advancements align with broader societal values of equity and the rule of law. 

Q: What solutions do you suggest to further an equitable approach to AI?

Lim: The solution, in part, lies in equity audits. How do we, by design, ensure that there are checks and balances on the people creating the system before an algorithm is released? The people who select the data may be biased, and that can entrench inequalities, whether the bias manifests as racial, gender, or geographical bias. One solution could be hiring a broad group of people who are aware of different biases and can call out unconscious bias; another is having third parties examine how systems are implemented and provide feedback to improve outcomes.
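The article does not prescribe a specific audit procedure, but a pre-release equity check of the kind Lim describes can be made concrete. The following minimal Python sketch is a hypothetical illustration, not a method from the article: it compares each group's favorable-outcome rate against the best-off group and flags any group falling below the conventional four-fifths rule of thumb. The function name, sample data, and threshold are all assumptions for illustration.

```python
from collections import defaultdict

def equity_audit(outcomes, groups, threshold=0.8):
    """Compare each group's favorable-outcome rate against the best-off
    group and flag groups whose ratio falls below `threshold` (the
    conventional four-fifths rule of thumb). Illustrative sketch only."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for decision, group in zip(outcomes, groups):
        totals[group] += 1
        favorable[group] += decision  # decision is 1 (favorable) or 0

    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Disparate-impact ratio of each group relative to the best-off group.
    flagged = {g: rate / best for g, rate in rates.items()
               if best > 0 and rate / best < threshold}
    return rates, flagged

# Hypothetical pre-release audit data: binary model decisions and the
# demographic group of each affected individual.
decisions   = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0]
demographic = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B", "B", "B"]

rates, flagged = equity_audit(decisions, demographic)
print(rates)    # {'A': 0.75, 'B': 0.25}
print(flagged)  # {'B': 0.33...} -- group B fails the four-fifths check
```

In practice, a check like this would be one safeguard among many: selection rates say nothing about error rates, calibration, or the representativeness of the underlying data, which is why Lim also points to diverse teams and third-party review.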

The article also examines the normative impact on the rule of law, which in this case involves assessing whether our current legal frameworks adequately address these challenges or whether reforms are necessary to uphold the rule of law in the age of AI. Emerging technologies like AI can influence fundamental principles and values that underpin our legal system. These include considerations of fairness, justice, transparency, and accountability. AI technologies can challenge existing legal norms by introducing new complexities in decision-making processes, potentially affecting how laws are interpreted and applied. 

Q: What observations further demonstrate the importance of an equity-centered approach to AI governance? 

Lim: In September, the "Framework Convention on Artificial Intelligence" was signed by the United States and the European Union (EU). This AI treaty was a significant milestone in establishing a global framework to ensure that AI systems respect human rights, democracy, and the rule of law. The treaty specifies a risk-based approach, requiring more oversight of high-risk AI applications in sensitive sectors such as health care and criminal justice. The treaty also details how different jurisdictions, specifically the U.S., the EU, China, and Singapore, take different approaches to AI governance: the U.S. is more market-based; the EU is rights-based; China follows a command economy model; and Singapore follows a soft law model, which serves as a framework rather than enforceable regulatory obligations. The treaty emphasizes the importance of global collaboration in addressing challenges across AI governance approaches.

My proposed framework embeds the principles of justice, equity, and inclusivity throughout AI's lifecycle, aligning with the treaty's overarching goals. While the equity-by-design framework does not focus on post-implementation protections, it emphasizes that AI should advance human rights for marginalized communities and that audits should be more transparent and protective.
