Regulating Advanced AI: The Role of KYC Schemes in Ensuring Safety

In a recent submission to the arXiv server*, researchers proposed that the United States (US) government should mandate the implementation of Know-Your-Customer (KYC) schemes by compute providers to address security and safety risks arising from advanced artificial intelligence (AI) models.

Study: Regulating Advanced AI: The Role of KYC Schemes in Ensuring Safety. Image credit: Generated using DALL.E.3

*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as conclusive, used to guide clinical practice or health-related behavior, or treated as established information.

Background

Given the emerging risks associated with the advancement of frontier AI models, calls for increased regulatory intervention by the US government have surfaced. These AI capabilities have the potential to bolster adversarial military capabilities and enable human rights violations.

To address these concerns, the US has introduced export controls limiting the transfer of specialized AI chips essential for the development of large AI models. However, vulnerabilities within these controls have become apparent: entities remain free to use digital channels, such as cloud computing, to access regulated chips and the processing capacity linked to them. This raises concerns that hostile militaries and non-state actors could use US technology against US interests.

While a comprehensive ban on cloud access is not a viable solution, given its potential adverse effects on US technological leadership, addressing these risks from a security perspective is imperative. Simultaneously, broader concerns related to public safety and security have garnered the attention of both government and industry. Experts from various fields have pointed out the misuse risks associated with AI, including the potential spread of biological weapons information, the amplification of misinformation, and electoral interference.

US leaders in the AI sector have committed to voluntary guidelines. As the hub of both the AI industry and its supply chain, the US holds a unique position to shape regulatory approaches that maintain its influence while mitigating risks effectively. The substantial demand for computing power in developing these models has made cloud computing a critical component of the supply chain, with the US leading the way. Alongside other potential regulatory measures, enhanced scrutiny of AI compute could enable earlier detection of emerging risks and more tailored responses.

KYC for compute providers

Researchers recommend the implementation of a KYC framework for AI compute providers, particularly cloud service providers (CSPs), to enhance oversight of frontier AI model development. This concept, previously proposed by Microsoft and AI researchers, would increase accountability and improve risk management. A KYC scheme, created in partnership with the AI industry, could identify notable jumps in AI capabilities, support government oversight and inform future AI legislation, and facilitate more granular and focused restrictions.

This scheme could be complemented by updates to the Export Administration Regulations that restrict above-threshold compute provision to businesses on the Entity List. Beyond export limitations, a KYC program might also encourage responsible AI development and serve as a springboard for national safety laws. To preserve privacy for customers and compute providers, it could leverage existing technical metrics. The current study suggests creating a KYC program for compute providers, taking cues from the established use of KYC in the finance industry. It recommends that the US government collaborate with the industry to:

  • Establish a dynamic threshold for the scheme, effectively capturing high-risk frontier model development while minimizing impact on non-frontier AI developers.
  • Set clear and technically feasible requirements for compute providers, including information gathering, fraud detection, record-keeping, and reporting of entities matching government-specified 'high-risk' profiles (a hypothetical sketch of such a record and screening check follows this list).
  • Build the capacity of the US Department of Commerce, as the responsible government organization, to co-design, implement, oversee, and enforce the scheme.
  • Engage with international partners to foster alignment with the scheme, ensuring consistent international standards and long-term effectiveness.
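To make the record-keeping and screening requirement above more concrete, the following Python sketch illustrates one way a compute provider might represent a KYC customer record and check it against regulator-specified high-risk criteria. All field names, list contents, and screening rules here are hypothetical illustrations, not elements defined in the study.

```python
from dataclasses import dataclass

# Hypothetical KYC record a compute provider might keep for each customer.
# Field names and risk criteria are illustrative, not taken from the study.
@dataclass
class CustomerRecord:
    legal_name: str
    jurisdiction: str              # country of incorporation
    beneficial_owners: list[str]   # ultimate owners identified during onboarding
    declared_use: str              # customer-stated purpose of the compute
    flagged: bool = False

# Placeholder screening lists a regulator might specify (e.g., the Entity List).
ENTITY_LIST = {"Example Restricted Corp"}
SANCTIONED_JURISDICTIONS = {"Example Country"}

def screen_customer(record: CustomerRecord) -> list[str]:
    """Return the reasons, if any, that a customer matches a high-risk profile."""
    reasons = []
    if record.legal_name in ENTITY_LIST:
        reasons.append("legal name matches the Entity List")
    if any(owner in ENTITY_LIST for owner in record.beneficial_owners):
        reasons.append("a beneficial owner matches the Entity List")
    if record.jurisdiction in SANCTIONED_JURISDICTIONS:
        reasons.append("jurisdiction is subject to restrictions")
    record.flagged = bool(reasons)
    return reasons
```

In practice, a provider would report flagged matches to the designated government unit rather than merely recording them locally.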

The introduction of this KYC scheme for advanced AI cloud computing would lay the foundation for understanding the threat, including substantial access attempts by foreign entities of concern, and enable more targeted restrictions to prevent such entities from accessing resources through the cloud. Importantly, it would also enhance the government's ability to identify trends and emerging risks, guiding AI policymakers toward companies operating at the cutting edge for improved risk management.

The KYC obligations should apply at and beyond a threshold of advanced AI computing that captures the most critical AI risks while minimizing the regulatory burden on the industry. This threshold could be quantified in terms of total floating-point operations (FLOP) used in training and set in consultation with AI experts and compute providers to capture frontier AI models. The threshold should be dynamic, subject to regular review, and responsive to various metrics that influence AI capability and society's resilience to AI risks.
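As a rough illustration of how such a threshold might operate, the sketch below checks whether a training job pushes a customer's cumulative compute usage over a configurable FLOP threshold. The 1e26 FLOP figure and the function itself are placeholders for illustration only, not values or mechanisms taken from the study.

```python
# Hypothetical, illustrative check of whether a training job pushes a customer
# over a dynamic compute threshold; the 1e26 FLOP figure is a placeholder only.
TRAINING_COMPUTE_THRESHOLD_FLOP = 1e26  # reviewed and revised periodically

def crosses_threshold(cumulative_flop: float, new_job_flop: float,
                      threshold: float = TRAINING_COMPUTE_THRESHOLD_FLOP) -> bool:
    """Return True if this job takes the customer's total compute over the threshold."""
    return cumulative_flop < threshold <= cumulative_flop + new_job_flop

# Example: 4e25 FLOP of prior usage plus a 7e25 FLOP training job crosses 1e26 FLOP,
# which would trigger the provider's enhanced KYC and reporting obligations.
if crosses_threshold(4e25, 7e25):
    print("Customer now falls within the scheme's scope; KYC obligations apply.")
```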

The enforcement of this KYC scheme should involve a government unit in the Department of Commerce responsible for AI regulatory policy. Collaboration with other relevant stakeholders and researchers should be integral to the scheme's administration and updates. A flexible enforcement mechanism, including penalties for deliberate non-compliance, should be established.

Moreover, international engagement and cooperation with key partners should shape a globally effective KYC scheme for advanced AI cloud computing. The involvement of countries with significant data center infrastructure, such as European countries and Japan, is paramount. The establishment of an intergovernmental organization for AI compute controls can facilitate risk information sharing and the alignment of standards and best practices. Collaboration with cross-border companies that can advocate for a KYC regime can further strengthen this initiative.

Conclusion

In summary, in response to growing concerns about AI, a KYC scheme fills an export control void, empowering the US government with enhanced oversight of AI. While not comprehensive, it offers a flexible approach to managing risks from cutting-edge AI models and unwanted proliferation. Scaling this internationally could aid global AI governance. Collaborating with industry and experts is crucial for minimizing harm and maximizing the benefits of AI.



Written by

Dr. Sampath Lonka

Dr. Sampath Lonka is a scientific writer based in Bangalore, India, with a strong academic background in Mathematics and extensive experience in content writing. He has a Ph.D. in Mathematics from the University of Hyderabad and is deeply passionate about teaching, writing, and research. Sampath enjoys teaching Mathematics, Statistics, and AI to both undergraduate and postgraduate students. What sets him apart is his unique approach to teaching Mathematics through programming, making the subject more engaging and practical for students.

