In a recent submission to the arXiv server*, researchers proposed that the United States (US) government should mandate the implementation of Know-Your-Customer (KYC) schemes by compute providers to address security and safety risks arising from advanced artificial intelligence (AI) models.
*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.
Background
As frontier AI models advance, calls have surfaced for increased regulatory intervention by the US government. The capabilities of these models could bolster adversarial military capabilities and enable human rights violations.
To address these concerns, the US has introduced export controls limiting the transfer of the specialized AI chips essential for developing large AI models. However, gaps in these controls have become apparent: entities can still use digital channels, such as cloud computing, to access regulated chips and the processing capacity they provide. This raises concerns that hostile militaries and non-state actors could turn US technology against the US.
While a comprehensive ban on cloud access is not a viable solution, given its potential adverse effects on US technological leadership, addressing these risks from a security perspective is imperative. Simultaneously, broader concerns related to public safety and security have garnered the attention of both government and industry. Experts from various fields have pointed out the misuse risks associated with AI, including the potential spread of information on biological weapons, the amplification of misinformation, and electoral interference.
US leaders in the AI sector have committed to voluntary guidelines. The substantial computing demands of developing these models have made cloud computing a critical component of the supply chain, with the US leading the way. As the hub of both the supply chain and the AI industry, the US holds a unique position to shape regulatory approaches that maintain its influence while mitigating risks effectively. Alongside other potential regulatory measures, enhanced scrutiny of AI compute could offer earlier detection of emerging risks and more tailored responses.
KYC for compute providers
Researchers recommend the implementation of a KYC framework for AI compute providers, particularly cloud service providers (CSPs), to enhance oversight of frontier AI model development. This concept, previously proposed by Microsoft and AI researchers, would increase accountability and improve risk management. A KYC scheme, created in partnership with the AI industry, could help identify notable jumps in AI capabilities, support government oversight of AI development, and enable more granular and targeted restrictions.
The scheme could be complemented by updates to the Export Administration Regulations that limit above-threshold compute provision to businesses on the entity list. Beyond export limitations, a KYC program might encourage responsible AI development and serve as a springboard for national safety laws. To preserve the privacy of customers and compute providers, it could leverage existing technical metrics. The study suggests creating a KYC program for compute providers, taking cues from the established use of KYC in the finance industry. It recommends that the US government collaborate with the industry to:
- Establish a dynamic threshold for the scheme, effectively capturing high-risk frontier model development while minimizing impact on non-frontier AI developers.
- Set clear and technically feasible requirements for compute providers, including information gathering, fraud detection, record-keeping, and reporting of entities matching government-specified 'high-risk' profiles.
- Build capacity within the US Department of Commerce to co-design, implement, oversee, and enforce the scheme.
- Engage with international partners to foster alignment with the scheme, ensuring consistent international standards and long-term effectiveness.
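The operational obligations above (information gathering, record-keeping, and reporting of high-risk matches) can be sketched as a minimal workflow. All names, data structures, and the jurisdiction-based risk profile below are illustrative assumptions for exposition, not details from the proposal.

```python
# Minimal sketch of a KYC onboarding workflow for a compute provider:
# collect customer information, keep a record, and flag entities that
# match a government-specified 'high-risk' profile. The profile here is
# jurisdiction-based purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str
    jurisdiction: str

@dataclass
class KYCRegistry:
    # Assumed stand-in for a government-specified high-risk profile.
    high_risk_jurisdictions: set
    records: list = field(default_factory=list)

    def onboard(self, customer: Customer) -> bool:
        """Record the customer (record-keeping obligation) and return
        True if they match the high-risk profile (reporting trigger)."""
        self.records.append(customer)
        return customer.jurisdiction in self.high_risk_jurisdictions

registry = KYCRegistry(high_risk_jurisdictions={"EXAMPLESTAN"})
flagged = registry.onboard(Customer("Example Corp", "EXAMPLESTAN"))
print(flagged)  # True: this entity would be reported to the regulator
```

In practice, fraud detection and verification against authoritative identity sources would sit between information gathering and record-keeping; the sketch only shows the reporting decision.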
The introduction of this KYC scheme for advanced AI cloud computing would lay the foundation for understanding the threat landscape, including any substantial access attempts by foreign entities, and enable more targeted restrictions to prevent entities of concern from accessing resources through the cloud. Importantly, it would also enhance the government's ability to identify trends and emerging risks, directing AI policymakers' attention toward companies operating at the cutting edge for improved risk management.
The KYC obligations should apply at and beyond a threshold of advanced AI computing that captures the most critical AI risks while minimizing the regulatory burden on the industry. This threshold could be quantified in terms of total floating-point operations (FLOP) used in training and set in consultation with AI experts and compute providers to capture frontier AI models. The threshold should be dynamic, subject to regular review, and responsive to various metrics that influence AI capability and society's resilience to AI risks.
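A dynamic compute threshold of this kind reduces, in implementation terms, to a simple applicability check run against each training job. The threshold value, class names, and field names below are illustrative assumptions, not figures from the proposal.

```python
# Hypothetical sketch of a dynamic compute-threshold check for KYC
# applicability. The 1e26 FLOP figure is an assumed placeholder that a
# real scheme would set and revise through regular review.
from dataclasses import dataclass

@dataclass
class ComputeJob:
    customer_id: str
    training_flop: float  # total floating-point operations for the job

# Assumed illustrative threshold, subject to periodic review.
KYC_THRESHOLD_FLOP = 1e26

def requires_kyc(job: ComputeJob, threshold: float = KYC_THRESHOLD_FLOP) -> bool:
    """Return True if the job's total training compute meets or
    exceeds the current KYC threshold."""
    return job.training_flop >= threshold

# A frontier-scale run triggers KYC; a small fine-tune does not.
print(requires_kyc(ComputeJob("acme-labs", 5e26)))      # True
print(requires_kyc(ComputeJob("small-startup", 1e21)))  # False
```

Making the threshold a parameter, rather than a hard-coded constant, mirrors the proposal's emphasis on a dynamic value that regulators can revise as capabilities and societal resilience change.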
The enforcement of this KYC scheme should involve a government unit in the Department of Commerce responsible for AI regulatory policy. Collaboration with other relevant stakeholders and researchers should be integral to the scheme's administration and updates. A flexible enforcement mechanism, including penalties for deliberate non-compliance, should be established.
Moreover, international engagement and cooperation with key partners should shape a globally effective KYC scheme for advanced AI cloud computing. The involvement of countries with significant data center infrastructure, such as European countries and Japan, is paramount. The establishment of an intergovernmental organization for AI compute controls can facilitate risk information sharing and the alignment of standards and best practices. Collaboration with cross-border companies that can advocate for a KYC regime can further strengthen this initiative.
Conclusion
In summary, in response to growing concerns about AI, a KYC scheme fills an export control void, empowering the US government with enhanced oversight of AI. While not comprehensive, it offers a flexible approach to managing risks from cutting-edge AI models and unwanted proliferation. Scaling this internationally could aid global AI governance. Collaborating with industry and experts is crucial for minimizing harm and maximizing the benefits of AI.