In a study published in the AIS eLibrary (AISeL), researchers conducted a systematic literature review categorizing critiques of and challenges to the proposed European Artificial Intelligence Act (AIA). The AIA aims to regulate the development and deployment of AI technology across sectors according to the potential harms involved.
This pioneering governance attempt warrants analysis as AI becomes deeply embedded in society. The interdisciplinary Information Systems (IS) field is increasingly engaging with AI and its societal dimensions. Reviewing the issues identified with the AIA therefore points to impactful directions for IS research on regulation while the policy process remains open.
The Expanding Governance of AI
Because they learn and make decisions with a degree of independence, AI technologies such as machine learning systems raise legitimate concerns about accountability, transparency, and the perpetuation of unintended biases. Appropriate regulatory frameworks are therefore essential for governing AI deployment in line with societal values and preventing detrimental outcomes.
The European Commission proposed the cross-sectoral AIA in 2021 as a pioneering attempt to govern these varied risks. The regulation classifies AI systems into four categories based on the level of potential individual or societal harm, and it sets out corresponding obligations for developers and professional institutional users of high-risk systems regarding transparency, testing, data quality, documentation, and human oversight. The AIA remains under debate among European policymaking institutions, with adoption expected by 2024.
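To make the tiered structure concrete, the following is a minimal illustrative sketch, not part of the reviewed study: it encodes the four risk categories and the high-risk obligations as a simple mapping. The category labels follow the Commission's 2021 proposal, and the obligation list mirrors the summary above rather than the legal text.

```python
# Illustrative sketch of the AIA's risk-based structure as summarized above.
# Category labels follow the Commission's 2021 proposal; the obligation list
# mirrors this article's summary, not the legal text itself.
from enum import Enum


class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # permitted, subject to obligations
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # largely unregulated


# Obligations attached to high-risk systems, as named in this summary.
HIGH_RISK_OBLIGATIONS = [
    "transparency",
    "testing",
    "data quality",
    "documentation",
    "human oversight",
]


def obligations_for(category: RiskCategory) -> list[str]:
    """Return the (simplified) obligations for a given risk category."""
    if category is RiskCategory.UNACCEPTABLE:
        return ["prohibited"]
    if category is RiskCategory.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if category is RiskCategory.LIMITED:
        return ["transparency"]
    return []


if __name__ == "__main__":
    print(obligations_for(RiskCategory.HIGH))
```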
The IS field’s growing attention to the societal dimensions of AI, including ethics and responsibility, makes the AIA deliberations a valuable object of study for research that aims to guide responsible innovation amid rapid technological advancement. Analyzing critiques of and challenges to the regulation before the policy is formalized is therefore crucial.
Previous IS Research on Regulation
The IS literature examines legal regulation extensively, but most empirical investigations concentrate on the stages that follow policy adoption. For instance, numerous studies analyze enforcement issues such as organizational compliance, while others evaluate regulatory outcomes and industry impacts. In contrast, pre-adoption policy formulation processes such as stakeholder negotiations receive relatively little attention, even though they significantly shape subsequent technology development and use.
Additionally, earlier inquiries rarely combine an emerging technology like AI with a still-evolving regulation, a combination uniquely evident in the AIA as it moves through European policymaking. This agenda-setting and decision-making period allows analysis that can guide appropriate governance. Furthermore, while IS scholarship increasingly prioritizes societal and ethical issues such as the implications of AI, its engagement with the multi-level policy dimensions of technology regulation remains underexplored.
Ambiguity and Standard-Setting
Because the AIA’s broad scope spans contexts, target actors, and AI techniques, some terminological ambiguity seems inevitable. However, excessive uncertainty risks undermining the legal clarity required for compliance and enforcement. One institutional opportunity for reducing ambiguity is harmonized standards that translate abstract requirements into specific technical expectations. Analyzing how standard-setting intermediaries such as accreditation agencies facilitate regulatory compliance amid this ambiguity therefore offers crucial insights. Further questions include the technical obstacles to legal interpretation and how developers and industrial adopters exploit ambiguities in different ways.
Multi-Level Policy
The AIA involves complex consultations among European policy formulators as well as country-specific processes in which ideal governance arrangements are negotiated. The discourses arising in public comments warrant closer discourse analysis. Comparing sequential versions of the AIA allows examination of contested language and the shifting priorities of the regulatory, industry, and civil society stakeholders involved. As rapid AI progress redefines policy problems, tracking formulation agendas over iterative cycles can flag emerging issues earlier for IS research attention. This expanded purview captures technology-related regulatory intricacies that linear post-adoption studies miss.
Legal governance intrinsically comprises interlinked agenda-setting, policy adoption, execution, and evaluation stages. Examining these stages in isolation, as prior work has largely done, obstructs inter-stage knowledge flows about crafting effective regulation and preparing industry. Integrated longitudinal tracking would instead examine organizational responses and shifts in research discourse across multiple phases. For instance, studying how firms adjust during the transition period and comparing early with emergent AI applications enables targeted support and responsiveness amid unpredictability. Such a temporally attentive, holistic understanding of regulation informs IS scholarship on guiding AI development and deployment.
Future Outlook
The review identified regulation challenges centered on problematic policy premises, a narrow application scope that omits key sectors, and terminological ambiguity that enables conflicting legal interpretations. Additional significant gaps included inadequate procedural guidance, impractical technical requirements, and conflicts with existing laws. Realizing the AIA’s intended objectives would require addressing these flaws in its formulation.
Furthermore, the review recognized significant compliance and enforcement difficulties that could obstruct effective execution. These included the lack of standards needed for precise conformity assessments, persistent legal uncertainty around opaque requirements, unrealistic expectations for training data veracity and privacy protection, and convoluted lines of accountability. Tackling these barriers remains imperative for viable AIA adoption.
Finally, the review highlighted unintended adverse impacts warranting redress: hampering much-needed AI innovation through excessive regulatory burdens, disproportionately affecting smaller technology providers that lack ample resources, and enabling imbalanced provider influence if self-declared transparency suffices. Moreover, inadequate mechanisms for empowering consumers to uphold their rights and the neglect of sustainability considerations demand urgent attention.
Analyzing these varied critiques ultimately signals vital directions for IS research during the ongoing policy deliberations. Such timely guidance can help improve the formulation so that AI progress and governance are balanced responsibly. Bridging the identified gaps between the AIA’s aspirations and practical realities remains essential for actualizing its intended benefits equitably and for advancing AI regulation more broadly.
As potentially transformational technologies like AI progress rapidly, urgent IS engagement with real-time regulatory developments remains vital, going beyond after-the-fact observational assessments. Actively tracking and informing policymaking processes allows the field to help shape governance that manages risks and channels progress equitably. As the warnings in this AIA review suggest, mandating legal compliance without enabling ecosystem support mainly breeds unintended, reactive consequences. Highlighting regulatory gaps for research and industry intervention before institutional formalization is therefore essential to guide the inevitable infusion of AI responsibly.