Governments Must Act Fast: AI Report Highlights Growing Dangers


As AI systems become more autonomous and powerful, the groundbreaking report highlights emerging risks—cyber threats, disinformation, and loss of control—urging policymakers to act swiftly.

Image Credit: Lightspring / Shutterstock

The first international report on artificial intelligence safety, led by Université de Montréal computer science professor Yoshua Bengio, was released today. It promises to serve as a guide for policymakers worldwide.

The report was announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and an interim version was released in May 2024 at the AI Seoul Summit. Consolidating leading international expertise on AI and its risks, the final version builds on new developments and scientific insights and is intended to guide discussions at the AI Action Summit in Paris in February 2025.

Bengio, founder and scientific director of the UdeM-affiliated Mila—Quebec AI Institute, led a team of 96 international experts in drafting the report, supported by the United Kingdom's Department for Science, Innovation and Technology.

The experts were drawn from 30 countries as well as international organizations including the U.N., the European Union, and the OECD. Their report will help inform discussions next month at the AI Action Summit in Paris, France, and serve as a global handbook on AI safety for policymakers.

Towards a common understanding

The most advanced AI systems can now write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on par with human Ph.D.-level experts on tests in biology, chemistry, and physics.

The AI Safety Report, published today, warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting to pursue a goal. Major AI companies are investing heavily in AI agents that can operate independently, raising new challenges for risk management and oversight.

As policymakers worldwide grapple with the rapid and unpredictable advancement of AI, the report helps bridge the gap between technological change and policy by offering a scientific understanding of emerging risks to guide decision-making.

The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.

According to the report, several areas require urgent research attention, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably. While some experts expect AI capabilities to grow at a steady pace, others warn that sudden breakthroughs could accelerate progress unpredictably, posing new risks.

Three distinct categories of AI risks are identified:

  • Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, AI-driven disinformation campaigns, and even the development of biological weapons.

  • System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems. Although experts disagree on the likelihood of loss of control, the report notes that some researchers believe it could become a serious risk within the next several years.

  • Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, market concentration risks, and environmental impacts.

The report emphasizes the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop rapidly.

While mitigating the risks of general-purpose AI still presents many challenges, the report highlights promising areas for future research and concludes that progress can be made.

Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future.

"The capabilities of general-purpose AI have increased rapidly in recent years and months," said Bengio. "While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide."

"This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks, and possible mitigations."
