LONDON — Advanced artificial intelligence (AI) systems could pose severe risks, including massive job displacement, the facilitation of terrorism, and the possibility of systems operating beyond human control, according to an unprecedented international report released Wednesday. The International Scientific Report on the Safety of Advanced AI arrives ahead of a major AI summit scheduled for next month in Paris and is backed by 30 nations, including both the United States and China, a rare instance of cooperation between the two powers amid their competition for AI leadership. The report follows a recent breakthrough by Chinese startup DeepSeek, which developed an inexpensive chatbot despite U.S. restrictions on exports of advanced technology to China.
The report, produced by independent experts, aims to consolidate a large body of research to guide policymakers as they seek to establish regulatory frameworks for the fast-evolving technology. Yoshua Bengio, a renowned AI scientist who led the effort, underscored the scale of the challenges advanced AI presents. The report draws attention to how quickly AI capabilities have advanced: systems that struggled with basic tasks only a few years ago are now proficient at writing computer code, generating lifelike images, and holding sustained conversations.
Many risks associated with AI, such as deepfakes, fraud, and biased outputs, are already well recognized, but the report warns that "as general-purpose AI continues to evolve, additional risks are increasingly becoming apparent," with risk management strategies still in their infancy. Fresh warnings about artificial intelligence have also come from the Vatican and from the Bulletin of the Atomic Scientists, the group behind the Doomsday Clock, reinforcing the urgency of the concerns.
The report focuses on general-purpose AI, typified by versatile chatbots such as OpenAI's ChatGPT, and identifies three main categories of risk: malicious use, technical malfunctions, and widespread systemic threats. Bengio noted that the roughly 100 experts who contributed to the report do not agree on what to expect from the technology. A chief point of contention in the AI research community is the timeline for when AI may surpass human capabilities at various tasks, and what would follow if it does.
"There are differing opinions on potential future scenarios," Bengio said, stressing that no one can foresee the outcome with certainty. Some scenarios point toward substantial benefits; others are alarming. Policymakers and the public alike, he argued, need to grasp that uncertainty and adjust their thinking accordingly.
The report details how AI could facilitate illicit activities, such as the development of biological or chemical weapons, by generating step-by-step plans, though it cautions that the practical feasibility of weaponizing and deploying such agents remains uncertain. The report also anticipates that general-purpose AI will significantly reshape the labor market and displace workers: some studies predict AI could create new job opportunities, while others warn of declining wages and employment rates, and the long-term impact remains unpredictable.
Another pressing concern is the potential for AI systems to slip out of human control, whether because they actively undermine human oversight or because human operators gradually disengage from them, according to the report's findings. Compounding these risks is how little developers understand about the inner workings of their own models, which makes risk management particularly challenging.
The report grew out of the inaugural global summit on AI safety, hosted by the U.K. at Bletchley Park in November 2023, where nations committed to working together to mitigate potentially catastrophic risks. A follow-up meeting in South Korea produced pledges from AI companies to prioritize safety measures, along with support from world leaders for establishing a network of public AI safety institutes.
Endorsed by the United Nations and the European Union, the report is intended to remain relevant through changes of government, such as the recent presidential transition in the U.S. That shift could alter the American approach to AI safety, as President Trump has sought to reshape policies put in place by former President Biden. Even so, his administration has refrained from disbanding the AI Safety Institute established under Biden, which forms part of a growing international network.
In February, world leaders, technology executives, and civil society representatives will reconvene at the Paris AI Action Summit, where they aim to sign a joint declaration on AI development and commit to sustainable technology practices. Bengio stressed that the report does not prescribe how to assess particular risks or which to prioritize; rather, it distills the existing scientific literature on AI into a form accessible to decision-makers. "We must enhance our understanding of the systems we are creating and the associated risks to make informed decisions moving forward," he concluded.