Government scientists and artificial intelligence experts from nine countries and the European Union will gather in San Francisco after the U.S. elections to coordinate on the safe development of AI technology and on mitigating its potential dangers. President Joe Biden’s administration has announced the two-day international AI safety gathering, scheduled for November 20 and 21. The meeting is intended to build on the efforts begun at an AI Safety Summit in the UK, where delegates committed to working together to manage the risks posed by rapid advances in AI.
U.S. Commerce Secretary Gina Raimondo described the meeting as the first major working session since the UK summit and a follow-up event in South Korea, which spurred the creation of publicly backed safety institutes to advance research and testing of AI technology. Among the pressing issues expected to be discussed are the proliferation of AI-generated fake content and the difficulty of judging when an AI system becomes so capable, or so dangerous, that it requires regulation.
Raimondo stressed the need for countries to set common standards around the risks of synthetic content and the malicious use of AI by bad actors. The aim is to address those risks effectively so that the potential benefits of AI can be unlocked. San Francisco, a key hub of the current wave of generative AI technology, was chosen to host these technical collaboration sessions on safety measures as a precursor to a larger AI summit planned for February in Paris.
The San Francisco gathering will be co-hosted by the U.S. Department of Commerce and the Department of State, drawing on the expertise of newly formed national AI safety institutes in the U.S., UK, Australia, Canada, France, Japan, Kenya, South Korea, Singapore, and the European Union. Notably absent from the participant list is China, though organizers are working to bring scientists from a broader range of nations into the AI safety discussions.
Governments worldwide have pledged to safeguard against the dangers of AI, with the EU taking the lead by enacting comprehensive AI legislation that imposes strict requirements on high-risk applications. The executive order on AI that President Biden signed in October 2023 requires developers of the most powerful AI systems to share safety test results with the government and directs the Commerce Department to set safety standards for AI tools before their public release.
In a recent development, OpenAI, the maker of ChatGPT, announced its latest AI model, o1, which it shared with the national AI safety institutes in the U.S. and UK before release. The new model, capable of complex reasoning and detailed responses, poses a medium risk in the category covering weapons of mass destruction, according to the company’s own evaluation. The Biden administration has urged AI companies to rigorously test their most advanced models before deployment.
Raimondo also pointed to the need to move from today’s voluntary system of AI testing toward a more binding regulatory approach, suggesting that congressional action may be necessary. While many tech companies say they support AI regulation in principle, concerns linger about its potential impact on innovation. In California, Gov. Gavin Newsom recently signed legislation targeting political deepfakes and is weighing a bill that would regulate extremely powerful AI models whose development could pose significant risks.