SACRAMENTO, Calif. (AP) — California lawmakers voted to advance legislation Tuesday that would require artificial intelligence companies to test their systems and add safety measures to prevent them from being potentially manipulated to wipe out the state’s electric grid or help build chemical weapons — scenarios that experts say could be possible in the future as technology evolves at warp speed.
The first-of-its-kind bill aims to reduce risks created by AI. It is fiercely opposed by venture capital firms and tech companies, including Google and Meta, the parent company of Facebook and Instagram. They say the regulations target developers when they should instead focus on those who use and exploit AI systems to cause harm.
Democratic state Sen. Scott Wiener, who authored the bill, said the proposal would provide reasonable safety standards by preventing “catastrophic harms” from extremely powerful AI models that may be created in the future.
The requirements would only apply to systems that cost more than $100 million in computing power to train. No current AI models have hit that threshold as of July.
Wiener slammed the opposition campaign at a legislative hearing Tuesday, saying it had spread inaccurate information about his measure. His bill doesn’t create new criminal charges for AI developers whose models are exploited to create societal harm, so long as they have tested their systems and taken steps to mitigate risks, Wiener said.
“This bill is not going to send any AI developers to prison,” Wiener said. “I would ask folks to stop making that claim.”
Under the bill, only the state attorney general could pursue legal actions in case of violations.
Democratic Gov. Gavin Newsom has touted California as an early AI adopter and regulator, saying the state could soon deploy generative AI tools to address highway congestion, make roads safer and provide tax guidance. At the same time, his administration is considering new rules against AI discrimination in hiring practices. He declined to comment on the bill but has warned that overregulation could put the state in a “perilous position.”
A growing coalition of tech companies argues the requirements would discourage companies from developing large AI systems or from keeping their technology open-source.
“The bill will make the AI ecosystem less safe, jeopardize open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation,” Rob Sherman, Meta vice president and deputy chief privacy officer, wrote in a letter sent to lawmakers.
Opponents want to wait for more guidance from the federal government. Proponents of the bill said California cannot wait, citing the hard lessons learned from not acting soon enough to rein in social media companies.
The proposal, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices.
State lawmakers were also considering Tuesday two ambitious measures to further protect Californians from potential harms from AI. One would fight automation discrimination when companies use AI models to screen job resumes and rental apartment applications. The other would prohibit social media companies from collecting and selling data of people under 18 years old without their or their guardians’ consent.