OpenAI exec says California’s AI safety bill might slow progress


In a new letter, OpenAI chief strategy officer Jason Kwon argues that AI regulation should be left to the federal government. As previously reported by Bloomberg, Kwon says that a new AI safety bill under consideration in California could slow progress and cause companies to leave the state.

"A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the U.S. to lead the development of global standards," Kwon writes. "As a result, we join other AI labs, developers, experts and members of California's Congressional delegation in respectfully opposing SB 1047 and welcome the opportunity to outline some of our key concerns."

The letter is addressed to California State Senator Scott Wiener, who originally introduced SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

According to proponents like Wiener, the bill establishes standards ahead of the development of more powerful AI models: it requires precautions like pre-deployment safety testing and other safeguards, adds whistleblower protections for employees of AI labs, gives California's Attorney General the power to take legal action if AI models cause harm, and calls for establishing a "public cloud computing cluster" called CalCompute.

In a response published Wednesday evening, Wiener points out that the proposed requirements apply to any company doing business in California, whether or not it is headquartered in the state, so the argument "makes no sense." He also writes that OpenAI "...doesn't criticize a single provision of the bill" and closes by saying, "SB 1047 is a highly reasonable bill that asks large AI labs to do what they've already committed to doing, namely, test their large models for catastrophic safety risk."

The bill now awaits a final vote before heading to Governor Gavin Newsom's desk.
