OpenAI has joined the ranks of tech leaders and politicians who are pushing back against a contentious AI safety bill in California. The company contends that SB 1047, if passed, could hinder innovation, emphasizing that regulation of such magnitude should be handled at the federal level.
In a letter addressed to California State Senator Scott Wiener’s office, OpenAI voiced its concerns, warning that the bill could have far-reaching consequences for the United States' competitive edge and national security. The company expressed worry that SB 1047 could jeopardize California’s status as a global AI leader, potentially driving talent to seek better opportunities elsewhere.
Senator Wiener introduced the bill with the goal of establishing “common sense safety standards” for companies developing large-scale AI models, particularly those that surpass certain size and cost criteria. These standards would mandate the implementation of shut-down mechanisms, the exercise of “reasonable care” to prevent disastrous outcomes, and the submission of compliance statements to the California attorney general. Non-compliance could lead to lawsuits and civil penalties.
Lieutenant General John (Jack) Shanahan, a former US Air Force officer and the first director of the US Department of Defense’s Joint Artificial Intelligence Center (JAIC), supports the bill, asserting that it thoughtfully addresses the substantial risks AI poses to both civil society and national security, offering pragmatic solutions.
Similarly, Hon. Andrew C. Weber, who previously served as Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, shared concerns about national security. Weber highlighted the dangers posed by the potential theft of advanced AI systems by adversaries and commended SB 1047 for establishing crucial cybersecurity measures.
Despite these endorsements, the bill has ignited strong opposition from tech giants, startups, and venture capitalists. Critics argue that it overreaches, potentially stifling innovation in an emerging field and driving businesses out of California. OpenAI echoes these concerns, with sources indicating that the company has put plans to expand its San Francisco offices on hold due to the uncertain regulatory environment.
Senator Wiener defended the bill, pointing out that OpenAI’s letter does not specifically criticize any of its provisions. He dismissed fears of a talent exodus as baseless, noting that the law would apply to any company doing business in California, regardless of its physical location. Wiener emphasized the bill’s “highly reasonable” requirement that large AI labs test their models for catastrophic safety risks, a practice many companies have already adopted.
Critics, however, argue that requiring companies to submit model details to the government could stifle innovation. They also worry that the possibility of lawsuits could discourage smaller, open-source developers from starting new ventures. In response to these concerns, Senator Wiener recently revised the bill to remove criminal liability for non-compliance, protect smaller developers, and eliminate the proposed “Frontier Model Division.”
OpenAI continues to advocate for a comprehensive federal framework rather than state-level regulation, believing it is crucial for ensuring public safety while maintaining the United States' competitive edge against rivals like China. The company suggested that federal agencies, such as the White House Office of Science and Technology Policy and the Department of Commerce, are better suited to oversee AI risks.
While Senator Wiener acknowledges the ideal of congressional action, he remains skeptical about its feasibility. Drawing parallels with California’s data privacy law, which was passed in the absence of federal legislation, Wiener argued that California should not wait for Congress to act.
The California state assembly is expected to vote on SB 1047 later this month. If it passes, the bill will go to Governor Gavin Newsom for approval. Although Newsom's position on the legislation is not yet clear, he has publicly acknowledged the need to balance AI innovation with effective risk management.