
Silicon Valley says California’s SB 1047 will lead to an AI disaster, even though the bill’s goal is to stop it.


Outside of science fiction films, there is no historical example of an AI system killing people or being used in a massive cyberattack. Nevertheless, some lawmakers want safeguards in place before bad actors can make that dystopian future real. A California bill known as SB 1047 tries to stop real-world disasters caused by AI systems before they happen, and it is headed for a final vote in the state senate later in August.


Although that sounds like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers, and startup founders. Plenty of AI bills are moving through legislatures nationwide, but California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here’s why, and who is saying it.

What would SB 1047 do?


In short, SB 1047 seeks to prevent large AI models from being used to cause “critical harms” against humanity.



The bill gives examples of “critical harms”: a bad actor using an AI model to create a weapon that results in mass casualties, or instructing one to orchestrate a cyberattack causing more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have caused upward of $5 billion). The bill makes developers, the companies that build the models, liable for implementing safety protocols sufficient to prevent such outcomes.


Which models and companies are subject to these regulations?


SB 1047’s rules would apply only to the world’s largest AI models: those costing at least $100 million and using 10^26 FLOPS (floating-point operations) during training. That is an enormous amount of compute, though OpenAI CEO Sam Altman has said GPT-4 cost about that much to train. The bill allows these thresholds to be raised as needed.
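To make those two thresholds concrete, here is a minimal sketch of a coverage check under the stated assumptions. The function names are hypothetical, and the ~6·N·D compute heuristic for dense transformer training is our illustration, not anything the bill defines:

```python
COST_THRESHOLD_USD = 100_000_000  # $100M minimum training cost
FLOP_THRESHOLD = 1e26             # 10^26 floating-point operations in training

def estimate_training_flops(params: float, tokens: float) -> float:
    # Common rule of thumb for dense transformer training: ~6 FLOPs per
    # parameter per token. This heuristic is our assumption, not the bill's.
    return 6.0 * params * tokens

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    # Per the article, a model is covered only if it crosses BOTH thresholds.
    return (training_cost_usd >= COST_THRESHOLD_USD
            and training_flops >= FLOP_THRESHOLD)

# Hypothetical example: a 2-trillion-parameter model trained on 10T tokens.
flops = estimate_training_flops(2e12, 1e13)  # 1.2e26 FLOPs
print(is_covered_model(150_000_000, flops))  # True
```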


Very few companies have developed public AI products large enough to meet those requirements today, but tech giants such as OpenAI, Google, and Microsoft are likely to very soon, and the trend points upward. Mark Zuckerberg has said that the next generation of Meta’s Llama will demand significantly more computational power, which could put it under SB 1047’s jurisdiction.


As for open-source models and their derivatives, the bill states that once another party spends $25 million developing or fine-tuning a derivative model, that party becomes responsible for it instead of the original developer.
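A hedged sketch of that hand-off rule, with hypothetical names; the bill sets the threshold, not this interface:

```python
FINE_TUNE_THRESHOLD_USD = 25_000_000  # derivative spend at which liability shifts

def responsible_party(derivative_spend_usd: float) -> str:
    # Below the threshold, the original developer remains responsible;
    # at or above it, the derivative developer takes over.
    if derivative_spend_usd >= FINE_TUNE_THRESHOLD_USD:
        return "derivative developer"
    return "original developer"

print(responsible_party(5_000_000))   # original developer
print(responsible_party(30_000_000))  # derivative developer
```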


The bill also mandates a safety protocol to prevent misuse of covered AI products, including an “emergency stop” button that can shut down an entire AI model. Developers must create testing procedures that address the risks posed by their AI models, and must hire third-party auditors annually to assess their AI safety practices.
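As an illustration of the “emergency stop” requirement, here is a minimal sketch of what a full-shutdown control might look like in a model-serving process. Every name here (the flag path, the functions) is hypothetical; the bill mandates the capability, not any particular implementation:

```python
import os
import sys

KILL_SWITCH_PATH = "/etc/ai-service/emergency_stop"  # hypothetical flag file

def run_model(prompt: str) -> str:
    # Stand-in for a real inference call.
    return f"(model output for: {prompt!r})"

def emergency_stop_engaged() -> bool:
    # Operators engage the stop by creating the flag file; a real system
    # might instead flip a database flag or revoke serving credentials.
    return os.path.exists(KILL_SWITCH_PATH)

def serve_request(prompt: str) -> str:
    if emergency_stop_engaged():
        # Refuse all further inference and terminate the process.
        sys.exit("emergency stop engaged: model shut down")
    return run_model(prompt)
```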

Adhering to these protocols must provide reasonable confidence that critical harms will be prevented; absolute certainty, of course, is impossible.

Who would enforce it, and how?


A new California agency, the Frontier Model Division (FMD), would oversee the rules. Every new public AI model that meets SB 1047’s thresholds must be individually certified, with a written copy of its safety protocol.


The FMD would be governed by a five-person board appointed by California’s governor and legislature, with representatives from the AI industry, the open-source community, and academia. The board would advise California’s attorney general about potential violations of SB 1047 and issue safety-practice guidance to AI model developers.


A developer’s chief technology officer must submit an annual certification to the FMD assessing the AI model’s potential risks, the effectiveness of its safety protocol, and how the company is complying with SB 1047. As with breach notifications, if an “AI safety incident” occurs, the developer must report it to the FMD within 72 hours of learning about the incident.
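To illustrate the 72-hour window, a small sketch computing the reporting deadline; the schema is invented for illustration, since the bill specifies the deadline, not a data format:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # deadline set by the bill

@dataclass
class SafetyIncident:
    description: str
    learned_at: datetime  # when the developer became aware of the incident

    def report_deadline(self) -> datetime:
        return self.learned_at + REPORTING_WINDOW

incident = SafetyIncident(
    description="hypothetical: model used in an attempted cyberattack",
    learned_at=datetime(2024, 8, 1, 9, 0, tzinfo=timezone.utc),
)
print(incident.report_deadline())  # 2024-08-04 09:00:00+00:00
```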


If a developer fails to comply with any of these provisions, SB 1047 allows California’s attorney general to bring a civil action against them. For a model that cost $100 million to train, penalties could reach up to $10 million on a first violation and $30 million on subsequent violations. That penalty rate scales as AI models become more expensive.
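Reading those figures as rates, a $100 million model implies roughly 10% of training cost for a first violation and 30% thereafter. A hedged sketch of that inferred scaling (the statute’s exact formula may differ):

```python
def max_penalty_usd(training_cost_usd: float, first_violation: bool) -> float:
    # 10% / 30% rates inferred from the $10M / $30M figures on a $100M model;
    # this is an inference from the article's numbers, not statutory text.
    rate = 0.10 if first_violation else 0.30
    return training_cost_usd * rate

print(max_penalty_usd(100_000_000, first_violation=True))   # 10000000.0
print(max_penalty_usd(100_000_000, first_violation=False))  # 30000000.0
```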

Lastly, the bill includes whistleblower protections for employees who try to disclose information about an unsafe AI model to California’s attorney general.


What do supporters say?



California State Senator Scott Wiener, the bill’s author, who represents San Francisco, told Eltrys that SB 1047 is an attempt to learn from past policy failures around social media and data privacy, and to protect citizens before harm occurs.


“We have a track record in the technology industry of being reactive rather than proactive when it comes to addressing issues,” Wiener remarked. “Why don’t we take proactive measures instead of waiting for a negative outcome? Let’s be proactive about it.”


Even if a company trains a $100 million model in Texas, or for that matter France, SB 1047 would still apply so long as the company does business in California. Wiener says Congress has done remarkably little legislating around technology over the last quarter century, so he believes California should take the lead and set a precedent here.


Wiener says he has spoken with major AI labs, including OpenAI and Meta, about SB 1047.

Renowned AI researchers Geoffrey Hinton and Yoshua Bengio have expressed support for the bill. The two belong to a camp of the AI community that has long worried about the technology’s potential for catastrophic harm, and SB 1047 would enshrine some of their recommended precautions into law. In May 2023, the Center for AI Safety penned an open letter urging the world to treat “mitigating the risk of extinction from AI” as seriously as pandemics or nuclear war.


“In the long run, this is beneficial for the industry in California and the US as a whole, as a significant safety incident could potentially hinder further progress,” stated Dan Hendrycks, the director of the Center for AI Safety, in an email to Eltrys.

Hendrycks’s own motives have come under scrutiny of late. In July, he publicly launched a startup, Gray Swan, which builds tools that help companies assess the risks of their AI systems, according to a press release. After criticism that his startup could stand to gain if the bill passes, potentially as one of the auditors SB 1047 requires developers to hire, Hendrycks divested his equity stake in Gray Swan.

“I divested in order to send a clear signal,” Hendrycks explained in an email to Eltrys. “If the billionaire VC opposition to commonsense AI safety truly wants to demonstrate their pure motives, they should consider following suit.”


What do opponents say?



A growing number of Silicon Valley players are lining up against SB 1047.


The “billionaire VC opposition” Hendrycks refers to is most likely A16Z, the venture capital firm co-founded by Marc Andreessen and Ben Horowitz, which has been a vocal critic of SB 1047. In early August, the firm’s chief legal officer, Jaikumar Ramaswamy, sent a letter to Senator Wiener arguing that the bill’s arbitrary and shifting thresholds would burden startups and chill the AI ecosystem. As AI technology advances, it will become more expensive, meaning more startups will cross the $100 million threshold and fall under SB 1047; A16Z says several of its startups already spend that much training models.

[Image: SB 1047 a16z Podcast cover. Image Credits: The a16z Podcast]


Fei-Fei Li, a prominent AI figure and widely respected Stanford researcher, voiced her concerns about SB 1047 in a Fortune column, writing that the bill would harm California’s budding AI ecosystem. Alongside her academic work, Li recently founded World Labs, a billion-dollar AI startup backed by A16Z.


She joins influential AI academics such as fellow Stanford researcher Andrew Ng, who called the bill “an assault on open source” during a speech at a Y Combinator event in July. Open-source models, like open software generally, are easy to modify and put to arbitrary, potentially harmful uses, which under the bill would expose their creators to greater risk.

In a post on X, Yann LeCun, Meta’s chief AI scientist, said SB 1047 would hurt research efforts, arguing that the bill rests on an unfounded fear of “existential risk” promoted by a handful of misguided think tanks. Meta’s Llama is one of the foremost examples of an open-source LLM.

Startups are unhappy with the bill as well. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a San Francisco hub for AI startups, worries that SB 1047 will crush his ecosystem. He argues that bad actors should be punished for causing critical harms, not the AI labs that openly develop and distribute the technology.


“The bill seems to have fundamental confusion regarding the varying levels of hazardous capability among LLMs,” Nixon remarked. In his estimation, it is quite likely that all models possess hazardous capabilities as the bill defines them.


But Big Tech, which the bill most directly targets, is panicked about SB 1047 as well. The Chamber of Progress, a trade group representing Google, Apple, Amazon, and other major tech players, released an open letter opposing the bill, arguing that SB 1047 limits free speech and pushes tech innovation out of California. Last year, Google CEO Sundar Pichai and other tech executives endorsed the idea of federal AI regulation.


It’s rare for Silicon Valley to welcome sweeping tech regulation coming out of California. Big Tech played a similar hand in 2019, when another state bill, the California Consumer Privacy Act, also threatened to change the industry. Silicon Valley lobbied against that bill, and months before it took effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead.


What comes next?


On August 15, SB 1047 heads to the California Assembly floor for a vote, along with any approved amendments. That, according to Wiener, is where bills either succeed or fail in California’s legislature. Given the strong backing from lawmakers so far, it is likely to pass.


Anthropic submitted several proposed amendments to SB 1047 in late July, which Wiener says he and California’s Senate policy committees are actively considering. Anthropic, a leading AI model developer, has expressed willingness to work with Wiener on SB 1047 despite its reservations about the current version of the bill, a move many viewed as a win for the bill.


Anthropic’s proposed changes include eliminating the FMD, reducing the attorney general’s power to sue AI developers before harm occurs, and removing the whistleblower-protections provision. Wiener says he is generally optimistic about the amendments but needs approval from several Senate policy committees before adding them to the bill.


If SB 1047 passes, it will be sent to California Governor Gavin Newsom, who will make the final decision on whether to sign it into law before the end of August. Wiener says he has not discussed the bill with Newsom and does not know his position.

The bill would not take effect immediately: the FMD is set to be formed in 2026. And even if it passes, it is very likely to face legal challenges before then, perhaps from some of the same groups speaking out against it now.
