Examining the Split the AI Bill Caused in Silicon Valley
California legislators are set to vote on a bill that could fundamentally reshape AI development in the very region producing it.
Senate Bill 1047 is poised to become the first legislation of its kind in the US. But its precedent-setting status is not the only thing garnering attention.
Silicon Valley, the epicentre of American AI innovation, is sharply divided over the bill's technical requirements, with industry figures arguing both for and against it.
Key Provisions of SB 1047
Much like the EU’s AI Act, which was Europe’s first comprehensive AI legislation, this bill aims to mandate safety for AI use and development, requiring testing for large-scale AI models.
Should it pass, developers of AI models with training costs exceeding US$100m will be required to implement safety measures, including:
- A "full shutdown" capability for potentially unsafe models
- Technical plans to address safety risks
- Annual third-party audits for compliance
- Documentation of compliance and safety incident reporting
- Potential civil penalties for violations
The lawmaker who authored the bill, Senator Wiener, defends the provisions as a "reasonable, light-touch" approach that won't impede innovation but will help address the risks associated with powerful AI technologies.
AI companies could face fines if their technology causes “critical harm”, with penalties of up to US$50,000 for a first violation and an additional US$100,000 for subsequent violations.
Split reaction in Silicon Valley
Although backlash to the bill was to be expected, the divide in opinion among industry leaders came as a surprise.
Elon Musk, owner of X and founder of xAI, the company behind the Grok generative AI system, came out in support of the bill.
“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote on X on Monday afternoon. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.”
Anthropic CEO Dario Amodei has also rallied in support of the bill, saying it is necessary to protect the public and increase transparency in the industry.
Yet Google, Meta, Microsoft and OpenAI are voicing opposition, with the latter suggesting that companies might leave California if the bill passes.
Interestingly, however, OpenAI, while opposing SB 1047, has thrown its support behind another California bill, AB 3211. This legislation would require tech companies to label AI-generated content, addressing concerns about deepfakes and misinformation, especially in the context of elections.
Simon Last, co-founder of software company Notion, and renowned AI researchers Yoshua Bengio and Geoffrey Hinton have also voiced their approval. Bengio described the legislation as "a positive and reasonable step" towards safer AI development.
The reasons for and against
The arguments in Silicon Valley for and against the bill hinge on the same trade-off: tempering risk versus stifling innovation.
Opponents argue that putting hurdles in front of AI development will deter companies not only from using the technology, but from developing it, which could compromise the US’s world-leading status in the AI arena.
It's for this reason that eight members of US Congress urged Gov. Gavin Newsom to veto SB 1047, with Representative Nancy Pelosi dubbing the measure “well-intentioned but ill informed.”
OpenAI also argues that California implementing this bill could open the floodgates to a patchwork of state-level legislation, and that regulation should instead be left to the federal government.
But two former OpenAI researchers, Daniel Kokotajlo and William Saunders, derided OpenAI’s decision to oppose it. "Sam Altman, our former boss, has repeatedly called for AI regulation," they wrote in a letter to Gov. Newsom last week. "Now, when actual regulation is on the table, he opposes it."
Supporters, however, argue that without such measures, unchecked AI development could pose existential threats, including risks to critical infrastructure and potential misuse in creating weapons of mass destruction.
Anthropic's Dario Amodei, who initially opposed the bill, came around in support following changes to the legislation, saying the bill was “substantially improved, to the point where we believe its benefits likely outweigh its costs.”
The amended bill removed the creation of a "Frontier Model Division" to police frontier models. It also dropped criminal perjury penalties for lying about models, relying instead on existing laws.
The future of AI development
As the August 31 deadline for passing the bill approaches, all eyes are on California. If approved, SB 1047, targeting the heart of US AI innovation, stands to influence global approaches to AI governance.
The unexpected support from figures like Elon Musk, coupled with the nuanced positions of companies like Anthropic, suggests that the AI industry is grappling with the complex balance between innovation and safety.
Yet whether the effects prove positive, or negative by slowing innovation and creating a patchwork of regulations for companies operating across states, remains to be seen.
AI Magazine is a BizClik brand