Yann LeCun, the chief AI scientist at Meta, recently criticized the supporters of California’s controversial AI safety bill, SB 1047, in a bold move that highlighted the deep divisions within the AI community regarding the future of regulation. This public rebuke came on the heels of Geoffrey Hinton, known as the “godfather of AI,” endorsing the legislation. The disagreement between these two pioneers underscores the complexity of regulating a rapidly evolving technology.
SB 1047, which has been passed by California’s legislature and now awaits Governor Gavin Newsom’s signature, aims to establish liability for developers of large-scale AI models that cause catastrophic harm due to inadequate safety measures. The bill specifically targets models with training costs exceeding $100 million that operate within California, the world’s fifth-largest economy. LeCun argued that many of the bill’s supporters hold a distorted view of AI’s near-term capabilities, contending that they lack hands-on experience in the field and overestimate how quickly it is progressing. This stance directly contrasts with Hinton’s endorsement of the bill, which emphasizes the potential risks posed by powerful AI models.
The debate surrounding SB 1047 has scrambled political alliances, with unexpected supporters and opponents emerging. Despite his previous criticism of the bill’s author, State Senator Scott Wiener, Elon Musk has voiced support for the legislation. On the other hand, Speaker Emerita Nancy Pelosi, San Francisco Mayor London Breed, and several major tech companies and venture capitalists have opposed the bill. The legislation’s evolution is reflected in Anthropic’s shifting stance: after amendments were made, the company acknowledged that the bill’s benefits likely outweigh its costs.
Critics of SB 1047 argue that it could stifle innovation and disadvantage smaller companies and open-source projects. Andrew Ng, the founder of DeepLearning.AI, believes that the bill’s focus on regulating a general-purpose technology rather than its applications is a fundamental mistake. Proponents counter that the risks of unregulated AI development outweigh these concerns, and that by targeting models with training budgets exceeding $100 million, the bill primarily affects large companies capable of implementing robust safety measures.
As Governor Newsom deliberates on whether to sign SB 1047, the decision he makes could have far-reaching implications for AI development not only in California but across the United States. With the European Union already moving forward with its own AI Act, California’s stance could influence the federal approach to AI regulation in the U.S. The clash between LeCun and Hinton serves as a microcosm of the broader debate on AI safety and regulation, illustrating the challenges policymakers face in balancing safety concerns and technological progress.
As AI continues to advance rapidly, the outcome of this legislative battle in California will set a precedent for how societies navigate the complexities of powerful artificial intelligence systems. The tech industry, policymakers, and the public will all be watching Governor Newsom’s decision closely in the weeks ahead.