In a significant move to regulate powerful artificial intelligence, New York state lawmakers have passed the RAISE Act, a first-of-its-kind bill aimed at preventing catastrophic consequences caused by advanced AI systems. If signed into law, this legislation would mark the first legally binding set of transparency standards for “frontier AI” models in the United States.
The RAISE Act, short for the Responsible AI Safety and Education Act, targets frontier AI models developed by major players like OpenAI, Google, and Anthropic. It sets clear guardrails to prevent scenarios in which AI systems contribute to mass-scale disasters, such as events that kill or injure more than 100 people or cause more than $1 billion in damages.
A Win for the AI Safety Movement
The bill represents a major victory for advocates of AI safety, including world-renowned experts Geoffrey Hinton and Yoshua Bengio, who have long warned about the risks of unchecked AI development. With technology evolving rapidly, lawmakers behind the RAISE Act say the time to act is now.
“The window to put in place guardrails is rapidly shrinking,” said Senator Andrew Gounardes, co-sponsor of the bill. “The people that know AI best say these risks are incredibly likely—that’s alarming.”
The bill is now on Governor Kathy Hochul’s desk, awaiting her decision to either sign it into law, amend it, or veto it.
What the RAISE Act Would Require
If enacted, the RAISE Act would mandate that large AI labs:
- Publish detailed safety and security reports on their frontier models.
- Disclose safety incidents, such as concerning model behavior or the theft of a model by bad actors.
- Comply with these transparency standards whenever their frontier models are made available to users in New York.
The bill applies only to companies whose AI models are trained using more than $100 million in computing resources, a threshold designed to exempt small startups and academic institutions.
If AI companies violate these rules, the New York Attorney General could bring civil penalties of up to $30 million against them.
Designed to Balance Innovation and Safety
Unlike California’s SB 1047, a similar AI safety bill that Governor Gavin Newsom vetoed after industry backlash, the RAISE Act was specifically crafted not to stifle innovation. Lawmakers say the bill avoids requirements like mandatory “kill switches” or blanket liability for AI harms, both major points of contention in the earlier legislation.
Still, the tech industry hasn’t embraced the bill. Venture capital firm Andreessen Horowitz called it “stupid” and warned it could hurt America’s position in global AI development. Others, like Anthropic co-founder Jack Clark, expressed concern that the bill might unintentionally burden smaller firms.
Senator Gounardes pushed back against these critiques, emphasizing that the bill only applies to the biggest AI labs and was carefully tailored to avoid sweeping restrictions.
Will AI Giants Pull Out of New York?
One lingering concern is whether companies like OpenAI and Google might choose to withhold their most advanced models from New York to avoid regulatory compliance—a scenario already unfolding in parts of Europe due to strict data laws.
But supporters of the RAISE Act believe that’s unlikely.
“New York has the third-largest GDP in the U.S.,” said Assemblymember Alex Bores, another co-sponsor of the bill. “It’s hard to imagine tech companies walking away from such a major market.”
He added that the regulatory burden is “light” and should not hinder innovation.
A Defining Moment for AI Governance in the U.S.
With the rise of increasingly powerful AI systems, lawmakers around the world are struggling to balance innovation with public safety. If signed into law, the RAISE Act could serve as a national model for responsible AI regulation, ensuring that technology companies remain accountable as their creations grow in power and complexity.