
The State of AI Regulation
Business leaders seeking clarity on AI governance face a paradox: the loudest policy signals often obscure the most consequential regulatory developments. While federal deregulation dominates headlines, states have enacted over 100 AI laws, international frameworks are taking effect, and courts are reshaping the landscape as dozens of cases move toward judgment. Understanding these cross-currents is now a strategic imperative.
At New York Technology Innovation 2025, Matthew F. Ferraro offered an assessment that cuts against conventional wisdom. Ferraro, a partner at Crowell & Moring LLP who served as a policymaker at the Department of Homeland Security, has watched AI governance from both inside government and as outside counsel. His conclusion is unambiguous: the regulatory environment is not simplifying—it is fragmenting into multiple, often contradictory vectors that will define competitive advantage for years to come.
The Strategic Landscape
Ferraro framed the current moment with an image worth holding in mind. In August 1974, Philippe Petit strung a tightrope between the Twin Towers and walked a quarter mile above Manhattan for 45 minutes. What made the feat so difficult, Petit later explained, was not the height but the invisible forces playing on the wire—winds blowing side to side, up and down, even torquing the cable like a corkscrew.
"In so many ways, AI is now on the tightrope," Ferraro observed, "as we move between two towers—a world before AI was widely deployed to one where AI is deployed as ubiquitously as electricity or the internet. And like Petit, AI is buffeted by many different forces. They are federal, state, international, and judicial."
The forces are not merely multiple—they are moving in different directions simultaneously. The federal government is retreating from some forms of regulation while actively intervening in others. States have become the primary engine of AI lawmaking. International AI frameworks are entering implementation. And plaintiffs are testing novel legal theories in courtrooms across the country, with some cases producing multi-billion dollar settlements.
For executives, the implication is clear: a strategy premised on federal deregulation alone will prove inadequate. The regulatory environment demands attention across all four vectors.
Six Competing Philosophies of AI Governance
There is no settled approach to AI regulation. Different jurisdictions have adopted fundamentally different theories about what should be regulated and how. Ferraro identified six distinct philosophies now shaping governance globally—each with different implications for how companies must structure their AI programs.
The risk-based approach regulates AI differentially based on the harm it might cause. Email spam filtering requires minimal oversight; social credit scoring systems face outright prohibition. The logic is intuitive, but implementation requires detailed risk taxonomies and ongoing classification decisions.
The sector-specific approach, where U.S. states have taken the lead, applies different rules to different industries. Healthcare AI faces one set of requirements, employment AI another. This reflects the reality that AI applications carry vastly different risk profiles depending on context—but it creates compliance complexity for companies operating across sectors.
The compute-based approach represents a newer theory. Rather than regulating applications, it regulates the underlying models based on the computational power used in training. California pioneered this with its Transparency in Frontier Artificial Intelligence Act, which defines "frontier models" by training compute and imposes transparency and safety requirements that less powerful models do not face. A similar bill has passed the New York legislature and awaits Governor Hochul's signature. If this approach spreads, it will reshape how companies think about model development decisions.
The transparency approach takes a lighter touch: AI use is permitted, but users must know they are dealing with AI. This appears most often in laws requiring labels on synthetic imagery and in healthcare chatbot disclosure requirements. The underlying premise is that informed consent, rather than prohibition, should govern most AI interactions.
The principle-based approach, exemplified by frameworks from NIST and the OECD, establishes guiding values—AI should benefit humankind, AI should not be biased—without mandating specific activities. This provides flexibility but limited certainty.
Finally, sandboxing allows companies admitted to particular programs to test AI tools with reduced regulatory exposure. Texas has enacted sandbox provisions, and federal legislation is pending. The theory is that innovation requires room to experiment before rules can be properly calibrated.
No jurisdiction has adopted just one approach. Most are layering multiple philosophies, creating regulatory environments that require sophisticated navigation.
The Federal Paradox: Deregulation Meets Industrial Policy
The current federal administration has expressed a clear preference for innovation, adoption, and deregulation. "That is certainly true," Ferraro acknowledged. "But that does not tell the entire story. This is a drum that I beat with some regularity."
The push to remove "burdensome regulation" competes with other federal initiatives that represent significant government intervention: restrictions on federal procurement of certain AI systems; active promotion of exports and data center construction; export fees on semiconductors; federal equity stakes in semiconductor companies; and continued focus on cybersecurity risks.
America's AI Action Plan, released July 23, 2025, alongside three executive orders, illustrates the tension. The plan identifies approximately 100 federal policy actions across three pillars: accelerating AI innovation, building U.S. AI infrastructure, and establishing global leadership in AI diplomacy and security. It calls for examination of regulations that stifle AI and recommends that the Office of Management and Budget withhold discretionary funding from states with burdensome AI laws.
Yet the plan does not endorse a moratorium on state laws, does not assert federal preemption, and does not declare that training AI on copyrighted data constitutes fair use. "It is a to-do list, effectively, but the plan does not have the force of law," Ferraro noted.
The accompanying executive orders reveal competing priorities. The Infrastructure Executive Order accelerates federal permitting for data centers—an intervention to support AI development. Ferraro noted that a similar initiative existed in the prior administration. "So again, I think what we see here is something of a bipartisan consensus on how the government can help develop AI."
The Export Executive Order establishes a program where selected companies will receive technical, financial, and diplomatic resources through government partnerships—industrial policy by another name. The government is requesting information, due by December 13, 2025, to inform the program's structure.
A third executive order addresses AI procurement standards for the federal government, requiring AI systems to meet certain criteria. The Office of Management and Budget is to issue implementing guidelines. "From what I've heard, I think it's probably going to be not as prescriptive as it could be, in that it will allow for greater flexibility by AI companies, but again, we're not sure yet," Ferraro observed.
The Preemption Question
A draft executive order circulated in late November 2025 would attempt to limit state AI regulation through threatened lawsuits and funding withholding. Ferraro expressed significant skepticism about this approach.
"My interpretation is that that executive order would face significant implementation challenges and legal hurdles if it is signed. Effectively, the President cannot do by executive fiat what Congress fails to do, which would be to preempt state law through federal law by using the interstate commerce powers that Congress has. You might be able to try, so I guess we'll see, but I just expect this to face significant implementation challenges."
Even if signed, state laws would likely remain in effect during years of litigation. "So, I would say, don't shut off the webinar when I get to the state AI regulation, because all of those laws will remain in effect."
Enforcement Continues
Meanwhile, enforcement actions proceed under existing authorities—what Ferraro characterized as "new wine in old bottles." The FTC is investigating AI companion chatbots amid concerns about child safety. The House Oversight Panel has queried Hertz about an AI damage-detection tool that allegedly issued erroneous bills without verifying actual damage. The DOJ Antitrust Division has updated its compliance program guidance to require companies to address AI risks—guidance that has not been rescinded.
Novel interventions include a government fee on certain semiconductor chips sold to China and the federal government taking a stake in a semiconductor company. "These are all novel approaches, but again, forms of regulation," Ferraro observed.
The Take It Down Act, signed in May 2025 and effective May 2026, represents the first federal law limiting AI use harmful to individuals. It prohibits knowingly publishing non-consensual intimate imagery, including AI-generated "digital forgeries," and requires covered platforms to establish takedown processes. "That's a major regulatory burden on platforms, websites, and mobile applications, because they now have to establish these takedown procedures," Ferraro noted. The law received broad bipartisan support. "It is an unfortunate fact that most of the synthetic imagery that is created, most of the AI-generated imagery, is explicit. And oftentimes, if not most of the time, it is of individuals who do not consent to those depictions. So this is a serious dignity harm for these individuals, and the law now provides a federal remedy."
States as Regulatory Laboratories
The real engine of AI lawmaking operates at the state level. Over 100 state laws have been enacted, with more than 1,000 bills introduced. For companies operating nationally, this creates a compliance matrix of considerable complexity. AI laws at the state level fall into three general camps: comprehensive AI laws that are largely risk-based; frontier model safety laws; and sector-specific laws.
Comprehensive Frameworks
Four states have established comprehensive regimes: Utah, Texas, Colorado, and California.
Utah's law is currently effective. It covers high-risk AI interactions, requires disclosures, is enforced by the Division of Consumer Protection, and establishes the Office of AI Policy.
Texas, effective January 2026, prohibits AI promoting self-harm or certain illegal activities, requires disclosure of AI in healthcare contexts and when used by Texas agencies, and restricts agency use of biometrics. Critically, it empowers the attorney general to investigate any AI system deployed in Texas—including against out-of-state developers. "You could be a California company that develops an AI system, and then if somebody else deploys it in Texas and there's a complaint that the system violates the Texas law, under the law, the Texas Attorney General could investigate not only the local deployer but also the developer in California."
Colorado's framework, pushed back to June 2026, restricts algorithmic discrimination, requires consumer notice of AI use, imposes documentation obligations for high-risk applications, and empowers the Colorado AG to bring unfair or deceptive trade practice actions.
California, while lacking a broad AI law per se, has automated decision-making technology regulations under the CCPA that require businesses to inform workers when AI affects employment decisions, mandate risk assessments, and give consumers opt-out rights.
California's Transparency in Frontier Artificial Intelligence Act, passed after Governor Newsom vetoed a more aggressive version in 2024, requires large developers of frontier models to publish safety frameworks incorporating widely accepted standards and explaining their capacity to mitigate "catastrophic risks." It establishes whistleblower protections and promotes a public AI research infrastructure called CalCompute. "I think it's a very aggressive law, and it might be a trailblazing law, because as I mentioned, New York might very well adopt one of its own."
Sector-Specific Regulation
Beyond comprehensive frameworks, states are layering sector-specific requirements that reflect particular concerns about AI deployment.
Labor and employment faces intense scrutiny. New York City requires annual independent audits of AI hiring tools, disclosure of results to candidates, and the opportunity to decline AI-based assessment. Illinois, effective January 2026, prohibits discriminatory AI use in employment. California's Civil Rights Department has issued regulations clarifying that AI assessments that discriminate on protected characteristics violate state law. "I think this is an area where you can expect more activity, because, quite candidly, I think lawmakers and individuals are leery about the idea of a robot deciding if they get a job or not, or deciding if they're fired from a job."
Chatbot regulation has emerged as a distinct category, driven by what Ferraro called "really wrenching stories of individuals and young people who have engaged in self-harm, or even committed suicide, after speaking with or engaging with chatbots for extended periods of time." Utah requires disclosure for mental health chatbots. California mandates disclosure and reporting for particularly lifelike chatbots and provides a private right of action. "I think you have to watch this space, because I expect many more states are going to pass laws along these lines."
Healthcare AI faces restrictions reflecting an asymmetry in public sentiment. "I think they don't want their healthcare being denied by an AI. I think they're happy if their healthcare procedures are approved by an AI, but they don't want one that's denied by an AI." Arizona prohibits AI from denying claims or prior authorizations—but not from approving them. Maryland requires insurers to ensure AI does not deny, delay, or modify services during utilization review. Nevada prohibits mental healthcare providers from using AI in direct patient care. Texas requires providers to disclose AI use and review all AI-generated information against medical standards.
Deepfake laws address disclosure requirements, non-consensual intimate imagery, fraud, digital likeness property rights (in Tennessee, Louisiana, and New York), and deception in political campaigns. Some aggressive laws have been enjoined on First Amendment grounds; many remain in effect.
Attorney General Enforcement
State enforcement is not waiting for new statutes. New Jersey has launched a civil rights and technology initiative. Texas reached a settlement with an AI healthcare company over fraud allegations under existing law—before the new Texas law took effect. The Texas AG is currently investigating therapy chatbots from major AI companies. State attorneys general have sent letters to payment providers like PayPal and Visa challenging their facilitation of websites trafficking in non-consensual deepfake imagery. Consumer groups have asked state attorneys general to investigate xAI's facilitation of the same.
The Courts Step In
Dozens of court cases are moving through the system, with some reaching judgment. The outcomes are reshaping the legal landscape for AI development.
Intellectual property litigation, the first wave following ChatGPT's 2022 launch, centers on whether training AI models on copyrighted material without consent constitutes fair use. Courts have reached divergent conclusions.
In Thomson Reuters v. Ross, the district court held training was not fair use, largely because the defendant's product competed with the original—a legal research tool. In Kadrey v. Meta, training was deemed fair use because plaintiffs failed to demonstrate market dilution. "Notice how there's a kind of synergistic view," Ferraro observed. "In Thomson Reuters, they said it did harm the market. In Kadrey, they basically said it wasn't properly alleged."
In litigation against Anthropic, training on lawfully purchased books was found fair use, but training on pirated databases was not—producing what Ferraro described as "the largest IP settlement of multi-billions of dollars that we've ever seen." The settlement negotiations are complete, but disbursement remains pending before a judge in the Northern District of California.
Employment discrimination litigation involves cases where plaintiffs allege AI systems violated anti-discrimination laws by making decisions based on protected characteristics.
Products liability litigation involves cases in which minors died by suicide after extended engagement with AI systems, with allegations that those systems were defectively designed. In September 2025, Senators Durbin and Hawley introduced the AI LEAD Act, which would classify AI systems as products and create a federal products liability cause of action—potentially transforming exposure for AI developers.
International Convergence and Competition
The EU AI Act is the first comprehensive AI framework worldwide, adopting a four-tiered risk classification: minimal, limited, high, and unacceptable. It entered into force on August 1, 2024, with implementation phases in February and August 2025.
In November 2025, proposals emerged to narrow the Act's scope, facilitate personal data use for training, simplify breach reporting, and extend compliance deadlines. This reflects a desire to ensure that European AI companies are not disadvantaged in competing with developers from other major markets.
China's "AI Plus" initiative promotes industry integration across six key areas by 2027 while requiring transparency labels on AI-generated content.
Korea's framework, effective January 2026, balances development promotion with trustworthy AI requirements—pursuing what Ferraro called "sort of the holy grail that everyone's going after. How do you get the good stuff of the innovation? How do you make sure it's trustworthy and it's not like science fiction and takes over?"
The AI Action Plan calls for the U.S. to lead diplomacy on international AI standards to shape the direction of international governance bodies.
Strategic Implications for Enterprise Leaders
Asked how organizations should manage this complexity, Ferraro emphasized that the answer is necessarily fact-specific. "Where are they physically? What jurisdictions apply? Where do they sell their tools? What kind of tools do they use? If you make an AI for email spam filtering, that's a real safe use. If you use it to give healthcare advice, that's a much more complicated use."
Unless Congress enacts genuine federal preemption—which Ferraro considers unlikely near-term—"this is just going to be the reality for the foreseeable future: this kind of patchwork."
Several imperatives emerge from this analysis. Compliance programs must track developments across all four vectors—federal, state, international, and judicial—not just the most visible one. AI-specific scenarios should be integrated into cybersecurity incident response planning. "I would note that I think that is a good idea for all companies," Ferraro observed. "If you're listening to this, and you don't integrate AI-specific incidents into your planning portfolio, you probably should, because I think that's where cybersecurity is going." The AI Action Plan recommends ensuring AI is developed with "secure by design" principles—considering security from inception rather than as an afterthought.
Companies operating in multiple states need systematic assessment of which laws apply to their AI deployments and development activities. Litigation trends warrant monitoring: the fair use question remains unsettled, products liability theories are advancing, and employment discrimination claims will test AI hiring tools.
Ferraro closed with a perspective on adaptation. His father's 1940s passport—a black-and-white photo stapled inside with no holograms—would never clear airport security today. Society has developed expectations for authenticity that would have seemed unnecessary decades ago. "We've come to expect certain indicia of truthfulness from our media."
"Maybe that's the good news. Maybe we are endlessly adaptable. When you see imagery that looks too good to be true, or has that glimmer that looks like it might be AI-created, I'm much more skeptical of it. I think we're going to move toward a world in which we're much more skeptical of media, as we should be. The trick is to be skeptical without being totally cynical or nihilistic and saying there's no such thing as truth."
For business leaders, the parallel holds. The regulatory environment for AI is complex, fragmented, and evolving rapidly. Success will belong to organizations that develop the capacity to navigate multiple jurisdictions, anticipate legal developments, and build governance structures flexible enough to adapt as the rules continue to change. The tightrope is real. The forces are multiple. The crossing has only begun.
Watch the Full Session
Access the complete recording, transcript, and additional resources.
Go to Session Page →

Disclaimer: The information contained in this article is provided for general informational purposes only and does not constitute legal, regulatory, tax, investment, financial, or other professional advice, nor should it be relied upon as such. Readers should seek independent advice from qualified professionals in the relevant jurisdiction(s) before making any decisions or taking any actions based on the content of this article. While reasonable efforts are made to ensure accuracy and timeliness, 1BusinessWorld makes no representations or warranties, express or implied, regarding the completeness, accuracy, reliability, or suitability of the information provided. To the fullest extent permitted by applicable law, neither 1BusinessWorld nor the author accepts any liability for any loss or damage arising from the use of, or reliance upon, this article. The views expressed are those of the author and do not necessarily reflect the views of 1BusinessWorld or its affiliates.