Emerging Legal and Regulatory Frameworks Governing AI

Summary

  • Three regulatory approaches are emerging for AI: restrictionist (heavy regulation due to safety/security concerns), pro-innovation (minimal oversight to maximize development), and guardrails (targeted transparency requirements and sector-specific rules without stifling innovation).
  • The regulatory landscape remains fragmented: States like California and New York are passing varying AI laws, while President Trump’s Executive Order seeks to establish a single national standard. In the short run, companies must navigate multiple jurisdictions and quasi-regulatory frameworks, including emerging industry standards and the results of litigation.
  • Companies should build governance foundations now: Prepare transparency reports documenting data sources and AI usage, conduct risk assessments for security and safety issues, establish policies with clear guardrails, and create executive governance forums capable of making fast, informed decisions – adopting a “Fast GRC” approach to keep pace with rapid AI innovation.

Artificial intelligence (AI) has quickly emerged as a transformative technology, impacting nearly every aspect of society, from medicine to education to manufacturing. Governments are beginning to respond, and what they do to govern the development and deployment of novel forms of AI will be one of the major themes of the next several years. This blog will try to answer two questions: How can we best understand the laws and regulations that are likely to govern the development of AI? And given that regulatory environment, what should companies that are developing AI tools do?

Approaches to regulating AI

Broadly, there are three approaches to regulating AI:

Restrict

The first approach is restrictionist. Concerned that malicious actors could use AI to harm society, and about unintended consequences from the rapid deployment of AI technologies, some advocates argue that we must heavily regulate the development of AI. Some even argue for a complete pause in the development of the most advanced models. Those holding this restrictionist position worry that malicious actors will use AI tools for malevolent purposes (for example, to build horrific new weapons) or will infect AI models with malicious code, enabling cybercrime on a massive scale.

There are also safety concerns – for example, unintended behavior by frontier model systems (Salesforce CEO Marc Benioff said at the recent Davos conference that AI must be regulated because “these AI models became suicide coaches”). Others are seeking to pause the building of data centers, which would heavily impact the development of the infrastructure necessary for AI. When the AI revolution first emerged in 2023, Sam Altman testified before Congress that AI has the potential to cause “significant harm to the world”; this and other statements by leaders in the field prompted early steps toward heavy regulation.

Accelerate

The second approach is pro-innovation, or accelerationist. Its advocates seek minimal government oversight so that technologists have the maximum room to innovate. They argue that a light approach to regulation is wise because spurring innovation is critical at the beginning stages of any new technology, and particularly so for AI. These advocates believe that AI will bring incredible benefits to the world and must be developed with all haste. They argue that, at least during this initial window, innovation must take precedence over regulation.

Guardrails

Finally, some advocate for light regulation that establishes guardrails for the development of AI models. This approach promotes regulations that provide some parameters without compromising innovation. Examples might include requirements that companies developing AI tools produce transparency reports describing their sources of data, or that companies deploying AI tools disclose how they are using the models (e.g., if AI will be used in making decisions about who can receive mortgage loans or rent apartments). Targeted regulations might also be applied to specific sectors that are considered particularly risky (e.g., the use of AI in nuclear plants or in aviation).

California’s experience can help illustrate how regulation is evolving. In 2024, California’s legislature passed a bill that would have regulated AI strictly. The bill would have held companies developing AI models liable for catastrophic events caused by their models, such as cyberattacks on critical infrastructure or the creation of biological weapons. It also would have required those companies to employ third-party auditors to conduct independent safety audits and then submit those reports to state regulators. It even included a so-called “kill switch” mandate – a requirement that developers implement a mechanism to fully shut down a model if it began causing “critical harms.”

Gov. Newsom vetoed that bill. A second, more narrowly tailored bill subsequently passed the California legislature in 2025, and Gov. Newsom signed it into law. The new law requires developers of “foundation models” deemed to present a “critical risk” to publish reports on the details of their models, as well as the models’ intended uses. They also must conduct risk assessments and publish the results.

Under the first bill, companies would have been required to meet certain standards and requirements and could have faced government penalties and/or private lawsuits. Under the second bill, the focus is more on transparency – it is designed to disclose to the public and to investors how companies approach safety and security issues. (Of course, transparency can lead to liability – if a company’s transparency reports are deceptive, the company could face investigation or litigation.)

A Single National Standard?

State governments are considering or have passed dozens of laws that impact the development and deployment of AI. For example, New York recently passed a law that purports to mirror California’s but is in fact more prescriptive. Texas has adopted a broad AI bill, as have other states. Advocates for an accelerationist approach argue that the burgeoning state laws are stifling innovation. President Trump has responded by issuing an Executive Order that seeks to preempt state laws and could lead to the Department of Justice and other federal agencies taking action against states that implement laws in restrictive ways. Much is still to be resolved regarding this Executive Order.

In the short run, Congress will try to develop a single national standard while states continue to experiment with new laws. Other governments around the world are also passing laws that global companies must account for, such as the European Union’s AI Act. 

“Quasi-regulation” also needs to be considered. For one, industry-developed standards are likely to play a prominent role. A notable example is the National Association of Insurance Commissioners, which adopted a model bulletin governing the use of AI by insurers. Broader industry-led efforts could ultimately inform the development of meaningful regulations (for example, a new report by the non-profit AVERI advocates for “an ecosystem of private sector frontier AI auditors”). Standards for the development of AI systems might also be imposed by provisions in contracts.   

It is also important to note that federal agencies that regulate specific sectors have been, and will increasingly be, active in this area (e.g., the National Highway Traffic Safety Administration’s investigation of Tesla). Finally, litigation (tort lawsuits) will also produce rulings that companies will need to consider as they develop AI.

Like it or not, there will remain a multiplicity of approaches to governing the emergence of AI. 

Laying Strong Foundations

How should companies operate in such an environment? Take steps now that will lay a strong foundation for successful AI deployment. A few wise steps today could spare extraordinary expenditures of time and resources later and will allow companies to pivot quickly if regulatory scrutiny increases.

Those foundations should include: anticipating the need to prepare transparency reports; conducting initial risk assessments; writing policies to manage the development and deployment of AI; and establishing oversight forums that empower key executives to make quick, risk-based decisions.  

To prepare for future transparency reporting requirements, companies should answer some basic questions:

  • What are the sources of data that you are or will be using – where does the data come from?
  • What is the infrastructure upon which you will operate the AI model (this is a complex question, which includes not only the underlying network, compute, storage and operating system(s), but also application software as well as dependencies on other AI models and technology suppliers)?
  • What are the security elements in place regarding that infrastructure?
  • What testing have you done of the AI model to ensure the outputs will be safe for your users?
  • What preparedness measures are in place in case things go wrong?
  • How have you accounted for the security and safety issues in applications that may deploy your AI tool?
  • Finally, how are you using AI – what decisions is it helping you make?

Creating a document that contains the answers to these questions will be extremely valuable internally – these are questions that the company’s leadership and its employees are or will be asking. It will also be invaluable externally – to explain to stakeholders how AI is being used and, possibly, to explain to regulators or courts how the company leverages AI.
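One lightweight way to capture those answers is as a structured, versioned record that can be exported for internal review or external publication. The sketch below is purely illustrative – the `TransparencyReport` class and its field names are hypothetical, not drawn from any statute or regulator’s template – but it shows how the answers to the questions above could live alongside the model and be regenerated with each release.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencyReport:
    """Illustrative record of the answers to the questions above (hypothetical fields)."""
    data_sources: list[str] = field(default_factory=list)          # where the data comes from
    infrastructure: list[str] = field(default_factory=list)        # network, compute, storage, OS, model dependencies
    security_controls: list[str] = field(default_factory=list)     # security elements protecting that infrastructure
    safety_testing: list[str] = field(default_factory=list)        # testing done to ensure outputs are safe for users
    incident_preparedness: list[str] = field(default_factory=list) # measures in place in case things go wrong
    downstream_applications: list[str] = field(default_factory=list)  # applications that deploy the tool, and their safeguards
    decisions_supported: list[str] = field(default_factory=list)   # what decisions the AI is helping make

    def to_json(self) -> str:
        """Serialize the report for internal review or external publication."""
        return json.dumps(asdict(self), indent=2)

# Example usage with placeholder entries.
report = TransparencyReport(
    data_sources=["licensed dataset X", "public web corpus Y"],
    decisions_supported=["customer-support ticket triage"],
)
print(report.to_json())
```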

Concerning risk assessments, companies need to be able to manage both the security and safety issues that the deployment of AI tools might generate. The risk assessment needs to be built around a clear view of the threat landscape – what are malicious actors doing or planning to do in this space? Policies and associated operational controls can then be written and implemented to mitigate the risks identified. Those policies and controls need to be tested, so companies have assurance that the practices in place actually work.
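As a purely hypothetical illustration of how that assessment might be made operational, the sketch below ties each identified threat to the risk it creates, a mitigating control, and whether that control has been tested. The entries and field names are invented for illustration, not drawn from any particular framework.

```python
# Minimal, hypothetical risk-register sketch: threat -> risk -> control -> test status.
risk_register = [
    {"threat": "prompt injection against a customer-facing chatbot",
     "risk": "unauthorized data disclosure",
     "control": "input filtering and output review policy",
     "control_tested": True},
    {"threat": "poisoned third-party training data",
     "risk": "unsafe or biased model outputs",
     "control": "data-source vetting and provenance checks",
     "control_tested": False},
]

def untested_controls(register):
    """Return entries whose mitigating control has not yet been tested."""
    return [entry for entry in register if not entry["control_tested"]]

for entry in untested_controls(risk_register):
    print(f"Needs testing: {entry['control']} (threat: {entry['threat']})")
```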

Finally, governance forums should be created to bring together senior executives to understand and manage risks and quickly resolve thorny issues. The senior executives need to be well informed (having a solid understanding of all the issues listed above) and have authority to make decisions.  

Traditionally, risk assessments and governance structures can take months to prepare; those time frames simply will not work in the AI space. How can we speed up the work of lawyers and Governance, Risk and Compliance (GRC) professionals in the context of AI? Agility, speed, and creativity – these are the guiding principles for those developing AI models and tools, and they must also be the guiding principles for lawyers and GRC professionals who practice in the AI space.

It is possible to operate in a “Fast GRC” environment. I know this from my experience practicing law and advising the GRC program within Meta.

For example, we developed templates to guide our work. We knew our leaders needed to see risk management decisions teed up in certain ways; we knew what the key factors were in their minds, and so we built templates that could be used time after time.
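Purely as an illustration of the idea – the fields below are hypothetical and are not the actual templates we used – a reusable decision template might tee up the handful of factors leadership needs to see each time:

```python
# Hypothetical decision-memo template for teeing up a risk-management decision.
DECISION_TEMPLATE = """\
Decision requested : {decision_requested}
Product / feature  : {product}
Intended use       : {intended_use}
Key risks          : {key_risks}
Mitigations        : {mitigations}
Recommendation     : {recommendation}
Decision owner     : {decision_owner}
Needed by          : {needed_by}
"""

# Example usage with placeholder content.
memo = DECISION_TEMPLATE.format(
    decision_requested="Approve limited launch of an AI summarization feature",
    product="Support-ticket summarizer",
    intended_use="Internal agents only; no customer-facing output",
    key_risks="Hallucinated case details; exposure of personal data in summaries",
    mitigations="Human review before action; redaction of personal data on inputs",
    recommendation="Approve, with a 30-day post-launch review",
    decision_owner="Accountable executive (placeholder)",
    needed_by="within one week",
)
print(memo)
```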

Moreover, we had the right people in place. “XFNs” – cross-functional groups – were set up so that key staff and decision makers were aligned. Then, we learned by trial and error how to assemble the critical information about each decision (what is this new product? how do they want to use it?) and then present that information to those who needed it quickly. Finally, we had leadership support – we had top cover. Our legal and GRC leaders understood the need for speed and were prepared to support us as we acted.

Sometimes we had to carry out this type of analysis on moving targets – the development and use of AI often evolves quite quickly. But we were able to identify potential risks and rewards just as quickly, enabling company executives to make consequential decisions in a matter of hours.

A related way to approach these issues is to create a governance system that enables the business to “run ahead,” governed by safety and security guardrails that are established based on actual past practice. The business can then move unencumbered (perhaps leveraging self-attestations) within those guardrails. Under this approach, when the business attempts something new, outside of the existing guardrails, the GRC, legal and other teams can quickly perform the necessary assessments that will then result in modifying or updating the existing guardrails. 
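A hypothetical sketch of that triage logic appears below: proposals that fall within previously approved boundaries proceed on self-attestation, while anything novel is routed to GRC and legal review, which can then update the guardrails. The categories and function are invented for illustration.

```python
# Hypothetical "run ahead within guardrails" triage: guardrails reflect past, approved practice.
APPROVED_DATA_CATEGORIES = {"public", "licensed", "first-party-consented"}
APPROVED_DEPLOYMENT_CONTEXTS = {"internal-tooling", "customer-support-drafting"}

def triage_proposal(data_category: str, deployment_context: str) -> str:
    """Return 'self-attest' if the proposal fits existing guardrails, else 'grc-review'."""
    within_guardrails = (
        data_category in APPROVED_DATA_CATEGORIES
        and deployment_context in APPROVED_DEPLOYMENT_CONTEXTS
    )
    return "self-attest" if within_guardrails else "grc-review"

print(triage_proposal("licensed", "internal-tooling"))      # -> self-attest
print(triage_proposal("scraped-web", "consumer-lending"))   # -> grc-review
```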

While law and governance traditionally operate at a careful pace, the challenge of the speed of innovation can be met. In the AI age, we really have no other choice.

Dan Sutherland is a Senior Advisor to The Chertoff Group’s Cybersecurity business. He formerly led the cybersecurity legal team at Meta and served as the first Chief Counsel at CISA. The author recommends resources such as the Paladin Global Institute’s publication, The AI Tech Stack, and the Lawfare podcast, Scaling Laws, sponsored by Lawfare and the University of Texas School of Law’s AI Innovation & Law Program.

