Regulations in the GenAI Era – What Enterprises Need to Know

By
June 10, 2025
Lady Justice holding a scale talking regulations in the era of gen AI and what relevant to enterprises in this space

Future-proof compliance before it’s mandated.

Learn how.

The Regulatory Wave Is Building – Is Your GenAI Strategy Ready?

Generative AI (GenAI) is rapidly reshaping how enterprises engage with customers, automate operations, and unlock new revenue streams. At the same time, regulators worldwide are moving just as quickly to be sure these powerful systems are built and used responsibly.

Over the past year alone, a patchwork of new laws and guidance has emerged, shifting accountability from model developers and onto the companies that deploy those models in customer-facing or other high-impact settings. 

In other words, businesses deploying GenAI are not just responsible for what goes into their systems. They are now accountable for what comes out.

Several legislative efforts are leading the way:

  • European Union: The EU AI Act (2024), the world’s first comprehensive AI statute, imposes strict safety, transparency, and risk-mitigation duties on any company serving the EU’s 27 member states.
  • United States: Federal executive actions, state-level legislation, and active enforcement by government agencies collectively signal the direction U.S. AI regulation is heading. These are complemented by the NIST Generative AI Profile (2024), a voluntary yet increasingly influential framework that many foundation model providers and enterprise deployers already treat as a gold standard. While not legally binding, the profile clearly reflects the principles and controls that future regulations are likely to formalize.
  • Rest of the world: From the UK’s Online Safety Act to sector-specific rules in Canada, Japan, and Singapore, even where the rules are still voluntary, providing useful reference points for any enterprise deploying GenAI at scale. Lawmakers are converging on one message: GenAI must be safe, secure, and accountable.

This rapid momentum creates a narrow window of opportunity. AI Deployers still in the design or early-deployment phase can bake safety and governance into their GenAI stack now, avoiding costly retrofits later, preserving agility, and earning stakeholder trust before enforcement tightens.

In the sections that follow, we unpack the key regulations shaping GenAI, explain what they mean for enterprise deployers, and provide a practical playbook for staying ahead of the curve.

 

European Focus: The EU AI Act (2024)

The EU AI Act, which came into force in August 2024, represents the world’s first comprehensive, risk-based rule-set for AI. While the Act is already in effect, most of its provisions will become applicable in August 2026, providing organizations still piloting or scaling GenAI a two-year runway to embed safety-by-design principles into their GenAI stack and achieve compliance. 

The AI Act introduces a risk-based approach to AI regulation, categorizing systems based on their potential impact. Notably, any AI system that can “Impact on health, safety or fundamental rights” is considered “high-risk,” encompassing most public-facing applications. Specific use cases identified as high-risk include those in education, employment (HR), healthcare, and law enforcement.

Scope and shared responsibility

  • Extraterritorial reach: AI systems made available to, or used by, users in the EU fall within scope, regardless of where the provider or deployer is established.
  • Dual accountability: Obligations apply to model providers and to deployers that integrate or fine-tune those models (as wel ass Importers and Distributors), shairng the liability for downstream harm.

Key Obligations for GenAI Systems

For high-risk AI systems, both providers and deployers are required to:

  • Implement Risk Management Systems: Continuous identification, assessment, and mitigation of safety and security risks throughout the lifecycle.
  • Use High-Quality, Traceable Data: Training and tuning datasets must minimise bias and comply with European data-protection rules.
  • Ensure Accuracy and Robustness: Develop systems that are resilient and perform reliably.
  • Disclose Training Data Sources: Provide transparency regarding the data used to train AI models.
  • Prohibit Manipulative Uses: Avoid deploying AI systems that manipulate human behavior or exploit vulnerabilities.
  • Conduct Rigorous Adversarial Testing: Systematic pre-release and periodic stress-testing to uncover harmful or exploitable behaviours.

Penalties for Non-Compliance

The AI Act imposes a tiered penalty system for non-compliance:

  • Severe Infringements: Engaging in prohibited AI practices may result in fines up to €35 million or 7% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Other Violations: Breaches of obligations under the Act can incur fines up to €15 million or 3% of the total worldwide annual turnover, whichever is higher.
  • Supplying Incorrect Information: Providing incorrect, incomplete, or misleading information to authorities can lead to fines up to €7.5 million or 1% of the total worldwide annual turnover, whichever is higher.

United States: A Mosaic of Voluntary Standards and Enforcement

The United States does not yet have a single, comprehensive AI statute comparable to the EU AI Act. Instead, enterprises must navigate an evolving mix of federal guidance, agency enforcement, and state legislation that together define the practical compliance baseline for GenAI deployments.

NIST Generative AI Profile

The National Institute of Standards and Technology released the Artificial Intelligence Risk Management Framework: Generative AI Profile (NIST-AI-600-1) in July 2024. Developed in response to the 2023 White House EO 14110, which called for stronger AI safeguards (and has since been revoked), the profile offers a comprehensive, operational framework for managing risks unique to GenAI. While currently voluntary, the profile is already regarded by many as a de facto U.S. compliance baseline, and it is expected to influence future legislation.

The framework translates high-level risk principles into concrete, lifecycle-oriented controls, with particular emphasis on model safety, robustness, and responsible deployment. For enterprises deploying GenAI systems, several key areas are especially relevant:

  • Guardrails and Safety Controls: AI systems are encouraged to implement input/output filtering mechanisms, refusal logic for unsafe prompts, and post-processing systems to block or transform harmful content, designed to operate in real-time and aligned with documented policies.
  • Bias Mitigation and Safety-Focused Fine-Tuning: The profile highlights the importance of using bias-aware training and evaluation datasets, and refining model behavior through targeted tuning and reinforcement learning to prevent systemic unfairness.
  • Rigorous adversarial testing: The profile recommends red-teaming AI systems before and after deployment, using diverse testers to probe for inaccurate, discriminatory, or harmful outputs. 
  • Content Evaluation and Provenance Tracking: NIST advises organizations to adopt watermarking or tagging of synthetic content, log model input-output interactions, and maintain detection tools for verifying the origin of generated media, critical for preventing misinformation and manipulation.
  • Monitoring and Incident Response: Ongoing monitoring is considered essential. The profile outlines expectations for test, evaluation, verification, and validation (TEVV) processes, supported by telemetry and alerts to flag unexpected or harmful behavior in production environments.
  • Handling High-Risk Content: Special attention is given to mitigating outputs related to misinformation, non-consensual intimate imagery (NCII), and child sexual abuse material (CSAM), with clear responsibilities for pre-emptive controls and escalation workflows.

Together, these practices form a robust foundation for enterprise AI safety. Aligning with the NIST Generative AI Profile enables organizations to operationalize trust, reduce regulatory exposure, and ensure that GenAI systems remain safe, accountable, and fit for real-world use.

Prominent Watchdogs: FTC and DOJ Escalate Enforcement

The Federal Trade Commission (FTC) and the Department of Justice (DOJ) have intensified their focus on AI-related issues. In September 2024, the FTC launched “Operation AI Comply,” targeting deceptive AI claims and schemes that mislead consumers. Notably, the FTC fined a company claiming to create a “robot lawyer” $193,000 for misleading consumers and an AI writing tool accused of facilitating fake reviews. The DOJ has also intensified its focus on AI-related issues, seeking stiffer penalties for crimes involving or aided by AI.

The FTC, along with the DOJ and international partners, also issued a joint statement highlighting the need to monitor potential harms to consumers stemming from AI applications. These actions signal that even without a binding federal statute, misleading claims, unsafe outputs, or failure to control downstream harms can trigger substantial liability.

State-Level Momentum

At the state level, California and New York are leading the charge in AI regulation. In California, a suite of AI bills focused on deepfakes and child protection was signed in 2024, while the broader “AI Safety Bill” was vetoed over innovation concerns, demonstrating both legislative ambition and industry push-back.

In New York, a December 2024 law now requires state agencies to audit and publicly report any AI systems they use, with human-review mandates for high-impact decisions.

Other states are considering algorithmic impact-assessment or transparency bills, creating a patchwork that AI deployers operating nationwide must track closely.

Red Teaming: The Emerging Gold Standard Across Regulations

While regulatory frameworks vary across jurisdictions in their scope and specific obligations, one practice is consistently endorsed across all major guidelines and legal regimes: red teaming.  Also known as adversarial testing or model evaluation, red teaming refers to systematically probing AI systems to identify vulnerabilities, biases, and potential misuse scenarios before and after deployment.

The EU AI Act explicitly requires providers of general-purpose and other high-risk models to “conduct and document adversarial testing” prior to release, and to continue monitoring and mitigating risks throughout the model’s lifecycle (Recital 60q, concerning general-purpose AI models ). In the US, the NIST Generative AI Profile outlines adversarial testing as a foundational component of responsible AI deployment, alongside controls for bias mitigation, robustness evaluation, and harmful content monitoring. 

Industry-leading model providers embraced the same approach. OpenAI, Google, and Amazon all include adversarial testing in their Responsible AI policies and model release practices. OpenAI has established a formal Red Teaming Network; Google provides internal adversarial testing protocols; and Amazon emphasizes stress testing as part of its AI development lifecycle.

Complementing regulatory and industry efforts, the Open Worldwide Application Security Project (OWASP) has published its own guide to red teaming, highlighting the importance of addressing both security and safety risks in AI systems before and after deployment.

Taken together, these legal, policy, and industry signals point to a clear consensus: red teaming is the most widely recognized and actionable standard for ensuring GenAI systems are safe, trustworthy, and compliant.


… perform the necessary model evaluations, in particular prior to its first placing on the market, including conducting and documenting adversarial testing of models, also, as appropriate, through internal or independent external testing. In addition […] continuously assess and mitigate systemic risks, including for example by putting in place risk management policies, such as accountability and governance processes, implementing post-market monitoring, taking appropriate measures along the entire model’s lifecycle and cooperating with relevant actors across the AI value chain.”

(the EU AI ACT-recital 60q concerning general-purpose AI models)

Future-Proof Compliance Checklist

As AI regulations rapidly take shape and enforcement mechanisms begin to gain traction, AI deployers have a rare window of opportunity. Instead of reacting under pressure, forward-looking organizations can act now to design safety and compliance into their GenAI systems from the start, setting the tone rather than chasing the standard.

Here’s a practical checklist for staying ahead of the regulatory curve and turning compliance into a strategic advantage:

  1. Perform Use Case Risk Classification

Start by assessing the risks specific to your GenAI application. Which use cases may pose legal or reputational challenges? Who are the vulnerable user groups? Understanding local laws and contextual nuances is essential. Tailor your safeguards to address these risks with precision, drawing on expert policy and safety intelligence when needed.

  1. Use Safety-Aware Training and Evaluation Data

Incorporate datasets specifically designed to expose bias, edge cases, and fairness risks. Evaluate your models across diverse inputs to uncover failure modes early. The most effective safety datasets are tailored to your application’s risk profile and user environment.

  1. Conduct Structured Model Evaluation (Red Teaming)

Adopt adversarial testing protocols that simulate both common user behavior and malicious attempts to manipulate the system. Running these evaluations before and after deployment helps expose vulnerabilities early, improving system robustness and resilience.

  1. Establish Guardrails and Abuse Mitigation Mechanisms

Deploy proactive controls that detect and block harmful model behaviors in real time. This can include prompt filters, output moderation, and behavioral detection layers. Well-designed guardrails are crucial for minimizing downstream liability and protecting end users.

  1. Implement Observability and Incident Response Infrastructure

Ensure you have visibility into how your GenAI system behaves in the wild. Monitoring tools, usage analytics, and structured escalation workflows will allow you to track issues as they emerge, and respond before they escalate into compliance failures.

  1. Audit and Document Everything

Maintain clear, detailed documentation across the entire AI lifecycle. From training data sources and evaluation metrics to post-deployment interventions, this audit trail will be essential in demonstrating regulatory alignment, safety diligence, and accountability.

 

By treating AI safety and security as a core design pillar, not just a checkbox, enterprises can move faster with greater confidence. Early investment in trustworthy systems reduces the cost of retroactive compliance, strengthens customer and partner trust, and positions your brand as a leader in responsible AI.

Table of Contents

Need help preparing for the next era of AI and internet safety regulation?

Contact ActiveFence’s GenAI experts today.