Protect your AI applications and agents from attacks, fakes, unauthorized access, and malicious data inputs.
Control your GenAI applications and agents and ensure they remain aligned with their business purpose.
Proactively test GenAI models, agents, and applications before attackers or users do.
The only real-time, multi-language, multimodal technology to ensure your brand's safety and alignment across your GenAI applications.
Ensure your app complies with changing regulations across industries and around the world.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
Detect and prevent malicious prompts, misuse, and data leaks to ensure your conversational AI remains safe, compliant, and trustworthy.
Protect critical AI-powered applications from adversarial attacks, unauthorized access, and model exploitation across environments.
Provide enterprise-wide AI security and governance, enabling teams to innovate safely while meeting internal risk standards.
Safeguard user-facing AI products by blocking harmful content, preserving brand reputation, and maintaining policy compliance.
Secure autonomous agents against malicious instructions, data exfiltration, and regulatory violations across industries.
Ensure hosted AI services are protected from emerging threats, maintaining secure, reliable, and trusted deployments.
Signed just this month, the California Age-Appropriate Design Code Act ("ADCA") places responsibilities on businesses that provide a product or service likely to be accessed by children. The law takes effect on July 1, 2024, introducing new obligations that Trust & Safety teams should begin preparing for now.
ActiveFence spoke with Michal Brand-Gold, our VP General Counsel, to understand who the act will apply to, what the obligations are, and how companies can prepare.
The ADCA seeks to protect children's data. Specifically, the act aims to mitigate the following risks:
The ADCA will apply to businesses that provide an online service, product, or feature likely to be accessed by children in California. These companies:
Platforms that the ADCA applies to must comply with the following requirements:
The CA Age-Appropriate Design Code can hold violators liable for fines of up to $2,500 per affected child for each negligent violation and up to $7,500 per affected child for each intentional violation. At that scale, a single negligent violation affecting 10,000 children could mean a fine of up to $25 million.
However, businesses in substantial compliance with the law will be given written notice before any enforcement action is initiated and will have 90 days to rectify violations before penalties are imposed.
An essential piece of the ADCA is the creation of a working group whose purpose is to develop best practices for implementing the law and to identify the services required to comply with it.
Additionally, the working group can draw on the expertise of the California Privacy Protection Agency (CPPA), which has years of experience developing data privacy policies.
Given that the law establishes a working group and can lean on the previously established CPPA, it seems likely that the ADCA will be actively enforced.
The California act is modeled after the UK's Age Appropriate Design Code, which took effect in September 2021. Given their similar requirements, many major tech companies have already implemented measures to meet the UK's code and, therefore, California's requirements.
Major tech companies redesigned their online platforms to comply with the Children's Code. However, we haven't yet seen enforcement actions or fines under the code.
Instead, the UK code's regulator, the Information Commissioner's Office (ICO), has focused on helping companies find solutions. To that end, the ICO has issued design guidance, a self-assessment risk tool, and transparency best practices.
In the evolving legal landscape of online liability, preparedness is key for online platforms to stay compliant. To do so, teams must remain up to date on legislation worldwide that affects the internet. Check out our Trust & Safety Compliance Center to help ensure your platform is compliant.
AI red teaming is the new discipline every product team needs. Learn how to uncover vulnerabilities, embed safety into workflows, and build resilient AI systems.
Discover how emotional support chatbots enable eating disorders and overdose risks, and what AI teams can do to safeguard users.
Align AI safety policies with the OWASP Top Ten to prevent misuse, secure data, and protect your systems from emerging LLM threats.