As the legal landscape governing the online world undergoes dramatic shifts, the content policy teams of tech platforms must be able to adapt to new demands quickly. This interactive map provides teams with an overview of global laws, allowing them to make informed decisions and create policies to remain compliant.
As technology platforms expand globally, new regulatory requirements expose them to increasing liability. Online regulation was previously fairly standardized, with most countries following the US model of limited liability, but this legal context is changing and platforms must react.
Our series of interactive maps provides an overview of the global laws that apply to online engagements. In our first edition, we cover international laws related to the hosting of terrorist content. Using this map and our in-depth report, Trust & Safety teams can evaluate their practices and build policies that ensure their compliance is up to date and based on the most relevant and applicable laws.
The following interactive map will provide summaries of applicable laws around the world. Move your cursor around and click to access our insights. This map is accurate as of April 4, 2022, and will be updated throughout 2022.
For more detailed insights about the relevant laws, download our legislative summary.
ActiveFence’s blog, All Eyes on 2022, highlighted a significant trend of national governments passing diverse legislation concerning platform liability for user-generated content. While the next two years will bring substantial legal change, we have already witnessed massive growth in legal and regulatory innovation governing technology platforms and user-generated content.
The era of limited platform liability seems to be ending. This change is driven by the systematic abuse of platforms by extremist and terrorist groups seeking to radicalize individuals and spread propaganda. National governments have begun to set out expectations for platforms, some voluntary and others carrying legal penalties. Because internet laws apply where content is accessed, platforms with a growing user base must be aware of and comply with laws across the globe.
While most responses to terrorism have been at the national level, two multinational responses were published in 2019: the Christchurch Call to Action and the Declaration of Principles on Freedom of Expression in Africa.
Legislative frameworks range from those with no legal requirement to act against violative content to those that require platforms to identify terrorist content on their servers. Most countries fall in the middle, requiring platforms to remove access to content upon receipt of a court order.
Trust & Safety teams should be mindful of the most rigorous regimes they operate in when drafting platform policy.
In a constantly diversifying legal environment with ever-increasing liabilities, policy creators must equip themselves with in-depth, up-to-date knowledge of the laws that govern on-platform content at the national level. This map and its accompanying report give policy teams at global online platforms the context needed to begin building robust, compliant policies and avoid liability worldwide.
Stay tuned for our next legislation map, exploring the global laws that govern the spread of hate speech online.