A searchable, interactive guide to the legislation governing online disinformation in almost 70 countries.
The Supreme Council for the Administration of the Media can block social media accounts and penalize journalists for publishing fake news or posts that incite public disorder. Social media accounts and blogs with more than 5,000 followers on online platforms will be treated as media outlets, which makes them subject to prosecution for publishing false news or incitement to break the law.
Websites may be established only after receiving a license from the Supreme Council.
Media Regulation Law and the Supreme Council for Media Regulation promulgated by Law No. 180 (2018)
Ethiopia holds internet intermediaries liable for hosted user-generated content. Platforms must remove disinformation within 24 hours of notification from the authorities. Social media account operators risk two- to five-year prison terms for violence or public disturbance caused by disinformation.
Hate Speech and Disinformation Prevention and Suppression Proclamation (2020)
Kenya requires internet service providers to prevent disinformation in the form of political content. Social media platform administrators are required to monitor and remove related disinformation content within 24 hours.
Guidelines for Prevention of Dissemination of Undesirable Bulk Political SMS and Social Media Content via Electronic Communications Networks (2017)
Mali has criminalized the dissemination of false information shared “in bad faith” that “disturbs the public peace.” The state requires ISPs to install mechanisms to monitor illegal activities. Failure to inform authorities of illegal acts risks a fine of up to €3,000 (c.$3,080) and/or a prison sentence.
The state is allowed to throttle the internet during protests or elections.
The Press Regime and Press Offenses Law (2000); Suppression of Cybercrime (2019)
Niger has criminalized creating, disseminating, and making false or defamatory information publicly available. This covers all communication, from text to audio, image, and video, shared on an information system. Platform operators may be required to intercept communications at the request of the relevant authorities concerning illegal online activity. Failure to comply risks prison sentences of between six months and three years, with a maximum fine of 5,000,000 CFA (c.$7,900).
Cybercrime Law (2020)
Nigeria has criminalized the publication of content that shares false information intending to cause “annoyance, inconvenience, danger, obstruction, insult, injury, criminal intimidation, enmity, hatred, ill will, or needless anxiety to another.” Failure to comply risks three years in prison, a fine, or both.
Cybercrimes Act (2015)
South Africa made it an offense to publish a statement through any medium to spread false information about COVID-19 or government measures to address the pandemic. The penalty is a fine, imprisonment for six months, or both.
South Africa’s Film and Publication Board (FPB) can request that platforms remove content that poses “severe harm, especially to children.”
Disaster Management Act 2002, section 11(5) (2020); FPB Online Regulation Policy (2016)
Uganda requires platforms to fulfill the “duties of a licensee and producer” and to ensure that hosted content does not promote ethnic prejudice or violence and is not likely to create public insecurity or violence. Failure to comply risks fines of up to 10% of a platform’s gross annual revenue and suspension or revocation of a license.
International Covenant on the Elimination of All Forms of Racial Discrimination (1965); Uganda Communications Act (2013); The Computer Misuse Act (2011)
Zimbabwe has criminalized the publication of false news. The Minister of Information, Publicity, and Broadcasting Services has also created a “cyber-team” for social media monitoring.
The Enacted Cyber and Data Protection Act (December 2021)
Algeria has criminalized the spread of “false news” that harms national unity, and does not distinguish between news reports, social media, and other media. Failure to comply risks prison terms of between one and three years, with fines of 100,000 to 300,000 Algerian dinars ($2,160).
Law No. 20-06 Article 196 bis of Algeria’s Criminal Code
In Morocco, the government can filter and delete content deemed to “disrupt public order by intimidation, force, violence, fear, or terror” and close any publication “prejudicial to Islam, the monarchy, territorial integrity, or public order.” Legal liability rests with anyone who helps the author publish their content, which includes internet platform operators.
Law to Combat Terror (2003); Press and Publications Code (2016)
While several local legal initiatives were enacted to prevent the spread of COVID-19 disinformation, these have all been repealed. Several future national legislations have been proposed.
The Internet Bill of Rights (2014) ensures liability indemnity for platform providers for user-generated content. However, during elections, the Superior Electoral Court can order platforms to remove electoral disinformation within two hours; repeated failure to comply risks the court ordering the non-compliant platform blocked from access within Brazil for 24 hours.
Marco Civil Law of the Internet (2014); Resolution No. 23,714 (October 2022)
Several bills have been submitted to the Chilean Chamber of Deputies. However, as of November 2022, there is no legislation on online disinformation.
The Press Law on Freedom of Opinion and Information and the Exercise of Journalism (2001)
Colombia has enshrined internet neutrality, where the only illegal online content is child sexual abuse material. However, in 2019 the Supreme Court ruled that platform operators could face legal liability if they do not maintain sufficient moderation mechanisms for comments left on websites and online forums.
Law 1450; Article 56(2) (2011)
Nicaragua has criminalized the publishing of fake news on social media or in news outlets. Failure to comply risks a prison term of up to six years.
The Special Cybercrime Law (2021)
Mexico does not currently have any legislation in place regarding online disinformation.
China holds companies responsible for the information published on their platforms. They must also operate content management mechanisms, maintain clear rules on content governance, run response systems to detect ‘rumors’ and disinformation, and receive user complaints and reports. Failure to comply carries civil and criminal liabilities for operators.
Cyber Security Law (2017)
Cambodia’s internet regulation stipulates that websites that publish disinformation or “provoke, create chaos, damage national defense” risk fines of $1,000. The government can block information, issue criminal punishment for disinformation, and suspend the services of operators who do not comply. Network information data must be held for one year and provided on request.
Ministerial Directive (2018); National Internet Gateway (2021)
Indonesia requires social media companies to prevent users from spreading “restricted content” online; this includes blasphemy, disinformation, and misinformation. Failure to comply risks fines of between €580 and €2,800 and suspension of access to the country. Platforms are required to remove disinformation content within 24 hours of notification.
Digital Platform Law PP PSTE no. 71 (2019); Indonesia Constitutional Court Ruling (October 2021)
Eswatini created the National Cybersecurity Advisory Council to regulate cybersecurity. The country criminalized “publishing as true, a false, forged, altered or counterfeit record, instrument, or other writing, knowing it to be false, altered or counterfeit, with the intent to injure or defraud.”
Malaysia has criminalized the dissemination of “fake news” related to COVID-19, which risks a prison sentence of up to three years and a fine of up to RM100,000. Platforms must provide user data for investigations or face the same fine.
Emergency Essential Powers No. 2 Ordinance (2021)
Singapore requires platforms to limit the spread of disinformation by displaying corrections or removing false content. Platforms should take measures to detect and safeguard against coordinated inauthentic behavior, bots, and other such activity. They must flag paid political ads. Failure to comply risks fines of up to SGD 1,000,000 (c.$725,000).
Protection from Online Falsehoods and Manipulation Bill (2019)
Vietnam requires social media platforms to remove disinformation within 24 hours of an order. It criminalizes sharing false information to incite protests, riots, terrorism, or other content that “opposes the State.” Failure to comply risks up to 12 years in prison.
Article 117 of the Vietnam Penal Code (2015)
Pakistan has ruled that social media platforms must suspend or disable access to accounts or online content, of citizens of Pakistan and of those outside its territorial boundaries, that spread fake news or defamation violating the religious, cultural, ethnic, or national security sensitivities of Pakistan. Platforms must also add a note to such content explaining that it is false. Users who post disinformation about the military, the judiciary, or other public officials risk five years in prison without the option of bail.
Citizen’s Protection (Against Online Harm) Rules (2021); Removal and Blocking of Unlawful Online Content (Procedure, Oversight and Safeguards) Rules (2021)
Bangladesh requires criminal prosecution of anyone who disseminates disinformation online. Section 25 criminalizes the spread of disinformation intended to impact the country’s reputation. Failure to comply risks a prison term of up to three years, a fine of up to 300,000 Tk (c.$2,920), or both.
Digital Security Act (2018)
India requires internet platforms to remove content that “knowingly and intentionally communicates any information which is patently false or misleading in nature but may reasonably be perceived as a fact” within 36 hours of notification. Regulations require platforms to reveal the identity of the user who shared the content.
The Republic of India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules (2021)
Sri Lanka does not have a specific law that regulates online disinformation. Instead, the country relies upon the Telecommunication Regulatory Commission (TRCS), which orders internet service providers to block access to specific domains for hosting destabilizing content.
Australia requires signatory platforms of the Code of Practice on Disinformation and Misinformation to develop policies to prevent the spread of false information. Signatories must suspend or disable offending and fake accounts, including bots that automatically disseminate information across their platforms.
Code of Practice on Disinformation and Misinformation (2021)
New Zealand allows platforms to escape liability for false information if it is removed within 48 hours of notification, and they have in place a transparent flagging process.
Harmful Digital Communications Act (2015)
The UK does not have specific legislation regarding disinformation. However, the country’s Online Safety Bill, published in March 2022, will create a duty of care for platforms to remove false and misleading content.
The First Amendment guarantees the right to freedom of speech. Accordingly, in the US, platforms hold a liability exemption for user-generated content and are shielded from any legal obligation to remove false content. However, in the State of California, it is illegal to publish audio, imagery, or video that gives a false, damaging impression of a politician’s words or actions.
Section 230 of the Communication Decency Act (1996); AB 730 Elections: Deceptive Audio or Visual Media (2020)
Canada prohibits the publication of false or misleading statements about an electoral candidate to affect an election result. This includes false statements regarding a candidate’s withdrawal from an election. It prohibits the online impersonations of politicians—other than for parody or satire.
Canada Elections Act (2018)
Turkey requires platforms to remove disinformation (that “endangers the country’s security, public order…”) within four hours of receipt of a takedown order from a court or the Information and Communication Technologies Authority (ICTA). Failure to comply risks platform throttling and three years in prison. Platforms also risk fines if their algorithms amplify disinformation. Messaging platforms must provide user information to the ICTA upon request.
Amendment to the Press Law (2022)
Switzerland does not have any regulation governing online disinformation in place.
Russia requires internet service providers to restrict access to websites containing calls for extremist activities; for legal entities, failure to comply risks fines of between ₽3,000,000 and ₽8,000,000. The Russian regulator Roskomnadzor can ask the courts to block platforms that host content inciting national, class, social, or religious intolerance. Users spreading disinformation that “disrespects” the state or causes a major disturbance risk fines of up to 1.5 million rubles and up to 15 days in jail.
Law on Mass Media (1991); Federal law N511-FZ (2020); Federal law N27-FZ (2019); Federal law N31-FZ (2019)
Belarus criminalized the livestreaming of ‘unsanctioned’ protests. The Ministry of Interior can block access to an internet resource without a court order. An outlet can be blocked if it has received two written warnings in one year or publishes content that threatens national security.
Law on Mass Gatherings (2021); Mass Media Law (2021)
Norway has a robust Freedom of Expression law, and there is no current legislation to counter online disinformation. However, according to Section 135a of the Norwegian Penal Code, any person who willfully or through gross negligence publicly utters a discriminatory or hateful expression risks fines or imprisonment of up to three years.
Section 186 of the Norwegian Penal Code (2008)
In line with EU law, Very Large Online Platforms (VLOPs) must demonetize disinformation content; take measures against bots, fake accounts, manipulation campaigns, account takeovers; empower users to flag disinformation; and report data to the EU. Non-VLOPs must put in place risk-mitigation strategies to combat disinformation.
Austria’s national law requires platforms to remove false or defamatory content within 24 hours of notification (or seven days in more complex cases). Failure to do so incurs a fine from the Austrian courts of between €10,000 and €58,000.
EU Code of Practice on Disinformation (2018); Communications Platform Act (2022); EU Strengthened Code of Practice on Disinformation (2022); Digital Services Act (2022)
EU Code of Practice on Disinformation (2018); EU Strengthened Code of Practice on Disinformation (2022); Digital Services Act (2022)
Iceland has criminalized the dissemination of incitement to violence, hatred, or discrimination against a person or group of persons due to their group characteristics.
International Covenant on the Elimination of All Forms of Racial Discrimination (1965); Additional Protocols to the Convention on Cybercrime (2003)
Under Romanian national law, the creation of accounts for online impersonation is a criminal offense, and social media and other platforms must remove such accounts.
The National Authority for the Administration and Regulation of Communications (ANCOM) has powers to block access to any online news platform that publishes content “promoting fake news regarding the COVID-19 evolution and the protection and prevention measures.”
Law 286/2009 (2014); EU Code of Practice on Disinformation (2018); National Authority for the Administration and Regulation of Communications (2020); EU Strengthened Code of Practice on Disinformation (2022); Digital Services Act (2022)
Under German national law, defamatory content must be removed within 24 hours of notification, and platforms must produce biannual transparency reports for evaluation. Failure to comply risks fines of between €500,000 (c.$514,000) and €5,000,000 (c.$5,140,000).
Network Enforcement Act (2017); EU Code of Practice on Disinformation (2018); EU Strengthened Code of Practice on Disinformation (2022); Digital Services Act (2022)
Under French national law, the courts can order the removal of misinformation (such as that shared by bots), or the delisting of a website that hosts misinformation, within 48 hours. This power is given to the courts in the three months before an election.
EU Code of Practice on Disinformation (2018); Law Against the Manipulation of Information (2018); EU Strengthened Code of Practice on Disinformation 2022; Digital Services Act (2022)
Under Portugal’s national law the state is required to protect citizens from people who produce, reproduce, and disseminate misinformation. Citizens can file complaints to the media regulator.
EU Code of Practice on Disinformation (2018); Law Against the Manipulation of Information (2018); Charter on Human Rights in the Digital Age (2021); EU Strengthened Code of Practice on Disinformation (2022); Digital Services Act (2022)
Under Greek national law the sharing of any disinformation that causes fear, harms the national economy, defense, or public health risks a prison sentence of up to five years. This applies to the director or owner of the outlet publishing the information.
EU Code of Practice on Disinformation (2018); Article 191 of Penal Code (2021); EU Strengthened Code of Practice on Disinformation (2022); Digital Services Act (2022)
Saudi Arabia criminalized the production, preparation, and transmission of content threatening ‘public order,’ which includes misinformation, on social media. This means that individuals sharing such content risk fines of up to 3 million riyals (c.$800,000) and a five-year prison sentence.
Saudi Arabian Anti-Cybercrime Law (2007)
The United Arab Emirates criminalized the publishing of false information online. Failure to comply risks one year in prison and a minimum Dh 100,000 ($27,225) fine, or two years and a minimum Dh 200,000 ($54,550) fine if the crime was committed during a pandemic, emergency, or crisis. Use of bots to share, reshare, or circulate fake news can lead to a prison term of two years, a fine ranging between Dh 100,000 ($27,225) and Dh 1,000,000 ($272,258), or both.
Federal Decree-Law No. 34 (2021)
Kazakhstan criminalizes the dissemination of knowingly false information. Noncompliance risks a prison term of up to seven years. Kazakhstan does not uphold freedom of the press and has pursued criminal investigations against media outlets for publishing disinformation.
Article 274, Part 3 of the Criminal Code (2014)