In the third edition of the ActiveFence Policy Series, we examine the core components of major tech companies’ health and electoral disinformation policies. This installment focuses on how platforms combat the spread of electoral disinformation.
The risk posed by misleading or false information has grown significantly in recent years. The world is witnessing more and more large-scale, coordinated information campaigns, powered by a proliferation of unregulated ‘news’ outlets. Recent examples include the 2016 UK Brexit referendum and US presidential election, the 2017 French presidential election, the 2018 Taiwanese local elections, and the 2020 US presidential election. The discourse surrounding the 2020 US presidential election, and its violent aftermath, made clear that political disinformation has reached new heights. Because much of today’s disinformation spreads online through tech platforms, many of these companies have drawn up guidelines and policies in response to the abuse of their services. While some platforms remove material they determine to be fake outright, others flag potentially problematic material to warn their users.
In this third edition of the ActiveFence Policy Series, we evaluate twenty tech platforms that have implemented policies addressing electoral disinformation and civic processes. As in the first and second articles in the series, we provide an overview of how different online platforms navigate election disinformation.
Policy Challenges
Online platforms of all sizes have created comprehensive community guidelines and policies. Though they go by different names (community guidelines, content policies, trust & safety policies), all set the ground rules for platform use, outlining what users can and cannot do and establishing transparent processes that keep users and platforms safe.
Creating these policies is a complex task, requiring a thorough evaluation of brand values and intended platform use, an understanding of on-platform activities, monitoring of international legislation, and ongoing analysis of best practices among similar technology companies. Additionally, each platform category faces its own complications, owing to the varying types of user-generated content it hosts.
Social Media Platforms
While false and misleading information is an ever-present problem for platforms, it becomes particularly pervasive during election cycles. In 2021, Frontiers in Political Science published “Social Media, Cognitive Reflection, and Conspiracy Beliefs,” which found that using social media as a news source correlates with a greater likelihood of endorsing conspiracy theories.
As social media platforms are frequently used to share information, they continuously develop guidelines to help their moderators keep users safe from deceptive content. While some platforms have developed specific policies, others work with fact-checkers and external organizations to verify claims made during elections.
Instant Messaging
Unlike social media, where user-generated content is mainly created for a large audience or public consumption, instant messaging platforms are generally used for smaller group conversations between individuals who are already in contact. Because these conversations are closed, community guidelines are less specific than those of social media companies, focusing instead on ensuring that users do not impersonate others or misrepresent the source of a message.
Video Sharing
Video sharing platforms are a popular source of news and information, with reputable news networks uploading clips and reports from their daily news shows every hour. Alongside these legacy media accounts, a multitude of online commentators, comedians, independent journalists, and influencers are active in the conversation about current affairs.
Given the number of accounts sharing media about current affairs and elections, video sharing platforms have put guidelines in place to regulate that content and prevent disinformation from spreading unchallenged. While some platforms are specific in their disinformation policies, others take a broader approach, using more catch-all terminology.
File Sharing
File sharing platforms often serve as central infrastructure for distributing disinformation at scale across a range of online platforms. Aware that their services could be abused to store materials weaponized to attack the legitimacy of civic processes and elections elsewhere, many file sharing platforms have enacted a number of content prohibitions.
The Ongoing Challenge
These complex and sensitive issues continue to evolve alongside the online world, user behavior, and the political climate. To help navigate these shifts, ActiveFence’s research team continuously monitors relevant developments across the trust and safety ecosystem.
The third report in ActiveFence’s Policy Series details the election disinformation policies of twenty of the biggest platforms, equipping Trust & Safety teams with the information they need to tackle electoral disinformation.
For the complete guidelines and examples of election disinformation policies, download the full report.