The Trust & Safety industry is deep into discussions around the EU's Digital Services Act (DSA), working to understand upcoming obligations and map out how to comply. Compliance will take many forms, from improving harmful content detection and internal processes to tracking moderation measures. Moreover, the DSA applies differently to different online entities: the Act defines four categories of online services (intermediary services, hosting services, online platforms, and very large online platforms, or VLOPs), and each category faces a different set of obligations, so every platform must evaluate its own needs and approach the Digital Services Act accordingly.
The DSA forces the industry to examine user safety more closely. With obligations for platform development, transparency, and additional risk mitigation measures, DSA compliance requires a holistic approach.
Pushing platforms to consider Trust & Safety throughout the product lifecycle, the DSA emphasizes three areas of safety: platform development and maintenance, transparency and accountability, and risk mitigation. Here, we review each of these areas.
A well-known principle within Trust & Safety is safety by design, which places user safety at the center of product development. Key to building a safe platform, this guiding principle ensures that user safety is prioritized from the very beginning of a product's creation rather than as an afterthought. It applies not only to new platforms but also to existing ones, whether they are launching new features or ensuring that products are maintained and updated with safety at the forefront.
Below is a partial list of DSA requirements for the platform itself, highlighting the importance of safety by design. For a full list of DSA requirements and to whom they apply, download our guide to the Digital Services Act.
All online platforms must have internal, user-friendly mechanisms built within their product that allow users to report illegal content.
Online platforms must be designed to meet product safety obligations and facilitate compliance. This includes platform interfaces that grant users autonomy over their decisions.
Ads cannot target children, nor can they target individuals based on sensitive data such as sexuality, ethnicity, or religion.
Regulators worldwide are beginning to hold online platforms accountable for their actions. The DSA is creating a legal environment where platforms must answer not only to governments but also to their users. The DSA includes several requirements concerning transparency, including the following:
All online services must publish transparency reports at least once a year, sharing their moderation activities against violative content.
All online services must provide information on specific content moderation policies and enforcement practices related directly to fundamental rights, including freedom of expression and data protection, in their terms and conditions.
Platforms must clearly inform users how their recommendation systems operate.
Platforms must also give users the option to opt out of receiving recommendations.
Similarly, platforms must inform users about the ads they receive and allow them to opt out easily.
Platforms must share the following information about ads received in real time:
Trusted flaggers are, according to the legislation, “entities that have demonstrated particular expertise and competence.” This includes individuals from third parties such as law enforcement or NGOs. Trusted flaggers will be able to report illegal content through a direct channel to platforms, and their reports will be prioritized for action.
The DSA places additional requirements on VLOPs. These risk mitigation obligations will likely require more extensive measures to detect harmful content, including AI, threat experts, off-platform intelligence, and more. They include the following.
VLOPs must audit their risks and mitigation efforts not only internally but externally as well. Additionally, data must be provided to authorities and vetted researchers upon request.
The DSA creates new frameworks for assessing and responding to the spread of illegal content on platforms and services. This includes the requirement for VLOPs to formally assess their risks and implement procedures and mechanisms into their Trust & Safety practices to address risks and any evolving crises.
As a whole, the DSA calls for all platforms to be more accountable and responsible for the risks they pose, with specific emphasis on the spread of disinformation and the manipulation of electoral processes. The DSA points to a risk mitigation strategy called the EU Disinformation Code of Conduct. As of December 16, 2022, VLOPs must follow the Code of Conduct. The DSA also requires non-VLOPs to implement risk mitigation measures and proposes the code as an exemplary mitigation approach.
Please refer to ActiveFence’s Disinformation Code explainer to learn more.
The DSA adds a sense of urgency, and an opportunity, for platforms to consider user safety from multiple perspectives. With the DSA already in force, companies must take a multi-pronged approach to compliance: by February 2024, all regulated entities will be subject to the DSA's obligations. Preparations should include evaluating and improving platform policy, content moderation efforts, transparency, and more.
All online platforms will require different measures, as they serve different audiences and purposes. Luckily, ActiveFence's end-to-end Trust & Safety solution tailors to platforms' needs with a customizable content moderation platform and threat-specific intelligence.
ActiveFence’s platform provides analytics of all activities so teams can easily produce transparency reports and share data with researchers and government authorities.
ActiveFence’s content moderation platform allows teams to customize rule-based workflows to automate efforts, such as user strike systems, high-risk content, or items flagged by trusted flaggers.
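To make the idea of rule-based workflows concrete, here is a minimal sketch of how strike tracking, risk-score thresholds, and trusted-flagger prioritization could fit together. All class names, thresholds, and action labels below are hypothetical illustrations, not ActiveFence's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A flagged piece of content awaiting a moderation decision."""
    user_id: str
    risk_score: float          # hypothetical score: 0.0 (benign) to 1.0 (high risk)
    trusted_flagger: bool = False

@dataclass
class ModerationEngine:
    """Illustrative rule-based workflow: thresholds and actions are invented."""
    strike_limit: int = 3
    high_risk_threshold: float = 0.9
    strikes: dict = field(default_factory=dict)

    def review(self, item: ContentItem) -> str:
        # Reports from trusted flaggers are routed to a priority queue,
        # reflecting the DSA's requirement to prioritize their notices.
        if item.trusted_flagger:
            return "escalate_priority_queue"
        # High-risk content is removed and a strike recorded against the user;
        # repeat offenders are suspended once they reach the strike limit.
        if item.risk_score >= self.high_risk_threshold:
            self.strikes[item.user_id] = self.strikes.get(item.user_id, 0) + 1
            if self.strikes[item.user_id] >= self.strike_limit:
                return "suspend_user"
            return "remove_content"
        return "allow"

engine = ModerationEngine()
print(engine.review(ContentItem("u1", 0.95)))  # remove_content (first strike)
```

In a production system each branch would be a configurable rule rather than hard-coded logic, but the shape is the same: conditions evaluated in priority order, with state (such as strike counts) carried between decisions.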
The DSA emphasizes countering specific threats, such as disinformation and election manipulation, cyber violence against women, and harm to minors online. ActiveFence's threat-based solutions specialize in these areas, identifying and analyzing threats in every corner of the web by monitoring over 10 million sources of online activity and chatter across platforms and content formats.
ActiveFence strengthens platform defenses by providing in-depth insights into platform vulnerabilities to help teams identify blindspots and policy loopholes.
With customized moderation workflows, Trust & Safety teams can automatically notify users of moderation decisions, such as actions on user reports, appeals, and violations.
The DSA applies to all 27 countries in the EU. ActiveFence covers over 90 languages and leverages regional and linguistic expertise to ensure user protection worldwide.
Trust & Safety teams will need different tools, harmful content detection methods, and software. ActiveFence orchestrates all Trust & Safety operations, placing DSA compliance measures under one roof. To learn more about the DSA and how to comply, download our guide.