The Trevor Project sought proactive protection from harmful content. ActiveFence helped automate moderation, reducing reliance on user flags.
As a non-profit, The Trevor Project must use its limited resources effectively to make a big impact. Given the volume of content posted daily to TrevorSpace, the organization's peer-to-peer platform, it is impossible to manually monitor every interaction happening on the platform.
Prior to ActiveFence, the team relied on manual moderation, historical systems, and user flags to catch content that violated their policies.
The team was looking for ways to address violative content faster. In cases of suicide and self-harm, acting quickly is crucial so that a user can access life-saving care in time.
When launching TrevorSpace, they understood the importance of implementing safety by design. After launch, however, the site's popularity meant they needed a vendor to help reduce reliance on user flags and automate content moderation for specific abuse areas, so they could act on harmful content more efficiently.
In line with their mission to prevent LGBTQ+ youth suicide and self-harm, The Trevor Project needed a content moderation vendor that could cover the violations most critical to them: harassment and bullying, hate speech, child solicitation, and suicide and self-harm. Violation coverage alone was not enough; the quality of the models was also crucial. To reduce undetected content, they turned to ActiveScore, ActiveFence's contextual AI automated detection capability, to solve this challenge.
To strike a balance between providing a safe space for the community and allowing the freedom needed for it to grow, The Trevor Project needed a partner to implement their warnings-and-penalties guidelines quickly and effectively on TrevorSpace. When a user violates a guideline and the moderation team becomes aware of it, the user is issued warning points. TrevorSpace leverages ActiveOS codeless workflows to apply these policies automatically: anyone with 0-5 points automatically receives a warning, anyone with 6-7 points receives a two-week suspension, and those with over 8 points are permanently banned. They also use ActiveOS's moderation queue management to manually moderate community messages with greater efficiency.
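The escalation policy above is a simple threshold mapping, which a codeless workflow tool like ActiveOS can encode as rules. As a rough illustration (not ActiveFence's actual implementation), the logic can be sketched in a few lines; note that the policy as described leaves exactly 8 points ambiguous ("over 8" is banned), so this sketch assumes 8 points falls under suspension:

```python
def action_for_points(points: int) -> str:
    """Map a user's accumulated warning points to a moderation action.

    Thresholds follow the policy described above. The 8-point boundary is
    stated as "over 8 points" for a ban, so exactly 8 is treated here as a
    suspension -- an assumption made for this sketch.
    """
    if points > 8:
        return "permanent_ban"
    if points >= 6:
        return "two_week_suspension"
    return "warning"  # 0-5 points


if __name__ == "__main__":
    for p in (0, 5, 6, 7, 9):
        print(p, action_for_points(p))
```

In practice the workflow would also record when points were issued and by whom, but the core decision is just this tiered lookup.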
By using ActiveFence, The Trevor Project is able to ensure greater protection against the most egregious harms facing its community on the TrevorSpace platform. This includes customizing ActiveScore hate speech models with keyword lists that exempt words commonly used within the LGBTQ+ youth community, aligning detection with their policy.
By adopting a proactive approach to moderation, they have moderated thousands of forums on the platform and ensured that their users have a safe space to discuss the issues that matter most to them.
Tommy Marzella
VP, Social Platform Development & Safety