Content moderators are a core – but also costly – component of any Trust & Safety team. Learn about the risks, and solutions to this unique challenge.
ISIS’s move from physical jihad to digital warfare has marked a new chapter in the group’s expansion efforts.
Artificial intelligence represents the next great challenge for Trust & Safety teams to grapple with.
2022 was a landmark year for Trust & Safety regulation and legislation around the world.
Alongside more robust technologies like AI and NLP, user flagging is a feature that belongs in every Trust & Safety team's strategy for platform security.
Transparency reports are becoming more important for platforms to publish – both from a legal and public relations perspective. We share how to get started.
Platforms' detection systems are trained to spot graphic CSAM, but its non-graphic counterpart often goes unnoticed, leaving users vulnerable.
A searchable, interactive guide to the legislation governing online disinformation in almost 70 countries.
Despite popular discourse, there are clear distinctions between censorship and content moderation.