That Doesn’t Sound Right: How Harmful Content Causes Churn in Audio Streaming

May 11, 2023

For audio streaming platforms, the undesired outcomes of harmful content include user and creator churn, legal liabilities, and negative press attention. But as audio streaming platforms grow, detecting and stopping this abuse becomes a challenge of scale, speed, and expertise. In this blog post, we will outline the major content risks for audio streaming platforms, their consequences, and proposed solutions.

The harmful content mix

Harmful, illegal, and otherwise violative content is not a new problem. In fact, internet service providers and user-generated content platforms have been dealing with various forms of online harm pretty much since the launch of the internet.

In audio streaming platforms, however, this content takes on unique qualities. The audio-first nature of these platforms can mislead trust & safety teams into thinking that harmful content is found only in the audio files themselves. While most harmful content may indeed be in the audio, additional risks lie in a file’s metadata (like track and user names), images (like album covers), and reviews. Additionally, the abuse areas that impact audio platforms are distinct, spanning both offensive and illegal content:

  • Hate speech: Audio platforms can be used to share violent messaging and extremist manifestos. Track and user names can contain extremist keywords (like obfuscated neo-Nazi references; see the normalization sketch below this list), and cover images can contain swastikas and other related imagery.
  • Disinformation: Podcasts are often used by disinformation agents to spread false and misleading content in order to promote agendas, influence elections, and generate social unrest.
  • Terrorism: Terrorist organizations frequently use audio sharing platforms to promote violent nasheeds (hymns that promote terrorist activities), which are hard to detect without specialized knowledge. This type of content is not only dangerous but also illegal, and its removal is generally required within 24 hours of notice, and in some jurisdictions even faster.
  • Self-harm and suicide: Groups promoting self-harm and eating disorders frequently share playlists, songs, and subliminal tracks that encourage these behaviors, offering affirming messaging and helping listeners suppress hunger.
  • Copyright infringement: The unauthorized use of copyrighted material by music streamers presents a risk to content creators, as well as to the audio streaming service itself.

Example of a subliminal audio track used to encourage eating disorders
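To make the metadata risk concrete, here is a minimal sketch of how a platform might normalize track and user names before keyword matching, so that simple obfuscation (letter-number swaps, separator padding) does not slip through. Everything here is illustrative: the denylist, field names, and substitution map are placeholder assumptions, not any production system.

```python
import re
import unicodedata

# Placeholder denylist for illustration only; a real deployment would use
# vetted, regularly updated intelligence feeds, not a hard-coded list.
BLOCKED_TERMS = {"heil"}

# Common character substitutions used to evade keyword filters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lower-case, strip accents, undo simple leetspeak, drop separators."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"[\s._\-*]+", "", text)

def flag_fields(fields: dict) -> list:
    """Return the names of metadata fields whose values match the denylist."""
    return [name for name, value in fields.items()
            if any(term in normalize(value) for term in BLOCKED_TERMS)]

# A track name that evades a naive filter but not the normalized check.
print(flag_fields({"track_name": "H.3.1.L anthem", "user_name": "dj_ok"}))
```

In practice, the denylist would come from a curated intelligence source and be paired with fuzzy matching, but even this simple normalization defeats the most common evasion tricks.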

Turning up the volume

When harmful, offensive, and illegal content exists on a platform in smaller quantities, it is generally manageable by a small content moderation team. Less sophisticated operations can rely on reactive detection (responding to user flags) and manual human review to keep audio streaming platforms safe.

However, as these streaming platforms grow, so too does the volume of potentially violative content that trust & safety teams are expected to handle. Using the same methodology that worked for a lower volume often leaves these teams with mounting piles of user-flagged items to review. Moreover, this content may require specialized knowledge and linguistic capabilities that smaller moderation teams simply do not have.

Amplifying the risk

When high volumes of violative content are not handled, that content ultimately surfaces in user feeds, amplifying the potential risk for platforms. This risk can be broken down into three main categories:

  • Business risk: A core success metric for audio platforms is user retention. Listeners who come across illegal or offensive content are likely to feel unsafe, which can result in platform abandonment and churn, among both listeners and content creators.
  • Brand risk: Regardless of platform type, harmful content tends to draw negative press attention. Poor handling of such content creates brand risk: a platform that doesn’t handle abuse efficiently can be labeled as unsafe in the media.
  • Legal risk: Certain types of harmful content, especially terrorist content, carry significant legal risk if not handled promptly. For example, in the EU, platforms are required to remove terrorist content within one hour of being notified by legal authorities.

The solution for audio streaming

As with any multifaceted problem, the solution to harmful content on audio streaming platforms has several components. Teams need efficient ways to proactively detect platform risks and to moderate high volumes of audio, visual, and text content across multiple languages and abuse areas. Traditionally, this would require sophisticated mechanisms and highly specialized teams – an expensive and complex endeavor. To keep users safe while avoiding additional costs, trust & safety teams should consider:

  1. Proactive detection: Instead of waiting for users to flag content, which exposes them to malicious material and generates risk, platforms should identify violative content proactively, before it becomes a problem. Proactive detection is made possible by audio media matching, image recognition, and text-based content detection (a pipeline sketch combining all three steps follows this list).
  2. Specialized AI: By using specially trained AI to review and score content, smaller trust & safety teams can evaluate risks they wouldn’t otherwise know exist. Imagine an English-speaking content moderation team responsible for moderating highly nuanced violations like Islamist nasheeds: AI enables them to understand the content risk without understanding the content itself.
  3. Operational improvements: Creating organized, preferably automated processes for handling content allows teams to action higher volumes of harmful content faster, reducing business, brand, and legal risks.
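To show how these three pieces might fit together, here is a minimal, hypothetical pipeline sketch. All of it is assumption for illustration: the hash store, the classifier stand-in, and the thresholds are placeholders, and real audio media matching would use perceptual fingerprints that survive re-encoding rather than a cryptographic hash.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical store of hashes of known violative audio (e.g., tracks
# already confirmed by expert review). A cryptographic hash keeps this
# sketch self-contained; real systems use perceptual fingerprinting.
KNOWN_BAD_HASHES: set = set()

@dataclass
class Decision:
    action: str  # "remove", "review", or "approve"
    reason: str

def audio_hash(audio_bytes: bytes) -> str:
    return hashlib.sha256(audio_bytes).hexdigest()

def risk_score(text: str) -> float:
    """Stand-in for a specially trained classifier scoring text 0.0-1.0.
    Here, a trivial keyword heuristic, purely for illustration."""
    return 0.9 if "manifesto" in text.lower() else 0.1

def moderate(audio_bytes: bytes, metadata_text: str) -> Decision:
    # 1. Proactive detection: match uploads against known violative audio.
    if audio_hash(audio_bytes) in KNOWN_BAD_HASHES:
        return Decision("remove", "matched known violative audio")

    # 2. Specialized AI: score text surfaces (track names, descriptions).
    score = risk_score(metadata_text)

    # 3. Operational workflow: thresholds route content automatically, so
    #    human moderators only see the ambiguous middle band.
    if score >= 0.8:
        return Decision("remove", f"high risk score {score:.2f}")
    if score >= 0.4:
        return Decision("review", f"needs human review, score {score:.2f}")
    return Decision("approve", f"low risk score {score:.2f}")

print(moderate(b"fake-audio-bytes", "My summer mixtape"))
```

The design point is the routing: confident matches and high scores are actioned automatically, while human reviewers spend their limited time only on the ambiguous middle band.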

While teams could implement these improvements on their own, dedicated solutions like ActiveFence’s Content Moderation Platform support these initiatives faster and more cost-effectively.

Our solution for audio streaming platforms includes automated harmful content detection across all media types, surfacing malicious content across abuse areas before it ever reaches user feeds. It also includes a Content Moderation Platform with a dedicated moderation UI and automated workflows, enabling faster, smarter moderation decisions. Our content detection is based on intel-fueled, contextual AI that provides explainable risk scores, drawing on the aggregate knowledge of a large, specialized team so you don’t have to hire your own subject matter experts.

See for yourself how ActiveFence helps audio streaming platforms like SoundCloud and Audiomack ensure the safety of their users and platforms by requesting a demo below.
