ActiveFence has identified bad actors exploiting podcasting and music hosting services in order to spread harmful content online. Our unique cross-platform capabilities allow us to assist our partners in countering these threats. This blog post analyzes some of the tactics and techniques employed, with the complete report, available for download below, providing further context and examples.
The podcasting phenomenon is upon us. In an era long characterized by short soundbites on mainstream news outlets and videos on social media platforms, there has been a countervailing trend towards in-depth content. As of January 2021, there were over 2M podcasts registered by Google, with more than 43M episodes in over 100 languages. This explosion of content gives people worldwide the ability to connect and join communities and conversations centered on shared interests, be they around politics, religion, architecture, fashion, literature, or even UFOs, myths, and legends. However, alongside these legitimate pursuits, bad actors have also seen an opportunity to ride the wave and spread their toxic ideologies unfiltered into the ears of listeners around the globe.
What Is The Danger?
While the vast majority of podcasts shared across podcasting platforms are innocent, ActiveFence has found that podcast directories can also be targeted for abuse by dangerous organizations promoting harmful content that can reach millions. Mainstream podcast and music platforms are inadvertently hosting, and therefore promoting, noxious content produced by Islamist terror groups, far-right extremists, and disinformation organizations, to name just three examples.
The consequences of the proliferation of this content are two-fold. The first is that impressionable users are vulnerable to radicalization by content found on mainstream platforms, inflicting societal damage, promoting social breakdown, and risking inter-group violence. The second is that the presence of harmful material damages the reputations of, and traffic to, these legitimate platforms. While attempts are made to moderate, bad actors are employing a range of tactics to remain connected and undetected.
How Does Bad Content Go Undetected?
To understand the tactics of bad actors using podcasting platforms as broadcasting vectors, we analyzed the activities of harmful and often dangerous organizations across multiple online platforms. Our systems followed their communications from untracked sources into legitimate spaces. Having identified some of the harmful content that pervades mainstream platforms, we have divided their strategies for avoiding detection into two categories: hiding messages in song, and concealing content behind opaque descriptions.
Hiding Messages In Song
A popular tactic used to disseminate the messaging of banned or censored groups is to upload messages to mainstream podcasting and music sharing platforms as songs. Songs have long played a role in both transmitting and reinforcing identity, the most obvious example being national anthems, popularized in the 19th century. Two examples of this tactic identified in our research are Islamist terrorist affiliates and proponents of the QAnon conspiracy theory. Both movements produce and share songs that act as propaganda for new and potential initiates and reinforce group cohesion online. Using song as the medium of messaging provides the added benefit of shielding content from moderators, since lyrics are harder to interpret and vet than plain speech.
One example is the original song WWG1 WGA, recorded and uploaded by QAnon conspiracy theory promoters. The song’s title is an abbreviation of the mantra “Where we go one, we go all,” and it is written as an anthem for the QAnon movement. This is one of many original songs by QAnon propagandists that have been uploaded to multiple mainstream audio platforms. These audio files remain accessible today despite attempts to remove QAnon content from social media and podcasting platforms. The songs are uploaded by seemingly different artists and then collated into playlists. These files are not harmless: they assert the veracity of the disinformation movement’s claims and urge listeners to act in support of a nefarious cause.
In a similar fashion, radical Islamists are uploading nasheeds (traditional Islamic vocal music) to mainstream podcasting platforms. While the majority of nasheeds are spiritual compositions, these tracks glorify, encourage, and threaten violence and terrorist acts. These files, some of which praise specific terror organizations such as ISIS, Hamas, and Hezbollah, are hidden in and amongst legitimate tracks and are found on a wide array of podcasting platforms. One prominent and widely shared nasheed, produced by a media wing affiliated with ISIS, translates to Let’s Go For Jihad. The nasheed is recorded in German and urges listeners to take up arms and join ISIS and its Jihad.
Similarly, songs uploaded by bands supportive of the Gaza-based terror organization Hamas praise specific terrorists by name and encourage listeners to carry out further acts of terrorism. The fact that these messages are put to music, and that they are recorded in non-European languages, makes it difficult for non-specialist moderators to assess the content. Links to these audio files are shared in groups on Telegram and other alternative messaging platforms that promote Islamist terrorism. Concerningly, ActiveFence has identified a tactic in which content creators use the ‘Album Name’ field to share the web addresses of specific Islamist-terror groups on Telegram. These songs are therefore not only rallying material for existing supporters but also active recruiting tools.
Concealment & The Use Of Surrogates
Much of the work of moderators operating at scale involves finding harmful content using keywords. However, content creators are countering this technique by omitting specific references to their audio’s content in its textual description. Using our cross-platform analysis, ActiveFence found that rather than describe the content on the podcasting platforms themselves, creators explicitly describe and link to it on social media accounts, which are promoted across relevant communities. By using multiple platforms, bad actors can evade on-platform moderation, allowing the harmful material to remain on these sites discreetly while still finding and building an audience.
To promote themselves, radical organizations, be they religious fundamentalists, racial and cultural supremacists, or conspiracy theorists, must be able to reach new and potential recruits. Once identified, these individuals can be groomed by operatives and turned into agents of chaos and violence. ActiveFence’s unique cross-platform and specialist research system enables us to locate this harmful material and understand the techniques and tactics employed by dangerous propagandists.
The information detailed in this article is just a sample of the content found across mainstream podcasting platforms, with the complete report, including on-platform examples, available below. The content currently hosted is shocking and poses a risk to users, wider society, and the platforms themselves. Tackling this menace requires an approach to moderation that uses sophisticated intelligence gathered from across the internet.
ActiveFence is always working to locate threats, and the bad actors from whom they originate, to help our partners keep their platforms the safe spaces they are intended to be.