Prebunking: The Key to Getting Ahead of Midterm Misinformation

October 27, 2022

With midterm elections quickly approaching, Trust & Safety teams on every major platform face the same conundrum: minimizing the spread and effects of election-related misinformation. While platforms have historically been reactive to these trends, in the last few years there’s been a more serious effort towards prebunking, or dispelling misinformation before it takes root. A comprehensive strategy to bolster prebunking and reduce misinformation needs to take into account a variety of different tactics.

Prebunking: Like Snopes, But Not Really

Unlike debunking, which involves reactive measures to expose false statements, prebunking aims to dispel misinformation before it spreads. It isn’t an entirely new concept, but how platforms implement it as policy has yet to be fully fleshed out. As a practice, it began to develop after the 2016 presidential election, in response to the organized disinformation campaigns and the ‘infodemic’ that took hold of the country and its most-used platforms. Prebunking gained further momentum during the Covid-19 pandemic, particularly around vaccines. With every major event or trend, there are likely to be swarms of misinformation running rampant on platforms, and prebunking is an excellent strategy to combat this phenomenon.

Typically, prebunking involves labeling, an integral part of policy enforcement, though it’s not limited to this. Platforms may use automation tools to track relevant keywords and add a warning, a piece of context, or links to external organizations with further information about a particular topic. YouTube, for example, has teamed up with Google’s Jigsaw team to produce prebunking videos on a number of topics, including the war in Ukraine and Covid-19 vaccines. Alongside the videos, YouTube also provides links to organizations like the World Health Organization, rounding out its prebunking approach with authoritative sources of accurate information. Twitter began prebunking during the 2020 election, placing messages at the top of users’ feeds with specific election-related information. The platform’s goal was to ensure that all US-based users were properly informed about voting so as to dispel myths about election fraud. Efforts like these indicate that dispelling rumors, myths, and potentially dangerous information before it spreads may, in fact, be possible.
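To make the labeling mechanics concrete, here is a minimal sketch in Python of keyword-triggered context labels. The keyword list, label text, and linked URL are illustrative assumptions, not any platform’s actual implementation; production systems pair curated, multilingual term sets with machine-learning classifiers and human review.

```python
import re
from typing import Optional

# Hypothetical keyword list; real systems maintain curated, regularly
# updated term sets per topic and per language.
ELECTION_KEYWORDS = ["ballot harvesting", "rigged election", "dead voters"]
PATTERN = re.compile(
    "|".join(re.escape(k) for k in ELECTION_KEYWORDS), re.IGNORECASE
)

# Illustrative context label pointing users to an authoritative source.
CONTEXT_LABEL = {
    "text": "Learn how election security works in the US.",
    "link": "https://www.eac.gov/",  # e.g., an official election authority
}

def prebunk_label(post_text: str) -> Optional[dict]:
    """Return a context label if the post matches a tracked keyword."""
    if PATTERN.search(post_text):
        return CONTEXT_LABEL
    return None
```

In practice, a matched post would be rendered with the label attached rather than returned as a dictionary, but the principle is the same: the context appears alongside the content before a user ever engages with it.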

Prebunking Ahead of Election Season

It goes without saying that prebunking came about as a reaction, not as a proactive strategy. Given the widespread disinformation campaigns of the 2016 election and the continued spread of misinformation on a variety of topics in the years since, it’s clear that combating this problem proactively is the best way forward.

This election season, platforms have done some of the following:

  • Twitter: Tweets related to “elections and civic events” that contain misleading information will be “labeled with links to credible information or helpful context,” and certain labeled tweets will be blocked from being liked or shared to stop the spread of misinformation. The platform also presents users with prebunks, which offer information about elections, voting, and misinformation.
  • Meta: The company has not released specific policies or guidelines with regard to dispelling misinformation or prebunking. While they do work with independent fact checkers, it remains unclear what part of that cooperation, if any, is part of a prebunking strategy.
  • YouTube: YouTube has a robust prebunking strategy, which includes its video series with Jigsaw, outlined above. It also enforces a three-strike policy for accounts that spread misinformation three times within a 90-day period.

With the US midterm elections quickly approaching, platforms should have implemented their prebunking strategies yesterday. That being said, there’s always work to be done. Ensuring users are being presented with accurate information is an important step for platforms to take to guarantee trustworthiness.

Creating a Comprehensive Prebunking Strategy

Whether by producing videos, suggesting links to reputable sources of information, or taking some other route, platforms should have a strategy for prebunking. One of the simplest ways to incorporate it into a broader content moderation policy is labeling. By understanding the discourse around a particular topic, Trust & Safety teams can train their automation tools to flag specific keywords or phrases. Flagged items can then be sent to moderators for review, and from there, policy dictates what’s done with them.
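The flag-and-route step described above might look something like the following sketch. The phrase set, the FlaggedPost type, and the in-memory queue are hypothetical stand-ins; a real pipeline would feed a persistent store behind a human-review tool.

```python
from dataclasses import dataclass
from queue import Queue

TRACKED_PHRASES = {"stolen election", "vote flipping"}  # illustrative only

@dataclass
class FlaggedPost:
    post_id: str
    text: str
    matched_terms: list  # which tracked phrases triggered the flag

# Stand-in review queue; production systems persist flagged items
# rather than holding them in memory.
review_queue: "Queue[FlaggedPost]" = Queue()

def flag_for_review(post_id: str, text: str) -> None:
    """Route keyword-matched posts to moderators; policy then dictates
    whether to label, limit, or remove the content."""
    hits = [p for p in TRACKED_PHRASES if p in text.lower()]
    if hits:
        review_queue.put(FlaggedPost(post_id, text, hits))

flag_for_review("post-123", "They caught vote flipping on camera!")
print(review_queue.get().matched_terms)  # ['vote flipping']
```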

But this is only the first step. Language changes, and on the internet it changes at nearly lightning speed. It’s no secret that users across platforms employ euphemistic language and codewords – or what’s known as ‘algospeak’ – in order to fly under the radar of moderation teams. Only by incorporating intelligence that detects these linguistic choices and changes can platforms truly combat misinformation before it takes hold. For precisely this type of problem, companies need to constantly monitor different sources of harmful chatter, which may occur on or off a platform. Solutions like ActiveFence’s provide real-time access, updates, and analysis of these occurrences, enabling platforms to get ahead of trends and implement policies to prevent their spread.
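As an illustration of the normalization side of that problem, the sketch below maps common character substitutions and codewords back to canonical spellings before keyword rules run. The substitution table and codeword entries are assumptions for demonstration; a production system would update these mappings from live intelligence rather than hard-code them.

```python
# Illustrative substitution and codeword tables; real algospeak detection
# relies on continuously updated intelligence, not a static map.
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"}
)
CODEWORDS = {"unalive": "kill"}  # hypothetical codeword dictionary

def normalize(text: str) -> str:
    """Map evasive spellings and known codewords to canonical forms
    so that downstream keyword rules still match."""
    text = text.lower().translate(SUBSTITUTIONS)
    for code, canonical in CODEWORDS.items():
        text = text.replace(code, canonical)
    return text

print(normalize("v0ter fr4ud"))  # voter fraud
```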

Platforms not yet working with fact-checking organizations should explore this as another prong in their prebunking strategy. Twitter, for example, has publicized that it works with 10 fact-checking organizations, including five that work in Spanish, for maximum coverage. Other tools include publicizing links to reputable sources of information, offering context for political or health-related advertisements, and even explicitly refuting misinformation trends.
