Why Everyone Is Right to Be Worried About Misinformation

February 1, 2024


As we head into 2024, one topic is top of mind for policymakers, regulators, and those working in national security: the rapid spread of false and misleading information. Indeed, the World Economic Forum’s recently released Global Risks Perception Survey 2023-2024 ranked misinformation and disinformation as the most significant global risk. This concern puts increased pressure on Trust & Safety teams to quickly assess and adequately handle misinformation on their platforms.

The ranking should come as no surprise, given our global society’s vulnerability to major geopolitical events (war, international relations, and national elections), coupled with ongoing social fracturing.

Handling this confluence of events is more challenging than ever, due both to the reduction in Trust & Safety resources that began in 2023 and continues today, and to the democratization of AI-generated content creation, which has amplified the transmission of persuasive and harmful content.

Understanding the Misinformation Risk

Misinformation can sway public sentiment and inspire or suppress appetite for civic participation, effectively moving the needle in elections. As seen in the US in 2020, Germany in 2022, and Brazil in 2023, it can also lead to violent escalations that jeopardize the smooth functioning of democracies.

Understanding trends in misinformation, both the narratives being pushed and the means of their dispersal, is an essential capability for Trust & Safety teams operating in this complex and strained global context.

The field of misinformation is fast-moving, and a major challenge for platform policy teams is finding the best approach to constructing user agreements that are both flexible and rigorous, regulating user-generated content while complying with the EU’s Digital Services Act and the UK’s Online Safety Act 2023.

Platforms should consider the impact of content within the context in which it is shared, the intent behind sharing it, and the implications it carries for platform security.

ActiveFence works to flag new harmful narratives, providing partners with the nuanced context needed to understand those narratives and their reach, and to establish the impact of misinformation.

(Mis)Information Wars

2024 is already charting a course of deteriorating global security and expanding armed confrontations in Europe, Asia, the Middle East, and Africa. Online, these conflicts play out as professional actors and partisan supporters engage in information operations to shift public opinion, both national and international, and to strengthen or weaken military resolve.

These information wars require vast quantities of persuasive content to be created and shared, some based on fact and some on fiction. Trust & Safety teams must understand these narratives to determine whether they are ‘innocent rumors’ or malign attempts to influence public opinion.

Consider two trends we have flagged concerning conflicts in 2024.

1. Ukraine

Narratives surfaced alleging that Western support for Ukraine’s defense is, in reality, a ploy to “exterminate Ukraine’s army” and allow the country to be socially and economically exploited by NATO and private criminal corporations.

The narrative appears designed to undermine internal Ukrainian morale and to decrease international support for financing Ukraine’s defense.

2. Middle East

Pro-Kremlin accounts have promoted narratives claiming that the October 7th attacks on civilians were faked. Focusing on populations already primed with another false narrative (the claim that the March 2022 Russian massacre in Bucha, Ukraine, was fabricated), these accounts pushed similar narratives about Israel.

The focus here is on weakening Israel’s casus belli by casting doubt on the nature of the Hamas-led October 7 attack.

This mass of overlapping conflicts creates significant difficulties for categorizing misinformation, especially given the electoral context of 2024.

Electoral Vulnerabilities

We are undertaking an almost unparalleled democratic experiment: for the first time, the populations of 83 countries, over 40% of UN member states, will vote within the same calendar year.

These elections span the US, India, the EU 27, Mexico, Indonesia, Pakistan, South Africa, Iran, Russia, Taiwan, and South Korea.

Each national ecosystem will be under stress, with the potential for the spread of false information rising sharply. This situation is not hypothetical: strategic elections frequently attract misinformation narratives, as actors ranging from states to conspiracy theorists seek to influence votes or prime audiences for future content.

Looking across the 2024 elections, we can already see platform threats in recorded events from Taiwan and the US.

  • In our work around January 2024’s Taiwanese general elections, we saw narratives echoing Russian anti-Ukrainian propaganda, deployed to harm the electoral chances of specific parties.

We saw coordinated claims promoted online alleging that the current DPP government, to which the CCP is particularly hostile, is engaged in a pro-US coup to capture Taiwan’s military and population for use against mainland China.

  • Similarly, early in the US 2024 election season, we have seen trending claims that the Republican nomination contest is a sham race to install an anti-Trump ‘globalist’ politician aligned against US interests.

For example, we identified claims that Vivek Ramaswamy, then a candidate for the Republican nomination and the candidate politically closest to former President Trump, was a Trojan Horse politician positioned to steal Trump’s voters once the former president had been imprisoned on false charges.

These trends are local manifestations of global stories that attempt to sow distrust in the electoral procedures that underpin our democracies. The overlap is not incidental: concepts are carried from one national ecosystem to another, with preexisting claims bolstering each new assertion.

Unleashing AI Technology

The events described above are not new, though their sheer volume dramatically raises the risk they pose. What is new is the digital landscape we find ourselves in: AI was a buzzword in 2020, but today it is a reality.

Threat actors of all levels of sophistication can harness the power of generative AI to produce effective, misleading, emotive content that can go viral.

We saw this in the Russia/Ukraine war when, in 2022, information actors fabricated and distributed a deepfake video of President Zelensky on Ukrainian TV, in which he appeared to announce: “There is no tomorrow, at least not for me. Now I have to make another difficult decision: To say goodbye to you. I advise you to lay down your arms and return to your families. It is not worth dying in this war.”

In addition to outright forgeries, AI-generated content is used to spread emotive, misleading political messaging. Its creation has been democratized, and it played an important role in Argentina’s 2023 election, where candidates used these technologies to attack one another and to speak to their bases.

We also see the IRGC-backed Ansar Allah (Houthi) movement and its allies using these same tools effectively to legitimize their attacks on Western-linked shipping in the Red Sea and to incite others to act around the world. In the upcoming high-risk elections, the consequences could be severe.

Tackling The Misinformation Threat

Trust & Safety teams’ fears about misinformation spreading on their platforms are well founded. The EU has already used the powers afforded by the DSA to open formal proceedings against X (formerly Twitter) for hosting illegal content and disinformation surrounding the Israel-Hamas war. If found non-compliant, the company faces a fine of up to $264M (6% of its annual global turnover). The legal liability of platforms hosting misinformation is not confined to the EU: the UK’s Online Safety Act 2023 has created new offenses around the spread of false information online, and strict action obligations exist in countries such as Singapore, India, and Brazil.

Complying with statutory requirements and securing user safety requires a multi-pronged, intelligence-backed detection system:

  1. Access insights that help your teams understand the misinformation trending online. These insights educate your moderators to evaluate flagged content and facilitate efficient, accurate decision-making.
  2. Access keywords and catalogs of malicious metadata (audio, image, text) associated with trending misinformation. These feeds support the automated detection of harmful content shared on your platform, or blocks on harmful content generation from your AI services (see the sketch after this list).
  3. Access deep investigations into the threat actors spreading the material, to understand whether coordinated actors have compromised your services to spread misleading information.
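
As a minimal illustration of point 2, the sketch below shows how a platform might screen incoming posts against a keyword list and a catalog of known-bad media hashes. The `MISINFO_KEYWORDS` and `KNOWN_BAD_MEDIA_SHA256` catalogs, the `Post` structure, and the `flag_post` routine are hypothetical stand-ins for whatever an intelligence feed would actually supply; this is not ActiveFence’s implementation.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical catalogs; in practice these would be supplied and
# continuously refreshed by an intelligence feed.
MISINFO_KEYWORDS = {"example banned phrase", "another trending claim"}
KNOWN_BAD_MEDIA_SHA256 = {"0" * 64}  # placeholder digest, not a real hash

@dataclass
class Post:
    text: str
    media: bytes | None = None  # raw bytes of an attached image/audio file

def flag_post(post: Post) -> list[str]:
    """Return the reasons, if any, a post should be routed to human review."""
    reasons = []
    text = post.text.lower()
    # Text screening: match against the trending-misinformation keyword list.
    for keyword in MISINFO_KEYWORDS:
        if keyword in text:
            reasons.append(f"keyword match: {keyword!r}")
    # Media screening: hash the attachment and look it up in the catalog.
    if post.media is not None:
        digest = hashlib.sha256(post.media).hexdigest()
        if digest in KNOWN_BAD_MEDIA_SHA256:
            reasons.append(f"media hash match: {digest[:12]}...")
    return reasons

if __name__ == "__main__":
    post = Post(text="This post repeats an example banned phrase verbatim.")
    print(flag_post(post))  # ["keyword match: 'example banned phrase'"]
```

In production, exact keyword matching would typically give way to trained classifiers, and cryptographic hashes to perceptual hashing that survives re-encoding, but the routing logic stays the same: match against an intelligence-backed catalog, then escalate to human review.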


Learn how to handle the misinformation challenge in an election year:

Election Integrity