A false claim by one disinformation journalist spiraled into a full-blown campaign, spreading to state officials and mainstream media. Here, ActiveFence presents the story of the “US-backed biowarfare laboratories” narrative that went viral.
In the years leading up to Russia’s invasion of Ukraine, state-affiliated and unaffiliated actors have been fueling Russia’s disinformation machine, laying the groundwork for what we are now witnessing. As the war intensifies, Trust & Safety teams, content moderators, fact-checkers, and others are fighting a massive influx of disinformation, cyber warfare, and propaganda.
ActiveFence has been monitoring disinformation campaigns, tracking their sources, and studying how trends disseminate. From official state media to pro-Russian individuals, journalists, and groups, disinformation is spreading in an effort to weaken Ukraine and legitimize the war. As the battle on Ukrainian soil rages, so does the battle to protect the truth online.
In this blog, we share how one disinformation actor twisted a seed of truth to start a series of lies, with the narrative spreading to mainstream social media platforms as well as to the mouths of Russian and Chinese state officials.
As with most disinformation narratives, this trend contains a grain of truth, making it easier for viewers to believe. In this case, a collaboration between Ukraine and the US has been distorted to justify Russia’s attack.
A program of the US State Department, the Biological Threat Reduction Program, collaborates with Ukrainian laboratories “to counter the threat of outbreaks (deliberate, accidental, or natural) of the world’s most dangerous infectious diseases.” However, there are no US-run biological weapons labs operating in Ukraine. Despite this, these labs are a frequent target of conspiracy theories claiming that they are US-run biological warfare projects.
Dilyana Gaytandzhieva is a Bulgarian pro-Russia disinformation actor who is active both on her own website, Armswatch.com, and on other pro-Russian outlets. An independent journalist and Middle East correspondent, she has published many erroneous reports on weapons supplies to terrorists in Syria and Iraq. Since January 2022, she has been the source of many false narratives against Ukraine.
Dilyana published an article claiming that the US government is developing bioweapons in Eastern Europe and the Caucasus, primarily Georgia and Ukraine. Using the real, existing labs of the Biological Threat Reduction Program, she spun this truth into lies about their purpose.
Since its original publication, this narrative has developed over time, spreading to both mainstream and fringe social platforms.
“The US embassy in Ukraine has been caught scrubbing evidence of the existence of biolabs in Ukraine while mainstream media and fact checkers have begun telling the masses that the biolabs don’t exist.”
The trend has grown tremendously in popularity, with hashtags such as #usbiolabs, #usbiolabsinukraine, #nocoincidence, #khazarianmafia, and others. ActiveFence has witnessed this trend enter more mainstream media and conversations, with the narrative evolving daily. Built on lies, this narrative legitimizes Russia’s invasion of Ukraine and paints Ukraine as a tool of the US.
A single disinformation actor has the power to promote an influential storyline originating from just an ounce of truth. Drawing on previous false narratives, Dilyana was able to promote the narrative of the US biowarfare laboratories at the right time, pulling in so-called facts and scattering them across a map of Ukraine. Spreading to other influencers worldwide and sowing fear, the harmful narrative reached some of the largest social media platforms, instant messaging platforms, forums, US media outlets, and, eventually, the Russian government itself.
During wartime, the public is far more susceptible to believing false narratives and disinformation. Threat actors take advantage of the uncertainty of war to spread disinformation far and wide across the web, including on mainstream platforms of all sizes. To effectively contain the threat of disinformation, online platforms must act proactively. With a deeper understanding of threat actors, mechanisms, narratives, and tactics used to spread disinformation, platforms can monitor actors and identify emerging trends as they arise, ensuring that they do not become a weapon in the current conflict in Ukraine or in other geopolitical events.