Protect your AI applications and agents from attacks, fakes, unauthorized access, and malicious data inputs.
Control your GenAI applications and agents and assure their alignment with their business purpose.
Proactively test GenAI models, agents, and applications before attackers or users do.
The only real-time, multilingual, multimodal technology to ensure brand safety and alignment across your GenAI applications.
Ensure your app is compliant with changing regulations around the world across industries.
Proactively identify vulnerabilities through red teaming to produce safe, secure, and reliable models.
Detect and prevent malicious prompts, misuse, and data leaks to ensure your conversational AI remains safe, compliant, and trustworthy.
Protect critical AI-powered applications from adversarial attacks, unauthorized access, and model exploitation across environments.
Provide enterprise-wide AI security and governance, enabling teams to innovate safely while meeting internal risk standards.
Safeguard user-facing AI products by blocking harmful content, preserving brand reputation, and maintaining policy compliance.
Secure autonomous agents against malicious instructions, data exfiltration, and regulatory violations across industries.
Ensure hosted AI services are protected from emerging threats, maintaining secure, reliable, and trusted deployments.
A false claim by one journalist spiraled into a full disinformation campaign, spreading to state officials and mainstream media. Here, ActiveFence presents the story of the "US-backed biowarfare laboratories" narrative that went viral.
In the years leading up to Russia's invasion of Ukraine, affiliated and unaffiliated state actors fueled their disinformation machine, laying the groundwork for what we are now witnessing. As the war intensifies, Trust & Safety teams, content moderators, fact-checkers, and others are fighting a massive influx of disinformation, cyber warfare, and propaganda.
ActiveFence has been monitoring disinformation campaigns, tracking their sources, and studying how trends disseminate. From official state media actors to pro-Russian individuals, journalists, and groups, disinformation is spreading in an effort to weaken Ukraine and legitimize the war. As the battle on Ukrainian soil rages, so does the battle to protect the truth online.
In this blog, we share how one disinformation actor twisted a seed of truth to start a series of lies, with the narrative spreading to mainstream social media platforms as well as to the mouths of Russian and Chinese state officials.
As with most disinformation narratives, this trend has a grain of truth, making it easier for viewers to believe. In this case, a collaboration between Ukraine and the US has been distorted to explain the reasoning for Russia's attack.
A program of the US State Department, the Biological Threat Reduction Program, collaborates with Ukrainian laboratories "to counter the threat of outbreaks (deliberate, accidental, or natural) of the world's most dangerous infectious diseases." However, there are no US-run biological weapons labs operating in Ukraine. Despite this, these labs are a frequent target of conspiracy theories claiming that they are US-run biological warfare projects.
Dilyana Gaytandzhieva is a Bulgarian pro-Russia disinformation actor who is active both on her own website, Armswatch.com, and on other pro-Russian outlets. An independent journalist and Middle East correspondent, she has published many erroneous reports on weapons supplies to terrorists in Syria and Iraq. Since January 2022, she has been the source of many false narratives against Ukraine.
Dilyana published an article claiming that the US government is developing bioweapons in Eastern Europe and the Caucasus, primarily in Georgia and Ukraine. Using the real, existing labs of the Biological Threat Reduction Program, she spun this truth into lies about their purpose.
Since its original publication, this narrative developed over time, spreading to both mainstream and non-mainstream social platforms.
"The US embassy in Ukraine has been caught scrubbing evidence of the existence of biolabs in Ukraine while mainstream media and fact checkers have begun telling the masses that the biolabs don't exist."
The trend has grown tremendously in popularity, with hashtags such as #usbiolabs, #usbiolabsinukraine, #nocoincidence, #khazarianmafia, and others. ActiveFence has watched this trend enter more mainstream media and conversations, with the narrative evolving daily. Built on lies, this narrative legitimizes Russia's invasion of Ukraine and paints Ukraine as a tool of the US.
A single disinformation actor has the power to promote an influential storyline originating from just an ounce of truth. Drawing on previous false narratives, Dilyana was able to promote the narrative of US biowarfare laboratories at the right time, pulling in so-called facts and simply scattering them across a map of Ukraine. Spreading to other influencers worldwide and sowing fear, the harmful narrative reached some of the largest social media platforms, instant messaging platforms, forums, US media outlets, and, eventually, the Russian government itself.
During wartime, the public is far more susceptible to believing false narratives and disinformation. Threat actors take advantage of the uncertainty of war to spread disinformation far and wide across the web, including mainstream platforms of all sizes. To contain this threat effectively, online platforms must act proactively. With a deeper understanding of the threat actors, mechanisms, narratives, and tactics used to spread disinformation, platforms can monitor actors and identify emerging trends as they arise, ensuring that they do not become a weapon in the current Ukrainian conflict or during other geopolitical events.
AI red teaming is the new discipline every product team needs. Learn how to uncover vulnerabilities, embed safety into workflows, and build resilient AI systems.
Discover how emotional support chatbots enable eating disorders and overdose risks, and what AI teams can do to safeguard users.
Align AI safety policies with the OWASP Top Ten to prevent misuse, secure data, and protect your systems from emerging LLM threats.