Five Anti-LGBTQ+ Narratives Trending During Pride Month – 2024

June 14, 2024
[Image: Pride flag waving during a vibrant city parade, celebrating LGBTQ+ Pride Month]

Safeguard your platform from potential abusers:

Talk to our experts

Pride events, which originated in 1970 after the Stonewall Riots, have unfortunately been marred by hate over the years. 

Despite some progress, the same violence, discrimination, and disinformation targeting the LGBTQ+ community persist, both at physical events and in the digital world. 

Platforms of all sizes witness calls for violence, hate speech, and other harmful content that affect users not only during Pride Month but throughout the year. To keep users safe, Trust & Safety teams must identify and address the rising hostility surrounding Pride Month. 

In recognition of Pride Month, and based on ActiveFence’s intelligence, which scans countless online sources, from the depths of the dark web to mainstream platforms, we have compiled a list of the most prevalent anti-LGBTQ+ narratives circulating during Pride Month 2024.

By proactively detecting these narratives on-platform, Trust & Safety teams can preemptively mitigate risks and stay ahead of emerging threats, ensuring a safer online environment for all.


Five Anti-LGBTQ+ Narratives Trending During Pride Month 2024

1) Threats of Violence from Neo-Nazis Against LGBTQ+ Individuals:

The LGBTQ+ community faces persistent threats of violence from neo-Nazi groups, especially during Pride Month. This year, there has been an increase in content circulating on various user-generated content (UGC) platforms that encourages hate crimes and targets specific LGBTQ+ clubs and areas, further exacerbating fears and tensions within the community. 

The neo-Nazi online community is particularly vocal, producing numerous blunt hashtags like “#Stop-trans-agenda” and “#Stop_lgbt_propaganda” on their dedicated platforms. They also use less obvious hashtags on more general platforms, such as “#Anti-Furry,” an online slur that dehumanizes non-binary individuals by labeling them as non-human creatures. Another example is #Stolzmonat, which translates to “Pride Month” in German. The Stolzmonat campaign is a German nationalist movement that opposes Pride parades and calls for harm against participants.
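Because these hashtags circulate in many spelling variants (hyphens, underscores, mixed case), exact-string matching misses most of them. The following minimal Python sketch illustrates one common approach, normalizing hashtags to a canonical form before checking them against a watchlist. The watchlist entries and function names here are illustrative assumptions, not part of any ActiveFence product.

```python
import re

# Hypothetical watchlist derived from the hashtags described above;
# a real deployment would rely on a curated, regularly updated intelligence feed.
WATCHLIST = {"stoptransagenda", "stoplgbtpropaganda", "stolzmonat"}

def normalize(tag: str) -> str:
    """Lowercase a hashtag and strip separators so spelling variants
    collapse to one canonical form, e.g. '#Stop-trans-agenda' -> 'stoptransagenda'."""
    return re.sub(r"[^a-z0-9]", "", tag.lower())

def flagged_hashtags(text: str) -> list[str]:
    """Return the hashtags in `text` whose normalized form is on the watchlist."""
    tags = re.findall(r"#[\w-]+", text)
    return [t for t in tags if normalize(t) in WATCHLIST]

post = "Join us! #Stop-trans-agenda #Stop_lgbt_propaganda #pride"
print(flagged_hashtags(post))  # ['#Stop-trans-agenda', '#Stop_lgbt_propaganda']
```

Normalization of this kind catches the hyphen/underscore variants shown above, though production systems typically combine it with fuzzy matching and human review to handle deliberate misspellings.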


2) Dissemination of Anti-LGBTQ+ AI-Generated Imagery:

Right-wing extremists are exploiting generative AI to create and spread anti-LGBTQ+ memes, images, and slogans. These AI-generated materials can be mass-produced and distributed across multiple platforms, fueling discriminatory narratives, increasing hostility, and directly encouraging harm.

An especially troubling image circulating on far-right platforms shows a vehicle leaving black tire marks across a rainbow flag, styled to look like a video game screenshot. The image carries the hashtag #black_lines_matter, a deliberate play on the Black Lives Matter (BLM) slogan. While it appears to depict car enthusiasts doing tire burnouts, the pairing of this slogan with this imagery originates with several neo-Nazi groups, mainly in Eastern Europe. These groups use the image to encourage and legitimize vehicular attacks on Pride parades in June.


3) Gun Violence and Anti-Trans Violence Collide:

The rise in anti-trans violence has raised concerns within the trans community about being targeted with hate crimes. Online discussions have emerged, urging trans individuals to consider arming themselves for self-defense. This discourse highlights the tension between groups opposing transgender rights and those advocating for gun ownership as a means of protection. 

The recent Nashville school shooting, allegedly carried out by a trans individual, has further intensified the discussion. Anti-LGBTQ+ movements accuse the government of withholding information to “protect” the shooter’s gender identity, fueling harmful narratives and deeper divisions.

A disturbing manifestation of this harmful discourse is its adoption by the pro-Jihadist community, which, like neo-Nazi groups, maintains a strong social media presence and a similar affinity for AI-generated “art.” One prevalent image shows security camera footage from the Nashville school shooting, with the bullets in the shooter’s rifle colored in the trans flag colors. The meme, inspired by a popular online video game, is often accompanied by captions suggesting that this is the best way to “celebrate” Pride Month, merging anti-LGBTQ+ sentiment with the glorification of violence.


4) Drag Story Hour Being Blamed for Grooming:

“Drag Story Hour” events, where drag performers read books to children in libraries, schools, and bookstores, have long been targeted by hate groups. Anti-LGBTQ+ social media posts claim these events groom children and encourage followers to report them to the police and leave negative reviews. Some users have even suggested that parents who take their children to these events should be arrested, perpetuating harmful stereotypes and misinformation.


5) Extremist Chatter Glorifying Omar Mateen:

Extremist online chatter has been glorifying Omar Mateen, the killer responsible for the Pulse nightclub shooting in 2016, one of the deadliest in American history. Pro-Jihadists and other extremist groups have adopted Mateen as a figurehead, calling for more violence against the LGBTQ+ community.

ActiveFence has identified numerous examples across multiple platforms where account names, hashtags, slogans, and memes utilize Omar Mateen’s name and image. This glorification of past atrocities not only highlights the ongoing threat of violence but also serves to incite further attacks.


Combating Hate Speech on Your Platform


World events often provide opportunities for threat actors to escalate harmful activities, particularly hate speech. For instance, during Pride Month and after major events like the US Supreme Court’s overturning of Roe v. Wade, we have witnessed increased efforts to spread hate speech.

All platforms face the challenge of combating hate speech, which impacts the safety and well-being of users and the business itself. Whether it’s anti-LGBTQ+ narratives or toxic content targeting any other community, it is crucial to swiftly remove such content. 

Our AI-powered tools, ActiveOS and ActiveScore, are available in over 100 languages. Using these tools, teams can detect and take action against hate speech, regardless of the region, target audience, or content type. Additionally, ActiveFence’s deep threat intelligence offers proactive investigations into novel abuse tactics and narratives – helping Trust & Safety teams prepare for this type of abuse before it reaches their users. By using these tools and services, Trust & Safety teams can better protect their users and reduce the risk of real-world hate crimes on a large scale.

Trust & Safety teams play a critical role in stopping online narratives that can incite real-world harm to the LGBTQ+ community and other marginalized groups. However, given the multitude of platforms and the sheer volume of abusive content generated by threat actors, reactive moderation alone falls short. During these critical times, Trust & Safety teams need robust intelligence to detect and mitigate harmful activities. 

For specific on-platform findings related to these narratives or other harmful activities, our experts are available to provide tailored insights. By leveraging their expertise, you can stay ahead of bad actors and effectively prevent online harm from impacting your platform.
