All Eyes On 2022: Trust & Safety Challenges for the New Year

December 2, 2021

By 2021 the online population had reached 4.66 billion, over 60% of the world's inhabitants. This growth of the internet has been matched by an escalation of dangerous activity online. As platforms develop new services, legislators are tightening requirements to tackle online abuse, leaving Trust and Safety professionals caught in a perfect storm. To thrive in 2022 and beyond, platforms must proactively identify and combat the emerging threats that target their users.

An incoming storm 

In recent years the real-world impact of online activity has become ever more pronounced. We have become accustomed to self-radicalized ‘lone wolves’ committing acts of terror with deadly force. These violent events form part of a dangerous accelerationist feedback loop: attackers motivated by extremist content record their violence, and the footage is shared online to inspire future incidents.

It is not just ethnic or religious violence that can be traced to online activity. Online child predator communities are growing.

Trust in the mainstream news media has also been damaged, with coordinated dishonest sources proliferating online.

  • In the US, trust in the media fell by 14 percentage points, from 43% in 2010 to just 29% in 2020.

Users everywhere are being challenged by disinformation, unbounded by language or geography. This activity is most pronounced around general elections and has been severe throughout the COVID-19 pandemic. These false narratives fan the flames of societal division and destabilize democracies across the world.

Convergence of threats

These serious threats are converging at a moment of significant technological innovation.

  • Teams will need to build systems as sophisticated as those used by threat actors in order to identify damaging artificial content (a minimal detection sketch follows this list).
    • Innovations like GPT-3, a language prediction model that generates human-like text, and other programs that generate images and videos at scale will seriously test the Trust and Safety community, especially when countering disinformation and the spread of malicious deepfakes.
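
One common building block for spotting machine-generated text is statistical predictability: prose produced by a language model tends to score lower perplexity under a similar model than human writing does. The Python sketch below is a minimal illustration of that idea using the open-source transformers library and GPT-2; the threshold is an invented value for demonstration, and a production system would combine many signals.

```python
# A minimal sketch of perplexity-based screening for machine-generated text.
# Assumes the `transformers` and `torch` packages; the threshold below is
# illustrative only, not a production setting.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable `text` is to the language model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the input ids as labels returns the average
        # cross-entropy loss over the sequence.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

SUSPICION_THRESHOLD = 20.0  # hypothetical cutoff for demonstration

if perplexity("Sample post text to screen.") < SUSPICION_THRESHOLD:
    print("Unusually predictable text: possible machine generation.")
```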

Not only can private individuals now broadcast on social media platforms, but by using architecture built for online gaming they can simulcast across platforms to huge audiences. These innovations, and the ever-deeper intertwining of platforms, let creators reach larger audiences with reduced friction. However, they also multiply the opportunities for abuse, with repercussions for child endangerment, racial and religious extremism, and the spread of disinformation. The movement toward the metaverse expands the potential reach of harmful content and broadens the burden of liability for harm.

This rapid interconnection of platforms raises key questions for online safety, chief among them liability:

If a criminal act is organized on a gaming platform and the gameplay is then simultaneously broadcast across multiple, independent streaming platforms, whose responsibility is it?  

A cross-platform approach to threat detection is the only viable solution to ensure platform integrity.
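
In practice, one building block of such an approach is hash-sharing: platforms exchange fingerprints of confirmed violating media rather than the media itself. The sketch below is a minimal, hypothetical illustration in Python using the open-source imagehash library; the blocklist entry and distance threshold are invented for demonstration.

```python
# A minimal, hypothetical sketch of cross-platform hash-matching.
# Assumes the `imagehash` and `Pillow` packages; the blocklist hash and
# threshold below are invented for illustration.
from PIL import Image
import imagehash

# Hypothetical perceptual hashes contributed by partner platforms.
SHARED_BLOCKLIST = {imagehash.hex_to_hash("d1c4f0a2b38e5c7d")}

MAX_DISTANCE = 6  # illustrative Hamming-distance threshold

def is_known_harmful(path: str) -> bool:
    """Check an uploaded image against the shared perceptual-hash list."""
    candidate = imagehash.phash(Image.open(path))
    # Perceptual hashes tolerate small edits (resizing, re-encoding),
    # so matching uses a distance threshold rather than exact equality.
    return any(candidate - known <= MAX_DISTANCE for known in SHARED_BLOCKLIST)
```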

Changing Laws

These questions are all the more pressing because the internet rules established twenty-five years ago are rapidly being replaced; Section 230’s status quo is receding into history.

National legislators are taking steps to set new international internet standards, and responsibility for hosted content is shifting from the content creator to the platform.

  • In 2022 a wave of legislation is expected to reshape the responsibilities of online platforms.
  • Understanding that automated solutions for detecting original harmful content are in their infancy, legislators are attempting to drive innovation through regulation and to necessitate safety by design.

These new laws will have consequences for online anonymity, freedom of speech, and the right to be protected from harm.

  • If the move is successful, proactive moderation approaches could be required for almost 15% of internet users worldwide.

The UK is leading the charge, creating the first duty of care for online safety; it is expected to pass a new law requiring platforms to find and remove new child sexual abuse material and terrorist content, as well as other types of harmful content such as hate speech. Canada is following suit, and the EU is considering similar requirements.

There are few online borders in user-generated content, and while regulatory innovations are occurring abroad, US companies will need to comply if they wish to access foreign markets. Proactive harmful-content detection therefore looks set to become the international expectation. This means detecting harm off-platform in order to protect users on it.

Agility In Handling New Threats 

As the explosion of user growth and user abuse continues and legal obligations intensify, platforms must become more agile in handling the emerging threats. 

  • The current reactive approach, based on user flagging, classifiers, and content moderators, is too slow and exposes companies to too much liability in the new legal context.
  • To ensure sophisticated, intelligence-based coverage for users in all geographies, Trust and Safety teams will need to employ specialists and local experts in a range of languages to detect potential sources of harm.
  • To back up human intelligence, platforms will need to rely on AI automation, which can rapidly search through vast quantities of files to identify harmful content. The use of such tools will raise important privacy questions, as was seen when Apple moved to scan iCloud for illegal material.
  • To try a less invasive approach, teams have already begun to divert resources toward identifying potentially problematic users based on their online history (a minimal risk-scoring sketch follows this list).
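
What such history-based risk scoring might look like is sketched below. This is a minimal, hypothetical illustration: the feature names, weights, and threshold are invented, and a real system would use trained models, richer signals, and human review.

```python
# A minimal, hypothetical sketch of history-based user risk scoring.
# All features, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class UserHistory:
    prior_violations: int       # upheld moderation actions
    flagged_reports: int        # reports from other users
    account_age_days: int
    links_to_banned_users: int  # e.g., repeat interactions

def risk_score(h: UserHistory) -> float:
    """Combine history signals into a 0-1 score (illustrative weights)."""
    return round(
        0.4 * min(h.prior_violations / 3, 1.0)
        + 0.3 * min(h.flagged_reports / 10, 1.0)
        + 0.2 * min(h.links_to_banned_users / 5, 1.0)
        + 0.1 * (1.0 if h.account_age_days < 7 else 0.0),  # new accounts riskier
        2,
    )

REVIEW_THRESHOLD = 0.6  # hypothetical cutoff

user = UserHistory(prior_violations=2, flagged_reports=8,
                   account_age_days=3, links_to_banned_users=1)
if risk_score(user) >= REVIEW_THRESHOLD:
    # High scores are queued for human review, not actioned automatically.
    print("Queue for proactive human review")
```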

Conclusion

2022 is heralded as the start of the Age of Accountability. It looks set to be a year of legal revolution that cements a proactive international baseline for online safety. Trust and Safety teams must adapt quickly as the online ecosystem changes and overlapping platform use creates multi-platform vulnerabilities.

ActiveFence works with leading platforms to help them stay ahead of threats and remain in compliance with their legal obligations.
