Trust & Safety in the Metaverse: ActiveFence Experts Weigh In

October 25, 2022
Metaverse

The metaverse is a new animal. Unlike the digital platforms we’ve become accustomed to, it offers a new kind of online experience, one built around virtual community and a digitized iteration of reality. There is a lot that platforms need to consider when creating safe and secure online spaces in the metaverse, and three ActiveFence experts recently weighed in on some of those key considerations. What you’ll find below is a transcript of their candid conversation about some of the more pressing Trust & Safety issues in the metaverse.

Amit Dar, Matar Haller, Tomer Poran

Q: First off, what is the metaverse? And if it’s something that hasn’t been completely defined or formed yet, why do we need to have Trust & Safety regulations and policies for it?

Amit Dar, Senior Director of Corporate Development and Strategy:
It has almost infinite definitions. It’s anything from Oculus VR goggles to something as simple as AirPods: any setup where you’re wearing a piece of technology and connecting to a digital world. For Trust & Safety teams, it’s any place that acts as a means to connect humans, whether in a game, a business experience, or even a video chat. It uses new technologies that blend the online and offline worlds, but those same technologies create new risks and harms.

Tomer Poran, VP Solution Strategy:
The metaverse is more of a journey than a destination. It’s a transition in which we’re digitizing experiences that previously existed only in the physical world. That being said, it’s not yet fully formed, but it’s still really important to have policies in place. In the early days of Web 2.0, there was a mindset that since there weren’t that many people generating content on the internet, it wasn’t necessary to regulate it. The thought process was to grow first and deal with safety later. Cut to now, and we’ve seen platforms that are absolutely swarmed with harms. When we enter Web 3.0 and the metaverse, not having guardrails in place, or safety by design in mind, will lead us to repeat history.

Q: Why does the metaverse need its own Trust & Safety measures? Aren’t the existing laws and regulations we have now enough?

Amit: With every new technology, there are unknown harms and new manifestations of harms that crop up, and it takes a while for the government to catch up with the speed and the reach of technologies. The fact that we haven’t yet reached critical mass means it’s the optimal time to expedite regulatory processes for the metaverse. Being proactive now means that by the time the metaverse is at critical mass, we’ll already have safeguards in place.

Tomer: There are real-world laws, and then in the online world, there’s what the industry calls ‘lawful but awful.’ These are things that are legal in the real world, but their effects online can be vast and dangerous. For example, it’s legal to say that Covid-19 is fake and that the vaccines are implanting chips in our brains. But when you take that message to social media, where sources can be falsified and fake accounts and unwitting users can amplify it, it has the potential to do a lot more harm.

Q: There’s definitely common agreement about the sort of ‘red lines’ that exist – CSAM, for example – but you’ve got the issues of misinformation and disinformation, which are gray areas in terms of moderation. What needs to be different with regard to how those types of issues are handled in the metaverse?

Tomer: The key is transparency around what policies a platform has in place and how they’re being enforced. Platforms should invest in early trend detection, employ fact-checkers and do everything they can to minimize the spread of misinformation. It’s on watchdog groups, government agencies, the media and the public to make sure that platforms are making the best effort to enforce their policies.

Matar Haller, PhD, VP Data:
It turns out that even with things we think are clear red lines, there is still a lot of room for interpretation. For example, is a video of a baby’s first bath CSAM? Does it depend on where it is shared? Or does it become CSAM based on the comments it receives? In the metaverse, this becomes even more complicated, since there’s crossover between red lines and gray areas even on a seemingly clear-cut issue like CSAM. With misinformation and disinformation, the situation is even murkier. From a data perspective, misinformation and disinformation already represent a rapidly changing landscape in Web 2.0. The metaverse is a living, breathing thing, so not only is the rate of change faster, but the manifestations of misinformation and disinformation are much richer.

Q: Something that’s been heralded in the metaverse is the idea that, unlike in other iterations of the web, users have more power to define privacy for themselves. Do you think that helps or harms safety?

Matar: It’s really all about balance. The metaverse gives you the ability to leave your ‘regular self’ aside if you want to. You can be as anonymous as you want to be, or as transparent as you want to be. This is good for individual privacy, but when it comes to moderation, it’s a challenge for platforms. It really comes down to the question of who needs to know who you are, and to what extent. Trust & Safety teams need to be able to keep users safe while still offering that level of transparency and choice regarding privacy.

Q: The metaverse has a sort of Wild West element to it because it’s new and doesn’t exist as a single, unified space. Can you give examples of Trust & Safety issues unique to the metaverse that would raise alarms?

Tomer: The gaming space in the metaverse is pretty vulnerable to harm. You’ve got user-generated games, which aren’t new, but on a much wider scale. Users can upload not just games, but reenactments of real-life situations, like shootings. It’s a different, more dangerous level of exposure. You’ve also got the issue of user-generated spaces being filled with malicious content that moderators can’t get into. It used to be that moderators who had a link to a forum or a group could get inside, see what was going on, and shut it down as necessary. Now, users can create their own spaces with invite lists, so even if you’ve got a link or an access code, you won’t be allowed inside unless you’re on the list. That means there are entire worlds that platforms can’t access or regulate, and that poses a real risk.

Matar: Unlike Web 2.0, which is more 2D, the metaverse is more 3D, which means there are vastly more ways to hide content. Nowadays, we can analyze videos and images, scanning for problematic aspects. We know what to look for and how, but since the metaverse is multi-layered, it’s more complex. For example, in a space in the metaverse, you can zoom in and zoom out to catch things in greater detail. You might zoom out of a space to see that the chairs are arranged in a swastika, or zoom in to see that the woodgrain of those chairs has swastikas on them, or turn the chairs over to reveal something more. Simply scanning, then, isn’t enough. It’s difficult to moderate, but it just means that platforms need to change their approach. The old methods won’t work in this new virtual world. Being proactive and knowing where things are coming from will help us know where to look.
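
The conversation doesn’t describe a specific moderation pipeline, but the point about zooming in, zooming out, and flipping objects suggests a multi-view scan: render the same user-generated space from many camera poses and classify each rendered frame. The sketch below is a minimal illustration of that idea in Python; the names `render_view`, `classify`, `CameraPose`, and `zoom_sweep` are assumptions for illustration, and a real renderer and image classifier would be supplied by the platform.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple


@dataclass
class CameraPose:
    """A viewpoint to render: where the camera sits, what it looks at, and how far it zooms."""
    position: Tuple[float, float, float]
    look_at: Tuple[float, float, float]
    zoom: float


@dataclass
class Detection:
    """A classifier hit tied back to the camera pose that exposed it."""
    camera: CameraPose
    label: str
    score: float


def zoom_sweep(target: Tuple[float, float, float],
               zooms: Iterable[float] = (0.25, 1.0, 4.0)) -> List[CameraPose]:
    """The same target viewed wide (to catch arrangements of objects), at default zoom,
    and up close (to catch details like a pattern in the woodgrain)."""
    eye = (target[0], target[1], target[2] + 10.0)
    return [CameraPose(position=eye, look_at=target, zoom=z) for z in zooms]


def scan_space(render_view: Callable[[CameraPose], bytes],
               classify: Callable[[bytes], List[Tuple[str, float]]],
               cameras: Iterable[CameraPose],
               threshold: float = 0.8) -> List[Detection]:
    """Render a user-generated space from many poses and flag the risky views.

    The same objects can look benign up close and harmful from a distance
    (or vice versa), so we sweep angles and zoom levels instead of scanning
    a single screenshot.
    """
    flagged: List[Detection] = []
    for cam in cameras:
        image = render_view(cam)               # platform-supplied renderer
        for label, score in classify(image):   # platform-supplied image classifier
            if score >= threshold:
                flagged.append(Detection(camera=cam, label=label, score=score))
    return flagged
```

Grouping flagged detections by camera pose gives a human reviewer not just a signal that a space contains something harmful, but an indication of where in the space to look, which is exactly the “knowing where to look” problem described above.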

–

As the metaverse continues to take shape, gain traction, popularity and users, and becomes synonymous with internet usage, Trust & Safety teams will need to weigh a variety of considerations to ensure safe and secure digital spaces for their users. This panel offers just a cursory overview of them. ActiveFence advocates a content moderation strategy that applies to both Web 2.0 and its later iterations: combining AI-powered content detection with subject-matter intelligence gives Trust & Safety teams the edge they need to prevent harms on their platforms.
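
The post doesn’t spell out how detection and intelligence are combined. As a minimal sketch (the feed structure, field names, and thresholds below are assumptions, not ActiveFence’s actual pipeline), analyst-supplied signals such as flagged terms or known bad actors can lower the score at which an AI model’s output is escalated to human review:

```python
from dataclasses import dataclass, field
from typing import Set


@dataclass
class IntelFeed:
    """Hypothetical subject-matter intelligence: terms and actors flagged by analysts."""
    risky_terms: Set[str] = field(default_factory=set)
    known_bad_actors: Set[str] = field(default_factory=set)


@dataclass
class ContentItem:
    author_id: str
    text: str
    model_score: float  # output of an AI detection model, between 0 and 1


def triage(item: ContentItem, intel: IntelFeed, review_threshold: float = 0.9) -> str:
    """Route content using both the model score and analyst intelligence.

    Intelligence lowers the bar for escalation: content from known bad actors,
    or containing analyst-flagged terms, goes to human review even when the
    model alone is not confident.
    """
    intel_hit = (item.author_id in intel.known_bad_actors
                 or any(term in item.text.lower() for term in intel.risky_terms))
    if item.model_score >= review_threshold:
        return "remove_or_review"
    if intel_hit and item.model_score >= review_threshold / 2:
        return "human_review"
    return "allow"
```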

Want to learn more about the threats facing your platform? Find out how new trends in misinformation, hate speech, terrorism, child abuse, and human exploitation are shaping the Trust & Safety industry this year, and what your platform can do to ensure online safety.
