Why Generative AI Is The Next Frontier in Trust & Safety
The recent release of AI chatbot ChatGPT has taken the internet by storm. From getting the system to write poems, explain concepts in quirky narrative tones, churn out descriptive essays and more, it seems its use cases are infinite. But where endless possibilities appear, so do myriad opportunities for abuse, and ChatGPT is no exception.
It’s worth noting that ChatGPT isn’t an outlier but simply the latest in a string of abuse-laden generative AI tools. The 4chan chatbot was famously offensive, the Replika version was misused by men creating AI ‘girlfriends’ and then abusing them, and Microsoft’s version turned racist thanks to less-than-intelligent machine learning after just one day. AI image generators have a tendency to regularly produce imagery so offensive and racist that Craiyon, a popular one, has an explicit warning for users that it may produce images that “reinforce or exacerbate social biases” and “may contain harmful stereotypes.”
All AI tools, it seems, are inherently vulnerable machines.
Tackling the Issues of AI-Produced Violative Content
Millions of people have used AI chatbots and image generator and have done so, generally speaking, with positive intentions. However, like every digital plaything, there are loopholes, and some users have gone on a quest – maliciously and otherwise – to find out where they lie. Machine learning and artificial intelligence, as incredible as they may be, present serious challenges for Trust & Safety teams, as the engineers that create them have yet to perfect their ability to protect these technologies that use them from misuse.
Any company making an AI tool available for public use needs to have an adequately robust conduct and content moderation policy in place to attempt to reduce its misuse. Any policy needs to address not only the obvious types of violations that users might be tempted to ask a bot or image generator to produce but the workarounds they might use to get the same information. The prompt injections that users so gleefully use to get bots to give them violative information in seemingly harmless ways, for example, need to be considered in content policies as well. Asking a bot to provide instructions on how to build a bomb is an obvious no-no, but so too, should asking it to provide the same information as a re-enactment of a movie scene or via a script.
The same goes for image generators, which can be easily manipulated for abuse abusive purposes. While the use of these tools to create sexually explicit or violent imagery isn’t new, they also have the potential to create seemingly innocuous photos that can be used for malicious purposes. They can produce images that may be used to spread disinformation or support hateful tropes. It’s been said that an image is worth a thousand words, and when used incorrectly, AI-generated ones have the power to do incredible damage. As smart as artificial intelligence is, the Trust & Safety teams involved in projects producing it need to be one step ahead.
Trust & Safety in the AI Era
The concerns from the Trust & Safety industry are apparent: a tool has been made publicly available that has the ability to produce content that can be used for violative purposes. Its open access means that any individual with internet access can now get ahold of this type of content and make use of it for malicious purposes both online and off.
It’s imperative that any platform offering a public-facing AI tool have robust community guidelines and content policies in place. The risks with technology like this stretch beyond our imagination’s limits; just like 3D printers were seen as a novel technological innovation, bound to create endless interesting and helpful tools, so too, have they been used for malicious purposes, like printing gun parts used to carry out shootings. Trust & Safety teams guarding these types of models need to consider the worst possible use cases for the features their platforms offer, and implement rules that prohibit users from testing the limits. Models that produce text specifically present an even more complex problem when they’re unable to moderate incoming content: how can they moderate their own output?
The list goes on: for an AI chatbot that understands multiple languages, Trust & Safety teams will need to consider the potential linguistic idiosyncrasies and the context behind them to be able to decipher what’s allowed and what’s not. For products that can be used to inflict offline harm, the case of who’s ultimately accountable needs to be parsed out. Can an AI tool or its creators be held liable for inadvertently providing harmful information to an individual seeking out some sort of attack or illegal operation?
Trust & Safety teams on platforms across the digital world will need to consider the full spectrum of effects of not just the tech itself, but the content it can produce. These concerns are distinct from those surrounding typical UGC platforms, where the lines between host and user are clear, and users are less able to manipulate a tool to their own design. As AI tools become more and more ubiquitous, it becomes increasingly clear that they present a new frontier in terms of Trust & Safety, moderation, and law.
Like other types of user-facing, interactive technology, Trust & Safety solutions will require two main elements: agility and intelligence. Product teams need to be able to make adequate adjustments on the fly to repair new and apparent weaknesses, and policy teams need to be in the know about off-platform goings-on in order to be able to prevent the spread of malicious activity on platforms themselves. ActiveFence’s full-stack solution affords platforms both of these key features, granting Trust & Safety teams access to a constantly updated feed of intelligence which is in turn used to train our AI. At this point in society’s technological history, it’s clear that machine learning and AI will make up a significant chunk of future innovations; all that’s left now to do is make sure they’re as secure as they can be.