New York University's Institute for Impact and Intrapreneurship. Enrollment in our April 2023 session has closed; please register for updates about the next registration round.
We are proud to announce a first-of-its-kind academic course on Trust & Safety. Geared towards both current Trust & Safety professionals seeking to deepen their knowledge and those looking to join the industry, the course offers a comprehensive overview of this fast-growing field. NYU’s Institute for Impact and Intrapreneurship and ActiveFence’s team of experts will cover the many facets of Trust & Safety, explaining its goals, challenges, work processes, and functions, and highlighting how innovation drives success in the field.
Upon completing 15 academic hours over ten 90-minute virtual classroom sessions, participants will receive an NYU Executive Education Certificate in Trust & Safety.
How did the industry of Trust & Safety develop? What role does Trust & Safety play within organizations and the larger tech ecosystem? What threats face online platforms and their users across different types of platforms? These are the questions Goldberger will answer in this first introductory class. From the creation of the internet and the first innovations in safety-driven technology to the impact of online platforms on the world, she will share the history, threats, and events that have shaped Trust & Safety and show where the field stands today.
This two-part session will tackle the complexities of disinformation, providing an understanding of how false narratives are created, who the threat actors behind them are, and how these narratives interact with current events.
We will discuss how foreign and domestic actors use fake accounts, amplify content, and establish complex networks to manipulate public discourse. Using real-world examples such as recent elections and the war in Ukraine, we will look at how platforms can quickly be hijacked to spread false narratives. In the second half of the session, we will demonstrate how emerging trends, narratives, and misinformation create risks to online platforms, with potential offline consequences. We will also explain how narratives promoting social unrest, hate speech, conspiracy theories, and political and health misinformation can arise from significant political and cultural events.
Some of Trust & Safety’s most difficult tasks involve the ugliest content on the web: material that builds support for terrorism, extremism, hate speech, and violence, and the actors who create and disseminate it. The recruitment of terrorists, promotion of white supremacy, and live streams of mass shootings are real, on-platform examples of how these risks manifest. These abuses can come from organized networks or coordinated predators using sophisticated methods to avoid detection. We will dive into how threat actors enter online spaces to foment and spread these harms, and will explain their complexities and evasive techniques.
In this session, we will focus on the online threats facing children, such as grooming, sextortion, and exploitation. Sharing how the nuanced tactics of digital predators evade detection, we will explain how predator communities have learned to use codewords to exploit both the dark web and popular platforms.
After gaining a clear understanding of the online threat landscape, we will dive into the inner workings of Trust & Safety teams. We will start with the most essential component - detecting harmful content. Nisman will break down how the tools used by Trust & Safety teams allow for the proactive detection of threat actors and how employing tactics such as abuse analysis, linguistic recording, and network tracking can help teams prevent the spread of these harms.
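To make the idea of proactive detection concrete, here is a minimal, hypothetical Python sketch of the kind of signal such tooling might surface: posts containing known codewords are flagged, and accounts that repeat identical flagged text are grouped as a crude stand-in for network tracking. The codeword list, post format, and threshold are assumptions for illustration only, not any particular team's tooling.

```python
# A hypothetical sketch of proactive detection: flag posts that contain known
# codewords and surface accounts that repeatedly share the same flagged text
# (a crude stand-in for network tracking). All terms and thresholds are
# illustrative assumptions.
from collections import defaultdict

KNOWN_CODEWORDS = {"codeword_a", "codeword_b"}  # placeholder terms, not real data


def flag_post(text: str) -> bool:
    """Return True if the post contains any known codeword."""
    tokens = set(text.lower().split())
    return bool(tokens & KNOWN_CODEWORDS)


def suspected_networks(posts: list, min_accounts: int = 3) -> dict:
    """Group accounts that posted identical flagged text; large groups may
    indicate coordinated behavior worth escalating to human review."""
    accounts_by_text = defaultdict(set)
    for post in posts:
        if flag_post(post["text"]):
            accounts_by_text[post["text"]].add(post["account"])
    return {text: accounts for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}
```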
This class will shed light on the other side of harmful content detection: artificial intelligence. Orr will explain how technologies like machine learning models, automation, digital hashing, and risk scores help teams scale by scanning more content more quickly, increasing the recall rate for potentially harmful content, and, by extension, protecting the mental health of human moderators. Orr will also discuss the limitations of AI - from lacking visibility into the nuance and context of content to overlooking regional differences and content sentiment.
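To make the hashing and risk-score ideas concrete, here is a deliberately simplified Python sketch that checks content against a list of hashes of known harmful items and routes the rest by a model-produced risk score. The hash list, score source, and thresholds are illustrative assumptions; real systems use perceptual hashing and far richer signals.

```python
# A simplified, hypothetical sketch of hash matching plus risk-score routing.
# Not any specific production system; all values below are assumptions.
import hashlib

KNOWN_HARMFUL_HASHES = set()  # hex digests of previously confirmed harmful items


def content_hash(data: bytes) -> str:
    """Exact-match fingerprint; production systems also use perceptual hashes
    that tolerate small edits, which this sketch does not implement."""
    return hashlib.sha256(data).hexdigest()


def triage(data: bytes, risk_score: float) -> str:
    """Decide what to do with an item given its bytes and a model risk score
    in [0, 1]. Lowering the review threshold raises recall at the cost of
    more human-review volume."""
    if content_hash(data) in KNOWN_HARMFUL_HASHES:
        return "remove"        # exact match to known harmful content
    if risk_score >= 0.9:
        return "remove"        # high-confidence model detection
    if risk_score >= 0.5:
        return "human_review"  # uncertain cases go to moderators
    return "allow"
```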
Working in a young industry, Trust & Safety teams are tasked not only with protecting platforms and their users, but also with learning on the fly how best to prepare, create, and maintain these digital spaces. This session will review the lifecycle of Trust & Safety teams, including safety by design, triaging risks, measuring success, and releasing transparency reports. It will provide students with an understanding of the different components necessary to make and keep platforms healthy from the beginning, as well as how to maintain trust in the public sphere.
In the past few years, the industry has witnessed a surge of regulation of online platforms by countries worldwide. Laws such as the UK Online Safety Bill, the EU’s Digital Services Act, and California’s Age-Appropriate Design Code Act have reshaped how online platforms are regulated. This session will review these laws in addition to Section 230, current court cases on platform liability, and the future of internet law.
Platform policies determine what is and is not allowed, and communicate to users and the company how policy violations will be handled. The policies themselves protect the safety of users and of the platform. How those policies are constructed, communicated, and enforced defines and maintains (or loses) the trust that users and the community place in the platform. A workable platform policy for content moderation aims to find the right mix between what the platform must do, defined by the expectations of regulation, and what the platform should do based on its own understanding of what is right, ethical, moral, or necessary for the business.
Exploring these complex and evolving topics, this session will provide a thorough understanding of the regulations that shape the industry and of the considerations involved in defining platform policy that meets regulatory obligations while also building trust with users.
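As one way to picture how a written policy translates into consistent enforcement, the hypothetical Python sketch below maps violation categories to actions and to whether the decision is surfaced to the user. The categories and actions are illustrative assumptions, not an actual platform's policy.

```python
# A hypothetical sketch of policy expressed as enforcement rules. Categories
# and actions are assumptions for illustration only.
POLICY_RULES = {
    "spam":        {"action": "remove", "notify_user": True},
    "hate_speech": {"action": "remove_and_strike", "notify_user": True},
    "csam":        {"action": "remove_and_report", "notify_user": False},
    "borderline":  {"action": "reduce_reach", "notify_user": False},
}


def enforce(violation: str) -> dict:
    """Look up the enforcement action for a confirmed violation; anything not
    covered by the written policy falls back to human review."""
    return POLICY_RULES.get(violation, {"action": "human_review", "notify_user": False})
```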
At its core, the Trust & Safety industry is like an endless game of whack-a-mole, trying to keep up with new threats and tactics through technological innovation. As the industry matures, so do its challenges and the need for creativity to solve them. For example: How can we detect harm in encrypted content we cannot see? How can teams prevent abuse in the metaverse when the metaverse is still an evolving concept? A Trust & Safety team that is not constantly finding new solutions to old and new problems will soon find itself left behind, with abuse flourishing. For that reason, adopting an innovative mindset is critical for Trust & Safety professionals preparing for the future. In this final session, we will walk through the principles of adaptive innovation in disruptive times.
Apply to join the Trust & Safety course