In the third edition of the ActiveFence Policy Series, we examine the health and electoral disinformation policies of major tech companies and break down their core components. This blog focuses on health disinformation policy.
Download the complete report to learn how industry leaders protect the public from consuming harmful information online.
In 2021, health care authorities are no longer the “go-to” for medical information. The internet is now the first place we turn. This, of course, leads to a climate where false health information is more widespread than ever before. Though the problem of health misinformation is not new, COVID-19 has brought an onslaught of harmful information online, causing unprecedented risk to public health. As the world has witnessed, medical misinformation and conspiracy theories are demonstrably capable of negatively impacting personal and public health, particularly during this pandemic. For instance, a 2020 study published by Cambridge University Press found that people who believed in one or more conspiracy theories regarding COVID-19 were less likely to follow advice to protect their health, such as hand washing or social distancing measures.
During the COVID-19 pandemic, concerted disinformation campaigns have targeted the vaccines produced to mitigate the harm of the virus. Conspiracy theories have also been promoted and amplified to challenge the public health guidelines that health authorities implemented to reduce transmission. ActiveFence investigated a number of these campaigns and discussed them in our report here.
The consequences of these campaigns have been dire. Significant portions of society are refusing to vaccinate, and alternative medicines – at best useless and at worst harmful – are regularly taken in place of regulated medicines. The figures are stark: in the US, it was reported that 98.3% of hospital admissions due to COVID-19 in July 2021 were among unvaccinated people.
As the conspiracy theories surrounding COVID-19 and the vaccines created against the disease spread and change, the need for policy is clear. Platforms must either create general rules against health disinformation or frequently update their policy guides as online trends evolve to assist their moderators in identifying and counteracting the spread of this dangerous disinformation.
The global pandemic has put social sharing platforms under more pressure than ever to keep their platforms clear of disinformation. From calls to social media platforms to alter their algorithms to President Biden stating that these platforms are killing people, social sharing platforms are constantly in the spotlight.
With the pressure on, social sharing platforms must respond to the growing threat of COVID-19 disinformation. Platforms want to protect their users from misinformation, but the task is not simple. As the pandemic continues apace and deaths climb, new disinformation campaigns grow and develop. This places platforms in the position of needing to facilitate dissent and debate – essential facets of democracy – while protecting users from harmful content.
Social Media Platforms and Conspiracies
In the previously mentioned study by Cambridge University Press, it was found that there is a correlation between the belief in conspiracy theories and the use of social media as a source of information about COVID-19. For example, people who believed one or more conspiracy theories were more likely to use social media as a source of information than traditional media such as newspapers or the radio. Additionally, people who used one or more social media platforms to gather information about COVID-19 were more likely to believe in a conspiracy theory.
As a result, the leading platforms have taken significant action to prevent harmful and false information from spreading via their platforms. Some platforms form policy around specific theories, such as banning content claiming that a 5G microchip is implanted along with the COVID-19 vaccine. Other platforms create more general policies, such as banning misinformation about the efficacy and safety of COVID-19 preventative measures and treatments.
Medical misinformation has long been a problem on video sharing platforms, and it continues to pose challenges for both users and platforms alike.
The COVID-19 pandemic has exacerbated this issue, pushing technology platforms to pursue new and innovative ways of combating false and misleading medical information. Video sharing platforms have developed two models for tackling this form of disinformation. The first is to broadly ban claims “that may cause harm to public health,” which gives moderators significant scope for action. The second is to explicitly identify and prohibit specific instances of health disinformation.
For examples of video sharing policies, read ActiveFence’s research report, which details the policies of a number of video sharing platforms.
The Ongoing Challenge
These complex and sensitive issues continue to evolve as new COVID-19 misinformation arises and online behaviors change. To help navigate these shifts, ActiveFence’s research team continuously monitors relevant changes and developments in the trust and safety ecosystem.
Our third report in ActiveFence’s Policy Series details twenty of the biggest platforms’ disinformation policies to equip Trust and Safety teams with the information needed to protect the public.
For the comprehensive report detailing guidelines and examples of health disinformation policy, download our report here.