Online Hate During the Pandemic

Authors: Dr. Chris Tenove and Dr. Heidi Tworek

This paper was funded by a grant from British Columbia’s Office of the Human Rights Commissioner (BCOHRC), which holds the copyright. The conclusions in this paper do not necessarily reflect the views of B.C.’s Human Rights Commissioner.

Since the COVID-19 pandemic reached British Columbia in January 2020, there have been reports of online hate speech as well as offline hate incidents. This report aims to support information gathering and the development of policy recommendations for BCOHRC’s inquiry into hate during the pandemic. It makes four contributions:

  1. We identify functions and forms of online hate that should be understood and addressed.
  2. We summarize key research findings on online hate in Canada, and we suggest pandemic-related factors that may have exacerbated online hate.
  3. We summarize ongoing research projects on online abuse of health communicators (conducted by our team at UBC) and online hate and counter-speech (conducted by our colleagues at Simon Fraser University), which are described more fully in the case studies.
  4. We identify key actions that may be taken to address online hate, drawing on existing or proposed policies for governments, technology companies and civil society.

In addition to drawing on original research projects at UBC and SFU, this report brings together scholarship from communications and media studies, political science, criminology and history; policy reports by federal standing committees and international organizations; and research by civil society organizations and journalists.

We hope that this report will help individuals and organizations in B.C., including the Human Rights Commissioner, to better understand and address the complex online dimensions that form part of broader problems of hate.

The data in this report includes disturbing language and points to trends of online abuse and hate during the pandemic in British Columbia. We recognize this information will be distressing for many people in our province. This issue, while critical to examine, is extremely challenging, especially for people who have experienced or witnessed online hate and toxicity. British Columbians who experience distress while reading this report, or who need immediate help, can access the list of crisis lines and emergency mental health supports we have compiled on our website: bchumanrights.ca/support.


Case Study A: Hate and harassment targeting health communicators

Authors: Dr. Heidi Tworek and Dr. Chris Tenove
Researchers: Wilson Dargbeh, Hanna Hett, and Oliver Zhang

This case study emerged from a larger project investigating online abuse of health communicators in Canada, funded by a Social Sciences and Humanities Research Council Partnership Engage Grant, no. 892-2021-1100.

  • During the pandemic, public health officials, medical practitioners and health experts engaged in unprecedented levels of public communication, including online.
  • As the pandemic continued, they faced escalating levels of online abuse, often linked to waves of infections, vaccine mandates and other public health measures, and broader political conflicts.
  • Key themes of abuse include accusations of corruption and incompetence, of responsibility for widespread injury, and of taking away liberties. Health communicators face abuse from individuals who consider public health measures to be too extensive, but also from those who consider the measures to be insufficient.
  • Explicit racism, xenophobia and misogyny figure in a small but disturbing proportion of messages. More common are messages that seek to undermine the authority of women or racialized health communicators.
  • Online abuse and hate affect the safety and well-being of health communicators, as well as their ability to effectively promote health-related information.
  • Abuse toward health communicators, but also toward the vaccine hesitant and other groups, is intertwined with broader patterns of polarization and toxicity online.
  • Health communicators require support from employers and other institutions to help them manage online abuse and hate, in addition to more consistent action from social media platforms and law enforcement.


Case Study B: Hate and the COVID-19 pandemic—An analysis of B.C. Twitter discourse

Authors: Matt Canute, Hannah Holtzclaw, Alberto Lusoli and Wendy Hui Kyong Chun (Digital Democracies Institute, Simon Fraser University)

This case study emerged from a larger project at the Digital Democracies Institute. The Institute’s From Hate to Agonism Project, funded by a UK-Canada Responsible Artificial Intelligence grant, is developing innovative and responsible machine learning approaches to support healthy democratic discourse online.

During the pandemic, we saw an increase in tweets classified under the anti-Asian hate topic:

  • Natural language processing (NLP) text-model results showed an increase in hate speech in March 2020, when B.C. declared a provincial state of emergency.
  • The increase in hate speech was accompanied by an even larger increase in tweets classified as counterspeech. This finding is meaningful because it shows how the proliferation of hateful and harmful speech triggered an oppositional, and larger, response. However, reactionary counterspeech that develops within highly toxic environments can deepen polarization rather than contribute to constructive dialogue over differences and conflict.
  • The conversation about anti-Asian hate in B.C. was highly susceptible to events taking place outside of the province and country, particularly events in the U.S. Specifically, we saw a dramatic increase in tweets classified as counterspeech in the wake of the tragic Atlanta, GA, spa shooting in 2021, as well as an increase in tweets attacking specific identities when notable and contentious events occurred in the United States (e.g., the murder of George Floyd, the U.S. Capitol riot).
  • Data also show an increase in toxicity in general conversations about COVID-19 in B.C. and government management of the crisis (the COVID-19 topic). Tweets within this topic expressed frustration directed at restrictions and vaccine mandates, political leaders and health officials, as well as individuals defying lockdown orders or public health restrictions such as mask requirements.
  • The effectiveness of text models decreased when they were applied to novel contexts (e.g., a model trained on Wikipedia data used to analyze tweets, or a model trained on anti-Asian hate speech used to analyze COVID-19 discourse). This limitation represents a challenge for researchers as well as for social media platforms, whose algorithms similarly struggle to contextualize language use across platforms, communities, cultures and subcultures (see the illustrative sketch after this list).
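
To illustrate the domain-shift limitation described in the final bullet, the sketch below applies an off-the-shelf toxicity classifier, trained largely on Wikipedia talk-page comments, to tweet-like text. This is a minimal, hypothetical example: the model name, example sentences and output handling are our assumptions for illustration, not the classifiers, data or code used by the Digital Democracies Institute.

```python
# Minimal, illustrative sketch only -- NOT the Digital Democracies Institute's pipeline.
# It applies a toxicity classifier trained mostly on Wikipedia talk-page comments to
# tweet-like text, the kind of cross-domain transfer that tends to degrade accuracy.
from transformers import pipeline  # assumes the Hugging Face `transformers` package is installed

# "unitary/toxic-bert" is a publicly available toxicity model; any comparable
# text-classification model could be substituted here.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Hypothetical, paraphrased examples -- not real tweets from the study.
examples = [
    "Thank you to our public health officials for keeping us informed.",
    "These so-called experts are lying to us and deserve whatever they get.",
]

for text in examples:
    result = classifier(text)[0]  # top label, e.g. {'label': 'toxic', 'score': 0.93}
    print(f"{result['label']:>10}  {result['score']:.2f}  {text}")
```

A model that scores well on Wikipedia-style comments may misjudge platform-specific slang, sarcasm or reclaimed terms in tweets, which is one reason classifier performance drops when applied outside its training domain.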