

UBC students and faculty attended the workshop at the Liu Institute for Global Issues as graduate students from the University of Münster joined online.
The information environment is awash with misinformation, deception and hostility, and AI tools may make the situation worse. How can journalists respond to these challenges?
On November 21, 2023, the Centre for the Study of Democratic Institutions (CSDI) hosted a journalism, disinformation, and AI workshop to tackle those issues. The event brought together journalism and communication graduate students from UBC and the University of Münster, along with experts from journalism, academia, and civil society. The Canada-Germany collaboration was supported by the Consulate General of Germany in Vancouver, with assistance from UBC’s School of Journalism, Writing, and Media, and the Center for Computational Social Science.
The workshop’s two sessions addressed two pressing questions:
- What types of misleading and manipulative information campaigns do journalists face, and how do these shape journalists’ practices?
- As generative AI becomes more sophisticated, what risks and opportunities does it pose to journalists?
Introducing the workshop, UBC journalism professor Kathryn Gretsinger noted that in an era of divisive and often uncivil public debates, it was particularly important to have constructive, cross-cultural discussions on these complex issues.
Navigating disinformation and dark participation


Chris Tenove (top right), Julia Smirnova (top left), and Thorsten Quandt (bottom left).
The first session, “Disinformation and Journalism,” was moderated by CSDI’s interim director, Chris Tenove. He began by highlighting the alarming frequency with which journalists around the world are themselves the targets of disinformation, drawing on his recent report for UBC’s Global Reporting Centre, Not Just Words: How Reputational Attacks Harm Journalists and Undermine Press Freedom. Efforts to discredit and harass journalists can expose them to violence, damage their mental health, and make their work more difficult. Tenove emphasized the need for proactive and collective strategies to support journalists under attack.
Julia Smirnova, Senior Analyst at the Institute for Strategic Dialogue (ISD), further illuminated the disinformation landscape that journalists face. Referencing her report, “Capitalising on Crisis: Russia, China, and Iran Use X to Exploit Israel-Hamas Information Chaos” (ISD, Oct 2023), she explained how state actors conduct disinformation campaigns during global crises to achieve their political aims. Not only must journalists seek to counteract these manipulative narratives, but news organizations themselves are also targeted. She described ISD research on Russia-affiliated efforts to spread false information via “doppelgänger” sites that impersonated legitimate news sources. (Recent research by the Insikt Group suggests these doppelgänger sites are partly created using generative AI.) Smirnova’s remarks underscored the geopolitical context of the disinformation campaigns that journalists may encounter.
Professor Thorsten Quandt from the University of Münster discussed how journalists may face forms of ‘dark participation’ in response to their work, including misinformation, trolling, cyberbullying, and strategic manipulation. He emphasized how these behaviours have contributed to a toxic online environment. Quandt further argued that there have been major shifts in our understanding of the relationship between digital media and journalism over the last 25 years, which he traced through four phases: the initial niche phase of the 1990s, characterized by experimental digital journalism; the euphoria phase of the 2000s, marked by optimism about the potential of citizen participation; the disillusionment phase of the late 2000s; and the current doom and gloom phase beginning in the mid-2010s, dominated by concerns over misinformation, surveillance, and democratic erosion. Quandt argued that it is important to also focus on positive developments in journalism, though he admitted, “I know this sounds a bit strange coming from somebody who invented the term ‘dark participation.’”
Journalism’s response to AI-driven disinformation


Chris Tenove (top left), followed counterclockwise by Cathryn Grothe, Stephen Marche, and Matt Frehner.
The second session, also moderated by CSDI’s Chris Tenove, focused on the potential uses and misuses of generative AI in today’s information systems and the impacts that the technology could have on journalism ethics and jobs.
Cathryn Grothe, a research analyst at Freedom House, highlighted the repressive uses of AI technologies, as detailed in her organization’s 2023 Freedom on the Net report. She noted how affordable and accessible AI has made it easier for various actors, including governments, to engage in disinformation campaigns. Grothe also emphasized the potential for well-designed AI tools to support human rights, such as by helping people bypass government censorship and document human rights abuses.
Matt Frehner, the head of visual journalism at The Globe and Mail, discussed his memo to the newsroom providing guidelines on using AI in their work. These include being vigilant about intentionally or accidentally publishing deceptive images that were created or modified by AI tools; doing so, he warned, could “call into question everything we publish from then on.” Frehner noted that while AI could replace some sports, business, and recipe article writing, journalists can productively use AI technologies to develop investigative work, connect with sources, analyze data, and assist in the writing process.
Stephen Marche, a Toronto-based journalist, novelist, and creator of The Death of an Author, provided insights based on years of experimentation with AI. To develop creative work, he said, he uses multiple AI models and employs very specific syntactic instructions to achieve different literary effects. However, he noted that he doesn’t use these tools for journalism, because the technology cannot substitute for on-the-ground observation and original perspectives. In an AI era, “the writing that is going to matter is the writing that you can tell comes from a person.” Marche agreed that AI may supercharge the spread of disinformation, but he also noted that social media platforms already do so, primarily because many people actively spread false claims. “It’s very easy to blame technology,” he said, “but actually, the problem might just be people.”
The workshop made clear that collaborative efforts will be needed to address the multiple challenges to the information environment, which are driven by geopolitical conflicts, repressive governments, and ideologically motivated actors, as well as by new technological developments. It is undoubtedly a complicated era, but also one in which journalists can play critical roles.
CSDI will be publishing a short report in early 2024, building on the workshop’s insights. To receive it, stay tuned via our newsletter or social media channels.
Further resources
- A note on AI and The Globe and Mail newsroom, a memo to journalists written by Matt Frehner
- Capitalising on Crisis: Russia, China, and Iran Use X to Exploit Israel-Hamas Information Chaos, a report by the Institute for Strategic Dialogue
- Facts, Fakes and Figures: How AI is influencing journalism, a report by the Goethe Institut
- Freedom on the Net 2023: The Repressive Power of Artificial Intelligence, a report by Freedom House with contributions from Cathryn Grothe
- Not Just Words: Reputational Attacks against Journalists, a report co-authored by Chris Tenove
- The Paris Charter on AI and Journalism, a set of principles to guide journalists’ use of AI, created by an international commission chaired by Nobel Peace Prize laureate Maria Ressa
- Peering Into the Future of Novels, With Trained Machines Ready, featuring Stephen Marche