Roundtable on Cumulatively Harmful Speech
THE PROBLEM OF CUMULATIVELY HARMFUL SPEECH
Wednesday, November 19
To be held on Zoom
Start 10:30 am ET / 3:30 pm GMT
End 12:45 pm ET / 5:45 pm GMT
The purpose of this roundtable is to discuss what social media platforms should do about cumulatively harmful speech, defined broadly as speech that becomes harmful when aggregated and algorithmically amplified. There is substantial disagreement on how to conceptualize this category, and how to determine empirically what speech belongs to it. There is also substantial disagreement on the appropriate remedies, such as demotion, and whether such remedies raise free speech concerns.
This event will bring together a small number of social scientists, lawyers, philosophers, Oversight Board members and staff, and industry professionals, with expertise across a range of content areas (from hate speech to incitement to self-harm content to health and electoral misinformation). The first hour will grapple with the conceptual and empirical issues; after a short break, we will spend the second hour discussing the normative questions about how platforms should respond.
Regulating Social Media and the Future of Public Health
Ricki-Lee Gerbrandt will speak on a panel alongside Jeff Modisett (Fulbright Fellow and Honorary Professor of Practice at UCL) and Judith van Erp (Professor of Regulatory Governance at the University of Utrecht), chaired by Colin Provost (UCL Department of Political Science).
In recent years, the detrimental health effects of social media have become undeniable, particularly for young people. The algorithms that platforms use to keep people engaged with their devices raise the question of whether they foster addiction. Moreover, an abundance of evidence has demonstrated that online consumption patterns can lead to suicidal thoughts and other mental health problems.
Regulators around the world have taken a variety of approaches to these problems. Legislation, lawsuits, and reputation-based “naming and shaming” tactics have all been utilised to combat the public health effects of social media. In this event, we explore these different approaches and attempt to discern what impact they have had so far, both separately and jointly, on the important question of how to regulate social media for public health.
Information on the event can be found here: https://www.ucl.ac.uk/social-historical-sciences/events/2025/oct/regulating-social-media-and-future-public-health
Lab presents work at Trust & Safety Research Conference at Stanford University
Ricki-Lee Gerbrandt will present the paper (co-authored with Jeff Howard) “Should Social Media Platforms Permit Violating Content that is ‘Newsworthy’?”
You can find the conference proceedings here: https://conferences.law.stanford.edu/tsrc/
You can find the paper here: https://tsjournal.org/index.php/jots/article/view/253
Ricki-Lee Gerbrandt joins the SLAPPs Research Group
The SLAPPs Research Group is an independent, international platform committed to advancing balanced, evidence-based research on strategic litigation against public participation (SLAPPs) and anti-SLAPP reform.
The Group brings together scholars, practitioners, and advocates from a range of jurisdictions and disciplines to share insights, foster collaboration, and deepen understanding of SLAPPs and their broader implications for media freedom, public discourse, and democratic accountability. More information can be found here: https://www.theslappsresearchgroup.org/
Ricki-Lee Gerbrandt presented new paper ‘Should Social Media Platforms Permit Violating Content that is ‘Newsworthy’?’, co-authored with Jeffrey Howard, at the University of Cambridge, Faculty of Law
Ricki-Lee Gerbrandt presented ‘Should Social Media Platforms Permit Violating Content that is ‘Newsworthy’?’, a new paper co-authored with Jeffrey Howard, at the Centre for Information & Intellectual Property Law at the University of Cambridge, Faculty of Law. Their new paper is available on SSRN here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5364565
Ricki-Lee Gerbrandt spoke at the LSE Symposium on the Future of SLAPPs Research
Ricki-Lee Gerbrandt spoke at the symposium on the future of SLAPPs (Strategic Lawsuits Against Public Participation) at the London School of Economics & Political Science. Event details here: https://inforrm.org/2025/02/20/event-symposium-on-the-future-of-slapps-research/
UCL Laws event on the Online Safety Act
Jeff Howard and Ricki-Lee Gerbrandt speak at a workshop on the Online Safety Act organized by the UCL Faculty of Laws. The programme is available here: https://www.ucl.ac.uk/laws/events/2025/jun/online-safety-act-workshop?mc_cid=51976585b3&mc_eid=7ba725419a
Ricki-Lee Gerbrandt spoke at the Bonavero Institute of Human Rights, University of Oxford on populism and attacks on press freedom
Ricki-Lee Gerbrandt spoke on the panel “The populist playbook and attacks on press freedom” at the conference “Democracy, Law & Independent Journalism” hosted at the Bonavero Institute of Human Rights at the University of Oxford. Event details here: https://www.law.ox.ac.uk/content/event/conference-democracy-law-and-independent-journalism
Ricki-Lee Gerbrandt presented at The Future of the Online Safety Act Conference
Ricki-Lee Gerbrandt presented her research on the protection of journalistic content and the press in the UK Online Safety Act at the University of Sussex conference on the Future of the Online Safety Act.
Lab co-organizes NYU-KCL-UCL Workshop in Practical Philosophy
The Digital Speech Lab co-organized a two-day workshop with the YTL Centre at King’s College London, hosted by the Legal Studies Program at NYU Abu Dhabi with the support of the NYU-AD Philosophy Program. Speakers included Sarah Fisher (Cardiff), Michael Hannon (Nottingham), Jeff Howard (UCL), Jonathan Kwan (NYU), Sarah Paul (NYU), Massimo Renzo (KCL), Matthew Silverstein (NYU).
Jeffrey Howard appointed to Ofcom's Online Information Advisory Committee
Jeffrey Howard will serve on the Online Information Advisory Committee at Ofcom. Press release here: https://www.ofcom.org.uk/about-ofcom/structure-and-leadership/ofcom-establishes-online-information-advisory-committee
Philosophy of Content Moderation Conference
Jeff Howard co-organises the first International Conference on the Philosophy of Content Moderation, held in Monterey, California, with Étienne Brown (San Jose State University) and Hanna Gunn (University of California, Merced). Details here.
Ricki-Lee Gerbrandt spoke at the Digital Constitutionalism Academy Conference in Florence, Italy
Ricki-Lee Gerbrandt presented her research on the protection of high-value news in AI-driven content moderation at the Digital Constitutionalism Academy Conference in Florence, Italy.
Jeff Howard speaks at University of Glasgow
Jeff Howard gave a draft paper, “Incitement to Self-Harm,” at the joint Political Theory Colloquium convened by the University of Glasgow and University of Strathclyde.
Jeff Howard lectures at Hong Kong University
Jeff Howard spent a week based at the AI & Humanity Lab in the Department of Philosophy at Hong Kong University, delivering lectures to the MA students and speaking on “The Ethics of Amplification” at the Lab Colloquium.
Jeff Howard publishes post on Meta’s policy changes
Jeff Howard was invited to write a post for the Public Ethics blog evaluating the various recent changes to Meta’s policies on issues such as fact-checking. You can read the post here:
Lab submits evidence to Meta Oversight Board on case about human rights defenders
Ricki-Lee Gerbrandt (UCL) and Jeffrey Howard (UCL) co-authored a submission to the Oversight Board in response to its call for public comments (“Content Targeting Human Rights Defender in Peru”).
You can find the Board’s call for comments here and our submission here.
Lab submits evidence to DSIT call on misinformation and UK summer riots
Jeffrey Howard (UCL) and Maxime Lepoutre (Reading) collaborated on a submission to the Department of Science, Innovation and Technology, which put out a call for evidence on the role of misinformation in leading to the 2024 summer riots — and how the Online Safety Act could be used to reduce the risk that such incidents will be incited online.
You can also see a separate submission by Digital Speech Lab Faculty Fellow Beatriz Kira (Sussex) here.
Jeff Howard speaks at UN Internet Governance Forum in Riyadh
Jeff Howard spoke on a panel (Strengthening Content Moderation through Expert Input) at the Internet Governance Forum in Riyadh, alongside Emilar Gandhi (Meta), Conor Sanchez (Meta), Naomi Shiffman (Oversight Board), and Tomiwa Ilori (B-Tech Africa Project by UN Human Rights).
Lab submits evidence to Oversight Board on cases about UK summer 2024 riots
Ricki-Lee Gerbrandt (UCL), Jeffrey Howard (UCL), and Maxime Lepoutre (Reading) collaborated on a public comment to the Oversight Board on cases involving incitement, hate speech, and misinformation that may have contributed to the summer 2024 UK riots.
You can find the Oversight Board’s call for comments here and our submission here.
Jeff Howard gives keynote at TUM Content Moderation Lab
Jeff Howard gave a keynote address (“The Imperative of Moderation”) at the conference “Facilitating Constructive Dialogue: Toxic Online Speech” hosted by the Content Moderation Lab, part of the TUM ThinkTank at the School of Politics and Public Policy at the Technical University of Munich. The discussants were Rebekka Weiß (Microsoft), Till Guttenberger (Bavarian State Ministry of Justice), and Miguelángel Verde (Wikimedia Foundation).
New Ideas in Legal & Political Philosophy of Online Speech
The Digital Speech Lab organised a one-day event featuring early- and mid-career scholars presenting new work on the legal and political philosophy of online speech. Participants included Sarah Fisher (Cardiff), Iason Gabriel (Google DeepMind), Jonathan Gingerich (Rutgers), Kai Spiekermann (LSE), David Axelsen (Essex), Jeffrey Howard (UCL), Ricki-Lee Gerbrandt (UCL), Tena Thau (UCL), Robert Simpson (UCL), and Kyle Van Oosterum (Oxford).
Jeff Howard presents work at American Political Science Association annual conference
Jeffrey Howard presented his paper “Moderation by Machine” at the annual conference of the American Political Science Association in Philadelphia.
Sarah Fisher presents at workshop on social norms and oppressive structures
Sarah Fisher presented a paper co-authored with Jeffrey Howard and Beatriz Kira, “Moderating synthetic content”, at a workshop on social norms and oppressive structures in Manchester: https://events.manchester.ac.uk/event/event:e8t-lw7jrp3p-mv3ka1/social-norms-and-oppressive-structures.
Sarah Fisher presents at Society for Applied Philosophy Annual Conference
Sarah Fisher presented her paper “Large language models and their big bullshit potential” at the Society for Applied Philosophy’s Annual Conference, which took place in Oxford: https://www.appliedphil.org/society-for-applied-philosophy-annual-conference-2024/
Lab runs workshop on Communication in the Digital Age
The Digital Speech Lab ran a two-day workshop exploring new work on communication in the digital age.
Sarah Fisher presents at workshop on genre and conversation
Sarah Fisher presented her paper “Synthetic self-editing” at a workshop on genre and conversation, which took place in Reykjavik: https://sites.google.com/view/genreandconversation/home
Lab co-runs workshop in Oxford on Social Media Corporations: Risks, Rights, Responsibilities
The Digital Speech Lab has co-organised a two-day workshop, in conjunction with the Oxford Institute for Ethics in AI, to be hosted at Magdalen College, Oxford. You can find the full schedule here: