Extremist Speech, Compelled Conformity, and Censorship Creep
Danielle Keats Citron*
Silicon Valley has long been viewed as a full-throated champion of First Amendment values. The dominant online platforms, however, have recently adopted speech policies and processes that depart from the U.S. model. In an agreement with the European Commission, the dominant tech companies have pledged to respond to reports of hate speech within twenty-four hours, a hasty process that may trade valuable expression for speedy results. Plans have been announced for an industry database that will allow the same companies to share hashed images of banned extremist content for review and removal elsewhere.
These changes are less the result of voluntary market choices than of bowing to governmental pressure. Companies’ policies about extremist content have been altered to stave off threatened European regulation. Far more than illegal hate speech or violent terrorist imagery is in EU lawmakers’ sights; so too are online radicalization and “fake news.” Newsworthy content and political criticism may end up being removed along with terrorist beheading videos, “kill lists” of U.S. servicemen, and instructions on how to bomb houses of worship.
The impact of extralegal coercion will be far reaching. Unlike national laws that are limited by geographic borders, terms-of-service agreements apply to platforms’ services on a global scale. Whereas local courts can order platforms only to block material viewed in their jurisdictions, a blacklist database raises the risk of global censorship. Companies should counter the serious potential for censorship creep with definitional clarity, robust accountability, detailed transparency, and ombudsman oversight.
Continue reading in the print edition . . .
© 2018 Danielle Keats Citron. Individuals and nonprofit institutions may reproduce and distribute copies of this Article in any format at or below cost, for educational purposes, so long as each copy identifies the author, provides a citation to the Notre Dame Law Review, and includes this provision in the copyright notice.
*Morton & Sophia Macht Professor of Law, University of Maryland Francis King Carey School of Law; Affiliate, Stanford Center on Internet & Society; Yale Information Society Project. I am grateful to Leslie Henry, Sarah Hoyle, Margot Kaminski, Kate Klonick, Emma Llansó, and James Weinstein for closely reading drafts, and to Tabatha Abu El-Haj, Jack Balkin, Susan Brison, Ryan Calo, Chapin Cimino, Will Duffield, Ed Felten, James Fleming, Brett Frischmann, Mike Godwin, Abner Greene, Anil Kalhan, Daphne Keller, Paula Kift, Michael Nelson, Megan Phelps-Roper, Neil Richards, Flemming Rose, Marc Rotenberg, John Samples, Alexander Tsesis, and Benjamin Wittes for advice. Participants in the Fordham Law Review’s “Terrorist Incitement on the Internet” symposium, Twitter’s Trust and Safety Summit, Drexel Law School’s Faculty Workshop, the Cato Institute’s “The Future of the First Amendment” conference, and the Yale Information Society Project’s Lunchtime Talk Series provided helpful feedback. Thanks to Emily Bryant and Ian Königsdörffer for superb research assistance. I owe a debt of gratitude to Susan McCarty for her expert editing, Frank Lancaster for assisting in all things, and Dean Donald Tobin for supporting this research. Special thanks to the editors of the Notre Dame Law Review, especially Shannon Lewry, for their insight, help, and patience.