Elon Musk’s imminent purchase of Twitter, and his view that free speech online should extend as far as the law allows, has led some progressive thinkers to despair. But not so long ago, a fundamental distrust of anyone who wanted to restrict free speech was a pillar of progressive thought, even alongside the recognition that absolute free speech was neither feasible nor desirable. We need to revisit some of those older arguments about why constraining free speech can lead to uncritical belief in authority. That is not to say there should be no moderation of online speech when it comes to personal attacks. Conflating the freedom to dissent, and even to engage in conspiratorial thinking, with the freedom to attack others is a confusion only libertarians make, and one we should avoid, argues Peter Godfrey-Smith.
*Peter Godfrey-Smith will be speaking at HowTheLightGetsIn in Hay 2022, June 2 - 5. Check out the programme and book your tickets here.*
Many people, especially on the "progressive" side of politics, seem in recent years to have given up on a combination of views about free speech that was common not long ago. The position I have in mind accepts that the basic principles and proper borders in this area are difficult matters, and that an ideal of pure or absolute free expression is not feasible, but it combines this recognition with a strong tendency to favor free expression in practice and to be habitually distrustful of restriction – expecting it to go too far, to misfire in various ways, and to do more harm than good.
The move away from this combination of attitudes has probably been partly due to changing agendas within progressive politics (heightened concern about racism and hate speech, for example). But alongside this, if an explicit defence of the shift were to be given, it might be that the landscape of information exchange has been transformed in ways that make a new approach necessary. The internet has opened up the possibility of instant and wide dissemination of ideas that have adverse consequences on an unprecedented scale. Information flow is also subject to less gate-keeping by publishers, editors, and other old-media forces. With the stakes now so high, more restriction is needed, and new media and technology companies have a responsibility to exert a greater level of control over what appears on their platforms.
The problems this attitude responds to are real, but I don't support the shift. The new situation we find ourselves in has a mix of features, some of which give plenty of support for the attitude I described at the start, an attitude where the presumption against restriction is strong.
Defences of free speech might be put into two broad categories. One kind appeals to the moral value of individual autonomy, or something along those lines. Another kind of argument is based on consequences, costs and benefits, and the search for political arrangements that are likely to enable a society to achieve human flourishing and avoid missteps. As J.S. Mill noted in his classic defence of "liberty of thought and discussion" in the nineteenth century: authorities are not infallible; in cases when they are largely right there may be something to learn from radically contrary opinions; and even well-founded ideas deaden when they are not subject to active debate.
I think there is a good deal to be said for both kinds of defence. Given the current perception that the more practical, consequence-based arguments are weaker than before, those are the ones I will discuss here.
The situation that is putting pressure on older defences of free speech is a combination of the chaotic nature and unusual efficacy of social media. Irresponsible and ill-informed writing can have an unprecedented and destructive reach. That is true, but there are problems, new or heightened ones, on another side as well. Agencies within the increasingly powerful "administrative state" have incentives to present very clear public messages and maximize their impact. Inside these organizations, the process of working out what the message is to be also seems strongly affected by particular kinds of group dynamics. Messages are simplified, qualifications and uncertainties are stripped away, and particular risks and concerns are highlighted over others. In this setting, one of Mill's arguments takes a new form. When dealing with complex modern problems, we will almost inevitably get some things wrong. Any policy choice might need to be revised or modified. But the social dynamics I sketched above push against the presentation of ideas in ways that acknowledge this; the tendency instead is a "closing of ranks" in the service of effective messaging. Along with this goes a temptation, in our wild-west informational landscape, to restrict or marginalize dissent. Then when things do go wrong, as they sometimes will, it will be particularly harmful. The consequences for trust and social order will be worse than if an unruly back-and-forth of ideas had been tolerated, and people had more responsibility for where their beliefs ended up. There is an evident temptation, within governmental institutions that wield power and their media allies, to simplify and exaggerate in the service of present needs. But there will be future needs, future crises, and a need for future trust.
A view closely related to mine was capably expressed earlier this year by the creators of Substack, in a blog post defending their own openness to controversial writing on their forum. They expressed concern about an erosion of confidence in essential institutions, and argued that maintaining openness to controversial ideas is "a necessary precondition for building trust in the information ecosystem as a whole." As they put it, "when you use censorship to silence certain voices or push them to another place, you don’t make the misinformation problem disappear but you do make the mistrust problem worse."
In the most recent round of discussion of these issues, in response to Elon Musk's plan to change policies at Twitter, some critics of this opening-up have emphasized the problem of attacks on individuals, attempts to harm reputations falling short of libel, and the like. Malicious attacks on individuals are not the sort of thing that the defence I am giving would protect. Discouraging and filtering personal attacks and attempts to do harm of this kind is an appropriate role for "moderation" of social media expression, especially if we value its role as a forum for real discussion. A private company, like Twitter, can fine-tune its policies with that problem in mind, while opting for very wide scope in its commitment to the expression of opinion on matters of the day. That's the way I hope things go with Twitter. The opinions allowed in will then include, for example, conspiracy theories of various kinds. These movements have become a problem in the informational landscape and I understand the desire to act against them. But do we want a tech company determining which conspiracies are not fit for discussion of this kind? (QAnon out; what about JFK?) We are better off with all of them in the sunlight.
The personal-attack side of the problem of free expression bears on questions about controversial content only if the advocacy of free speech takes a pure, libertarian form, on which both kinds of restriction stand or fall together. They do not, and the case for allowing unconstrained debate, including input from very controversial voices, on the genuine problems we face remains strong.