
What responsibility does Twitter have to free speech?

What responsibility do social media companies like Twitter have to free speech? It depends on whether they are “landlords” or “publishers.”
Key Takeaways
  • The classic defense of free speech comes from John Stuart Mill. He argues that we can say (or do) what we want, so long as we do not harm another person (or impinge on their liberty).
  • Social media platforms can either be “landlords” (hosting a space) or “publishers” (providing content they deem preferable). The role we assign them determines their responsibility. 
  • Does social media improve or degrade society? Is Twitter a marketplace of dialectic or a juxtaposition of screaming vitriol?

Oliver works in a bar. He’s had a long night, and he’s not in the mood for this. He is used to meeting his fair share of drunks, but the guy he’s facing now is a different sort altogether. He’s a wobbling, stinking, drooling, slurring kind of drunk. His head barely off the bar, the man somehow manages to order a double whiskey. Should Oliver serve him the drink?

The moral question is what responsibility a provider of some good or service has in deciding who gets those goods or services. And it’s not only about drunks; it’s about social media, like Twitter, as well.

Twitter vs. free speech

There’s a lot of bile on Twitter. There are inflammatory, discriminatory, and abusive things thrown at every possible kind of person. There are racists, sexists, and bigots spewing vitriol a thousand times a minute, visible to anyone with an app or a screen to see.

The classic “free speech” defense comes from John Stuart Mill, who said that only that which harms someone should be banned or made illegal. So, inciting a riot or issuing death threats should be censored. So, too, should pornography or graphic videos in a forum where children might see them. But these are the black-and-white, clear-cut cases. You only have to walk a step before you’re in the fuzzy land of the gray zone.

After all, who determines what is and what is not harmful? Are “jokes” on Twitter about minorities or disabilities harmful or merely offensive? The border between the two is blurred and porous. Or, could we say that discriminatory remarks endorse, celebrate, and normalize discriminatory practices? For Mill, this would not hold water; he argued there needs to be an obvious causal link between my action and the harm caused. But in an overlapping, interconnected world — a world of sociologists and psychologists — is this good enough?

Publishers and landlords

The bigger issue is what responsibility, if any, social media companies like Twitter have in removing “harmful” content, however it is defined. As it stands, they must obey the laws of the countries in which they operate. In India, Turkey, and Pakistan, Facebook must take down thousands of “blasphemous” posts. But what about in liberal, freedom-protecting states?

The cultural ethicist Faye Lincoln makes a compelling distinction between “landlord” social media platforms and “publishers.” Landlords will “rent space on their servers so that everyone can gain access to the site.” Publishers, though, “design the templates that people use to connect and communicate with each other, oversee their general use, and promote preferred content.” Landlords are therefore less morally accountable for the content on their platforms than publishers are.

The problem, however, is that Twitter, Facebook, and YouTube deliberately (cynically?) flip-flop between the two, depending on their needs. If social media companies are called before the law of the land, it is quite easy for them to slip into the “we’re just landlords” role. They will say you can no more blame Twitter than the pen, or that Facebook is no worse than the printing press. They are tools or platforms to be used for the big and small, nasty and noble. If they project themselves as landlords, they wash their hands of the content they allow (beyond the legal and “Terms of Service” items).

And yet, when it comes to turning a profit, social media companies are quite happy to manipulate the user’s experience. Facebook, YouTube, and Twitter all have algorithms and tools by which they promote or highlight their “preferred content.” These smoke-and-mirror algorithms are exactly what Elon Musk wants to do away with. When Twitter chooses what you see or don’t see, they become publishers. As such, just like with the newspapers and books we read, publishers ought to be responsible for fact-checking, monitoring legality, and preventing harm or abuse in any form.

The marketplace of ideas

There is one argument that’s used again and again when the issue of censorship, bans, and timeline filtering arises: Free speech is the necessary tool by which progress happens. Only in an unfettered and open forum can we meet with other ideas, and so mortally wound the great monsters of bigotry, prejudice, and dogmatism. It’s an argument Mill himself made. When applied to Twitter, it argues that we should let people say what they want, because it presents alternative viewpoints, some of which might be closer to the truth than the existing, established narrative.

The problem with this, though, is that it’s a somewhat quixotic view of what social media really is. Twitter is not some Athenian forum or dialectic factory, where people listen to alternative viewpoints and politely acquiesce to those of a superior rational argument. Yes, there are small pockets of that, but more often it’s a shouting match. It’s hard to see any kind of productive dialectic amid the juxtaposition. Social media is set up to be an egoist’s outlet. It’s about my opinions, my arguments, my life experience. It’s not about conversation at all, let alone dialectic. Facebook and Twitter, as they exist right now, do not lend themselves to Mill’s dream of “free speech as a tool of progress.”

More questions than answers

When we strip everything else away, we have to see social media platforms as the private companies they are. As with Oliver in our opening example, Twitter and Facebook are providing a service. Free speech does not mean free access. If these companies decide this or that person is an unsuitable user of the service, they are within their rights to refuse them.

But even this is not so straightforward. Banning someone for expressing their beliefs — however repugnant we find them — is itself an act of discrimination. We are saying to them, “I will not have your kind around here because I don’t approve of your views.” How is it different from evangelical Christians refusing to bake a wedding cake for a gay couple?

Perhaps one way out of the maze might be found in a lesser considered part of Mill’s argument. Mill also argued that even if we may not censor someone, we can still punish them for violating a duty they have. It might be that we each have a duty to others — to be kind, respectful, and polite — and when we violate this, we open ourselves to punishment. With freedoms and rights come duties and responsibilities. So, we can say or tweet what we like, but doing so makes us answerable to government laws, employers, and our friends.


As with many ethical dilemmas, it’s an issue with more questions than answers. Technology is moving so fast that we, as a society, have not yet developed the virtues needed to deal with it.

What responsibility do you think social media has to censorship?

Jonny Thomson runs a popular Instagram account called Mini Philosophy (@philosophyminis). His first book is Mini Philosophy: A Small Book of Big Ideas.

