
Why repealing Section 230 is not enough

By Sridhar Ramaswamy on 10/26/2020

There has been a lot of talk about repealing Section 230. With some merit, Section 230(c)(1) has been called "the twenty-six words that created the Internet." (This law was passed in 1996; talk about prescience!) Simply put, it allowed digital platforms to host user content without being responsible for what that content said. While there has been clarification and litigation, the essence has stood the test of time.

It is important to understand that this law came at a time when there was no Facebook, Twitter or YouTube. And there was real fear that a torrent of lawsuits would squelch the fledgling Internet.

Fast Forward to Now

There are a few platforms whose ability to shape conversation has become incredibly important: Facebook, Twitter, YouTube and WhatsApp have become the public squares of today. And the software that controls them has enormous power over humanity.

At many levels, these platforms are struggling. They struggle to protect democracy and civil discourse. They can barely keep out what is outright illegal. And worse, they are getting more entrenched over time, not less. Free markets are failing us, platitudes about "competition being a click away" notwithstanding.

At one level, a company is a completely unsuitable construct to police the speech of the world. Given that we live in a world of supervoting shares, do we really want Mark Zuckerberg's grandchild to decide "truth" for us? (Thanks to his supervoting shares, this is a more predictable outcome than you might think. And no, to the best of my knowledge, he doesn't have a grandchild. That's my point.)

And if you are an executive at the company, it is not clear what you are expected to do. Back when I was running Google ads, I once asked Senator Ron Wyden, a lead author of Section 230, what he wanted us to do about Russian interference in the elections. He waved a finger at me and said, "Stop all the bad actors."

Most politicians, citizens, and even tech workers would agree with the sentiment. The problem, of course, is how you do that.

While most people are aware that Section 230 protects online platforms from being sued over content that they host, it also has a "Good Samaritan" provision that allows these platforms to screen or remove content that they consider "offensive."

But do we really want a small, isolated team making decisions in secret about issues that might literally change the world?

"Repeal 230" is a popular thing to say these days. What happens after that? A spate of lawsuits against social media companies about content that they are hosting? Does this also not mean that no new startup will come up in user-generated content for fear of lawsuits?

Better Questions Lead to Better Answers

We need to understand the nuance of the underlying problem and move toward some standards and shared definitions. Here are some questions that individual tech companies are poorly positioned to decide and rule on by themselves:

  • Should content from influential people (with a lot of followers, let's say) be treated differently than content from people with very few followers?
  • Is there a commonly accepted definition for hate speech or falsehood or misinformation?
  • How much exposure should recommendation algorithms (which are under the control of the platforms) generate for problematic content before it is subject to more review?

Now that we've laid out some topics that require more thought, what are good first steps that we should be taking to drive change?

  • We need common definitions. These definitions need to be arrived at by a group of people who are clearly not controlled by any individual company.
  • We need the tech companies to fund the creation of a shared system that can be used by all companies to generate labels for these definitions. Such a system should be available at a low cost for startups.
  • This shared system will make it easier for new companies to create innovative products and compete with the giants.
  • Whether a site allows porn or hate speech is up to that site. But it should clearly disclose its policies so that people can make informed choices about which products they use and what content they see.
  • The degree of liability a company incurs should then depend on the kinds of policies it adopts: if a site thinks it is OK to run unsubstantiated conspiracy theories, perhaps it should be subject to lawsuits over those posts.

We need the tech giants to create a system that can be used by all of them: individual companies can decide what they want to allow and what labels to put on content, but it is critical that we think of this as shared infrastructure, subsidized by the big tech companies and accessible to everybody (startups, for example) at a low cost.
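To make the shared-labeling idea a little more concrete, here is a minimal sketch of what such a service's interface might look like. Everything in it is a hypothetical illustration, not a design this post commits to: the names (LabelCategory, SharedLabelingService), the categories, and the confidence threshold are assumptions chosen only to show the division of labor, where shared definitions produce standardized labels and each platform applies its own published policy to them.

    # Hypothetical sketch of a shared content-labeling service; all names,
    # categories, and thresholds below are illustrative assumptions.
    from dataclasses import dataclass
    from enum import Enum


    class LabelCategory(Enum):
        # Standardized categories, defined by an independent body rather
        # than by any single company.
        HATE_SPEECH = "hate_speech"
        MISINFORMATION = "misinformation"
        ADULT_CONTENT = "adult_content"


    @dataclass
    class Label:
        category: LabelCategory
        confidence: float        # 0.0-1.0, the shared system's confidence
        definition_version: str  # which published definition was applied


    class SharedLabelingService:
        # Funded by the large platforms, available to startups at low cost.
        def label(self, content_text: str) -> list[Label]:
            # Returns standardized labels for a piece of content.
            raise NotImplementedError("illustrative interface only")


    def example_platform_policy(labels: list[Label]) -> str:
        # One platform's own, publicly disclosed policy applied to the
        # shared labels; a different platform could make different choices.
        for label in labels:
            if label.category is LabelCategory.MISINFORMATION and label.confidence > 0.9:
                return "down-rank"
        return "show"

The point of the split is that the definitions and labels are common across the industry, while the decision about what to show, down-rank, or remove remains with each company and stays visible to its users.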

Stopping the spread of misinformation and preventing too much decision-making power from concentrating in the hands of a few companies are incredibly complex problems that can literally shape electoral as well as life-and-death outcomes. For us to find better answers to these problems, we need to start asking better questions. While it is clear that the blanket protections of Section 230 no longer make sense today, as with another well-known policy debate, "repeal" is the easy and tempting part. What we need to do now is work on the "replace" part!