This week, the Supreme Court is hearing two cases that could upend the way we’ve come to understand freedom of speech on the internet. Both Gonzalez v. Google and Twitter v. Taamneh ask the court to reconsider how the law interprets Section 230, the statute that shields companies from legal liability for user-generated content. Gizmodo will be running a series of pieces about the past, present, and future of online speech.
When I was little, my mom laid down her two golden rules: “don’t be mean” and “don’t hurt others.” As I grew up, I did my best to uphold these values and stay away from haters, first in the real world and then in the digital one. In the early aughts, when people were slowly starting to adopt the internet en masse, this was largely easy. However, as the years went by and social media began to rise, it became harder. Hate, extremism, vitriol, harassment, and misinformation filled up screens—the internet was inescapable and awful.
As part of my job, I cover what goes on in online communities across the internet, which involves some pretty horrible content. You have high-profile people spouting misinformation about antidepressants, covid-19, and “herbal abortion teas” that in some cases are literal poisons. There’s also a lot of hate—hate towards the Jewish community, hate towards experts who attempt to correct misinformation, and hate aimed at someone who literally broke their back in a horrible accident. And that’s only the tip of the iceberg.
It seemed crazy to me that companies could get away with allowing content so vile, and in many cases dangerous, on their platforms. It’s not like they can’t legally do something about it. Under Section 230, a provision in the Communications Decency Act of 1996, online platforms are allowed to moderate objectionable content. Most importantly, though, Section 230 gives platforms a shield that frees them from legal liability for a lot of content that users post.
Given social platforms’ sketchy track record in content moderation, it seemed to me like Section 230 was a cushy law they didn’t deserve. Don’t get me wrong: I’m not against free speech. I mean, look at my profession. But I do think that the nauseating, harassing, and dangerous speech on the internet can be harmful to us as individuals and as a society. Therefore, when the Supreme Court agreed to take up a case related to Section 230, I saw it as a good thing.
The Supreme Court will hear oral arguments in Gonzalez v. Google on Tuesday. The case was brought forward by the family of Nohemi Gonzalez, a 23-year-old American college student who was one of nearly 130 people killed in Paris in 2015 by members of ISIS. Gonzalez’s family argues that Google aided ISIS when it recommended the terrorist group’s videos on YouTube, a violation of federal anti-terrorism law. Google, meanwhile, claims Section 230 protects it from such claims. The court is expected to deliver its decision on the case this summer.
Let them strike it down, I thought dramatically. Maybe that was the incentive companies needed to clean up their swampy platforms. I’m far from the only person who wants to see Section 230 gone. Both Democrats and Republicans dislike the provision, although for different reasons. President Joe Biden has called for reforming Section 230 and removing platforms’ liability shield, while former President Donald Trump wanted to throw it out altogether.
Despite my strong feelings about how Section 230 has contributed to the internet’s toxic landscape, today I’m here to tell you that I don’t think Section 230 should be repealed. I came to this conclusion after speaking with Jeff Kosseff, a cybersecurity professor at the U.S. Naval Academy and author of “The Twenty-Six Words That Created the Internet,” which analyzes Section 230 in depth and presents the costs and benefits of protecting online platforms.
Kosseff is widely considered one of the preeminent Section 230 experts out there. When I shared my concerns about Section 230 and the state of the internet, he told me he agreed that “there are substantial harms out there” that need to be addressed. However, he doesn’t think Section 230 is responsible for most of our complaints.
Overall, speaking with Kosseff helped me separate Section 230 from the angry public discourse on both ends of the political spectrum.
That doesn’t mean I think Section 230 is perfect. Even Kosseff is in favor of modest amendments. I’ve come to think of the internet like a house, with Section 230 as its foundation. It’s a good base, but the house also needs things like a frame and a roof. It needs to be cared for and maintained, repaired, and even modified over time—or else it all comes crashing down.
Check out our full Q&A with Kosseff below.
This interview has been edited for length and clarity.
Gizmodo: What would you say to people like myself who believe that the internet under Section 230 has turned into a dangerous swamp that can, in some cases, threaten lives?
Jeff Kosseff: I fully agree that there are substantial harms out there and they’re a serious problem that we need to address. But the [question] is: Is Section 230 to blame for them? And I think you have to look at why Section 230 was passed in the first place.
There was a case that [Section 230] was specifically addressing [Stratton Oakmont v. Prodigy], which held that the way platforms reduce their liability is not to do any moderation. Section 230 addresses that by saying ‘we’re going to remove this disincentive and let the platforms come up with moderation policies and procedures that best serve their users.’ So when you’re blaming Section 230 for the internet and all of the harms on there, I think you have to break it up into what you’re looking at.
There are certain things that Section 230 does protect platforms from liability [for], and it’s primarily been defamation. That’s been the main issue. [When it comes to sex trafficking,] there is actually an explicit exception in Section 230 for sex trafficking. There’s also an explicit exception that’s always been in Section 230 for federal criminal law, and another for intellectual property law. And all of those things kind of get conflated.
Then there’s also a lot of what’s known as lawful but awful content. And the bottom line is that the First Amendment protects a whole lot of really bad stuff. With or without Section 230, the government can’t impose penalties for that content. That’s stuff like misinformation and hate speech, the content that Section 230 is often blamed for. But Section 230 actually facilitates the ability of platforms to develop policies to block content without becoming liable for everything.
So, I fully agree with you that there’s a lot of really harmful stuff out there. I just think that it would be too easy to attribute all of that to Section 230 when the bottom line is that we’re not Europe. In Europe, they have things like hate speech laws, which themselves have been abused. There have been politicians in Europe who have gotten content that’s [critical of] them taken down under the hate speech laws. There are a number of countries all around the world that for the past five years have passed misinformation laws, and they use those laws to take down and punish speech that criticizes the official government line.
But that’s also not a Section 230 issue in the United States. Under our current First Amendment precedent, we could never have a misinformation law. Perhaps the Supreme Court would radically reinterpret the First Amendment, but I think that would be a really bad place to go. Because while I think misinformation is a real problem, I think it’s a bigger problem to give the government the ability to define what’s misinformation.
Gizmodo: That’s a great point. I know some people, myself included, probably blame Section 230 for a lot of harmful things that aren’t Section 230's fault. Do you have any ideas on why people have turned Section 230 into a scapegoat for everything that’s wrong with the internet?
Kosseff: Well, I’m sure it has nothing to do with the fact that there is a book titled, “The Twenty-Six Words That Created the Internet.” That does not play any role in this subject. So, I think Section 230 is responsible for the business models of platforms, large and small, that host user content. They frankly could not exist in their current format without Section 230. They could exist, but they’d be very different. So because of that, everything bad that happens on the platforms is attributed to Section 230, when in fact, Section 230 often is part of the solution.
There are some specific types of cases where Section 230 is a problem or does pose a barrier for plaintiffs. Though it tends to be things like defamation [or] certain types of harassment, if it rises to the level of actually being a viable action. But even then, there are still First Amendment protections for the platforms that are really hard to overcome. I mean, defamation, even against the person who posted, is a really hard claim to bring in the United States, even without Section 230. It becomes even harder when you’re bringing it against the platform that distributed that content.
Gizmodo: I wanted to ask you a question about the origins of Section 230 and how the situation in the ’90s differs from what we have now. One of the original intentions of Section 230 was to prevent fledgling technology companies from being slammed with tons of lawsuits over user-generated content that would just be impossible to moderate.