Section 230 of the Communications Act, which shields online platforms from liability for content posted by their users, will come before the Supreme Court next term. How it might be affected is anyone’s guess, but we can be sure that the regulatory landscape for tech will look rather different this time next year.
We’ve discussed Section 230 many times on TechCrunch, and legal definitions and precedents can be found elsewhere, so we don’t need to delve into the details for now. Suffice it to say that this section of the law essentially says that as long as reasonable steps are taken to deal with illegal and objectionable material on their platforms, companies like Alphabet and Meta cannot be held responsible for that material.
There are limitations and exceptions to how this protection works, but the law acts as a “safe harbor” in which companies can operate without worrying about being sued for defamation over something posted by a user.
But the question that has dogged these companies for years is the exact extent of those limitations and exceptions, and whether the platforms have perhaps too much leeway in how they handle content like COVID-19 misinformation, live-streamed crime and hate speech. However wise or unwise Section 230 was when it was written, over the last decade the industry and the world have evolved to the point where it may be time, at the very least, for some sensible additions and revisions.
The case the Supreme Court announced it intends to take up, Gonzalez v. Google, claims that Google is liable for allowing certain content from the Islamic State terrorist group to remain on its platform, contributing to the 2015 Paris attacks that killed 130 people. Not that it matters for the purposes of the court taking the case, but the claim here has real weight, unlike some complaints about the law.
Amicus briefs are already flooding in, because the worst-case scenario — Section 230 basically being repealed entirely — would be devastating for countless online platforms and companies. As many have pointed out, this limitation of liability is complex and important to all kinds of free expression on the Internet, and removing it would open the door to abuse from every direction.
At the very least, such an outcome would cause panic across industries, with tech companies scrambling to protect themselves, investors pulling out and dumping shares, and users floundering as the services they rely on change in fundamental ways.
It’s not as if the Supreme Court is expected to issue an opinion saying “platforms are fully responsible for everything posted on them, immediately and irrevocably,” or anything like that. But small changes make a big difference, and if the court simply ruled that Section 230 doesn’t protect Google in this case, every lawyer in the country would rush to apply that new reading of the law to policies, behaviors, features, everything. The court might even (though it’s not likely) punt the matter to the FCC, which has been the agency of record for most of the Communications Act for the better part of a century.
Speculating on the likely outcome at this early stage is probably fruitless, but the sheer unpredictability of the situation makes it almost certain that the already numerous efforts to revise or replace Section 230 will multiply and intensify. Given the current political divide in general, and how divided Congress has been on this issue in particular, the likelihood of a new law gaining bipartisan support in the short term is low. And with midterm elections looming, much will depend on the makeup of the new House and Senate as well.
Whatever it holds, the court’s decision will be pivotal, with any outcome prompting lawmakers to act around it, perhaps even preemptively. And the public debate, as with net neutrality, will be a frenzy of opportunism, FUD, and technically misleading material. Nothing in Section 230 prevents any industry with skin in the game from doing what it can to influence the debate.