Monday, December 9, 2024

Section 230 Promotes the Marketplace of Human Ideas. But What about AI?


Legislators should tread carefully around Section 230 protections as we enter the age of AI.

Section 230 of the Communications Decency Act of 1996 has been praised by many as the “twenty-six words that created the Internet”—while derided by others as a legal loophole that empowers tech giants. For years, this controversial statute has been contested in Congress and the courts, and recently the Section 230 debate has taken another turn. The law has been invoked in a number of legal challenges to protect recommendation algorithms—most notably last year in the Supreme Court case Gonzalez v. Google. Now, legislators are considering an entirely new application of the statute: whether it covers the outputs of technologies such as generative AI.

A faithful reading of the law would suggest the answer is no. Extending these protections might yield marginal benefits for AI innovation, but Section 230 is meant to protect human-generated content on platforms and encourage greater communication and expression online. The human element is integral to Section 230 and tethers the law to speech benefits for Internet users.

The importance of these protections cannot be overstated, and lawmakers looking to narrow or repeal Section 230 entirely would do well to remember what it guards against.

Questions about the law's scope are unfolding within broader debates over whether it should be construed narrowly or repealed entirely. Over the summer, Representatives Cathy McMorris Rodgers (R-WA) and Frank Pallone (D-NJ) introduced a bill that would sunset Section 230 in its entirety. In their statement, the lawmakers claimed that "Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans. Especially children." At the time, congressional staff were also mulling over how the law applies to generative AI, and according to Axios, there was broad agreement that generative AI should not be protected by Section 230.

Similarly, in Gonzalez v. Google, the Supreme Court considered whether Section 230 protected interactive computer services making targeted recommendations to users. In a case that could have significantly narrowed the application of Section 230, the Court declined to reach the merits and remanded the case in light of its decision in Twitter v. Taamneh. However, during oral arguments, Justice Gorsuch questioned how content that is fully AI-generated would impact the scope of the contested law. Here again, AI was considered in the context of limiting Section 230’s application.

Both the sunset bill and Gonzalez v. Google highlight how AI figures into the Section 230 debates, where it is invoked either to delineate the law's outer bounds or to support shrinking or eliminating the statute altogether. Yet the virtues of Section 230 are apparent even to those who want it repealed. Even the sponsors of the sunset bill recognize that it "helped shepherd the internet [sic] from the 'you've got mail' era into today's global nexus of communication and commerce."

But even as legislators appreciate the statute's effects, they overlook what it sought to guard against.

Originally, Section 230 was part of a larger statute aimed at shielding minors from indecent and obscene material online. Those provisions were struck down as unconstitutional in Reno v. ACLU (1997), but Section 230 survived. It protects interactive computer services from being treated as the publishers or speakers of information provided by others, and it preserves their ability to engage in good-faith content moderation without incurring civil liability. On a strict reading of the text, the law is aimed at intermediaries and does not protect the content creator. While the amount of human input varies, generative AI often functions more like a creator than an intermediary. This is a view shared by Section 230's authors, Senator Ron Wyden (D-OR) and former Representative Christopher Cox (R-CA).

More fundamentally, Section 230 was aimed at resolving the "moderator's dilemma" created by pre-1996 case law, most notably Stratton Oakmont v. Prodigy: a platform that exercised no editorial control over user content was treated as a mere distributor and largely escaped liability, while one that made even modest attempts at moderation risked being deemed a publisher, legally responsible for all the content it hosted. Platforms thus faced two bad options. Removing some content could get them sued as publishers of everything they hosted, while leaving sensitive content up could damage their reputation. The rational response for platforms was the worst outcome for users: heavily filtered and moderated online spaces, or entirely unmoderated ones. To avoid liability, platforms had to host less speech.

Ultimately, private platforms are under no obligation to adhere to the First Amendment, but there is social value in their commitment to free speech principles and to hosting more, rather than less, user content. Section 230 aims to further the marketplace of ideas in digital spaces. While policymakers should also avoid obstructing AI innovation, Section 230 stands apart as a legal vehicle for encouraging speech online.


  • Rachel Chiu is a J.D. candidate at Yale Law School and a Young Voices contributor focused on online speech and technology policy.