
Law
Jun 20, 2023
Author, "Section 230 Immunity Isn't a Guarantee in a ChatGPT World," Bloomberg Law, June 2023
The article argues that large language models (LLMs) like ChatGPT may significantly weaken the practical importance of Section 230 of the Communications Decency Act. Section 230 was designed to shield online platforms from liability for third-party content they host or recommend. But LLMs differ from traditional social media platforms and search engines because they generate original responses rather than merely publishing third-party material. Since Section 230 immunity turns on the content having been created by another information content provider, AI-generated outputs likely fall outside its core protection.
Even so, the absence of Section 230 immunity does not automatically mean AI companies will face liability. Traditional legal doctrines such as defamation and product liability map poorly onto probabilistic systems that lack intent and are not legal persons. Courts have historically been reluctant to impose liability for words and ideas, and AI-generated content raises novel questions about fault and causation. As a result, existing law contains a gap that may require new, AI-specific legal standards to address harms caused by generative models.


