Tech Giants Face Growing Scrutiny Over AI-Driven Content

Tech giants like Meta and Google are facing increased scrutiny over their use of artificial intelligence (AI) on their platforms. A recent string of lawsuits seeks to chip away at the long-held notion that these companies are immune from liability for content posted by their users.

The issue centers on Section 230 of the Communications Decency Act, a law passed in 1996 that protects websites from being sued over user-generated content. However, recent court cases suggest that this shield is weakening and may no longer insulate tech giants from accountability.

In New Mexico, a jury found Meta liable for child safety issues related to its platform. Meanwhile, jurors in Los Angeles found Google and its subsidiary YouTube negligent in a personal injury trial.

The plaintiffs argue that these companies are intentionally engineering addiction in minors with their products, and that the combination of features like autoplay, recommendation algorithms, and notifications creates a “digital casino” environment. They claim that this has led to serious mental health problems for young users.

Google is also facing allegations over its AI Mode feature, which creates summaries and links based on user input. The plaintiffs argue that this feature exposes personal identifying information (PII) of Epstein victims without consent, leading to harassment and fear.

Experts say that these cases could reach the Supreme Court, which could determine whether tech giants remain protected by law against claims related to AI-driven content.

“The questions are only becoming more and more challenging,” said Farid Johnson, policy director at the Knight First Amendment Institute. “We’re pushing Congress to enact a more measured approach that allows tech companies to obtain Section 230 protections as long as they meet certain conditions related to data privacy, platform transparency, and other prerequisites.”

The issue highlights the growing complexity of online content moderation and the mounting pressure on tech giants to accept accountability. As AI plays a larger role in the digital landscape, these companies face increasing demands to ensure their platforms are safe for all users.

Source: https://www.cnbc.com/2026/04/03/meta-google-under-attack-court-cases-bypass-30-year-old-legal-shield.html