Thursday, April 02, 2026

Meta's AI assistance to advertisers defeats Section 230, court says

Bouck v. Meta Platforms, Inc., No. 25-cv-05194-RS (N.D. Cal. Mar. 24, 2026) 

Does offering AI enhancements to deceptive ads constitute participating in what makes them illegal, for purposes of avoiding Section 230? This case answers "yes, relatively easily," and it should be raising red flags in a lot of boardrooms.

Plaintiffs here are victims of a pump-and-dump scheme involving shares of a Chinese penny stock. The scammers initially targeted them on Facebook and Instagram through advertisements for investment groups promising handsome returns. For example, in one ad, Kevin O’Leary—“a businessman well-known for his role on Shark Tank”—appears to advertise a private group in which stock tips are shared. In another, Savita Subramanian—Bank of America’s head of U.S. equity and quantitative strategy—“looks to be promoting” spots in a “free trading training” group that boasts “95% accuracy” and 30-40% daily returns.

Plaintiffs sued Meta for aiding and abetting fraud; negligence; breach of contract; violation of the California Unruh Civil Rights Act, Cal. Civ. Code § 51; and unjust enrichment, along with promissory estoppel and breach of the covenant of good faith and fair dealing as alternatives to their breach of contract claim. The court allowed the claims for aiding and abetting fraud, negligence, and unjust enrichment to proceed.

Section 230: An “information content provider” is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through . . . any other interactive computer service.” If Meta was sufficiently involved in the “creation or development” of the fraudulent ads, the court reasons, then those ads were not just “provided by” the scammers—they were also provided by Meta. Under 9th Circuit precedent, a website helps to develop unlawful content “if it contributes materially to the alleged illegality of the conduct.”

Plaintiffs’ allegations of material contribution depended on three tools Meta offers to advertisers.

(1) Flexible Format: “Meta automatically optimizes the ad and shows it in the format that Meta predicts may perform best” by “selecting the specific images and other content that will be included, the layout, the platform (Facebook or Instagram), and how the ad will be displayed to a particular user (e.g., in the user’s feed, as a story, etc.).”

(2) Dynamic Creative: This tool “takes multiple media, such as images and videos, and multiple ad components, such as images, videos, text, audio, and calls-to-action, and then mixes and matches them in new ways to improve . . . ad performance.” This “allows the advertiser to automatically create personalized creative variations for each person who views the ad, with results that are scalable.”

(3) Advantage+ Creative: Meta uses generative AI to apply “creative enhancements” to optimize advertisements, including AI-generated text and images. “The alterations may include modifications to images (such as applying different text overlays or modifying the image background), generating variations of the ad’s text to target different audiences, and inserting ‘Call to Action’ buttons, such as a link to purchase a product or join a WhatsApp group.”

The complaint alleged that scammers used these tools at least to create different variations of ads featuring Ms. Subramanian. That was enough to plead the existence of a dispute over whether Meta “contribute[d] materially to the alleged illegality of the advertisements.” “Plaintiffs have averred that Meta participated in the construction of the ads by literally generating, using artificial intelligence, the images and text in the advertisements. That degree of participation is not protected by section 230.” In other words, “optimizing the appearance of an ad to drive engagement” was enough of a contribution to the ads’ illegality to preclude section 230 immunity. Pleading the ability to create “AI-generated text and images” “is more than enough to aver ‘that the tools affect ad content in a manner that could at least potentially contribute to their illegality.’”

Meta argued that its tools were “neutral” and that offending content was exclusively provided by the scammers. But Section 230 allows services to “structure the information provided by users,” not “to create the information itself.” Plaintiffs alleged that “Meta created the offending information by generating some of the false statements that tricked them into the investment scheme.”

If a scammer tells Advantage+ Creative “that he is interested in an ad promising astronomical weekly investment returns, Advantage+ Creative will spin up a slew of ads that include the provided language and other language, images, and videos it decides will be effective in promoting the user’s chosen message.” Indeed, a journalist from Reuters asked for an ad asking users if they were “interested in making 10% weekly returns.” Advantage+ Creative “generated a slew of ads saying just that and new ads with language like ‘Tired of living paycheck to paycheck? Break the cycle and start earning steady weekly income with our proven system.’ The reporter did not come up with that (patently fraudulent) language; it was all Meta.” It was at least plausible that some of the illegal content (i.e., the fraudulent statements in the ads) was created by Meta, not by the scammers.

Aiding and abetting fraud: “California has adopted the common law rule that [l]iability may ... be imposed on one who aids and abets the commission of an intentional tort if the person ... knows the other’s conduct constitutes a breach of a duty and gives substantial assistance or encouragement to the other to so act.” Meta argued that it neither had knowledge of the scammers’ conduct nor substantially assisted in the execution of their scheme. Plaintiffs alleged that Meta has been repeatedly subject to lawsuits stemming from similar schemes; that Meta itself acknowledged the proliferation of fraud using public figures and celebrities’ images; and that Meta had an “ad review system” in place to screen ads for “violation of [Meta’s] policies.”

Meta responded that knowledge that fraud is occurring generally on its platforms could not have given “actual knowledge of the specific primary wrong” at issue in this case. “In a vacuum, that argument has merit. Under California law, knowledge that something illegal is occurring on a defendant’s platform does not establish that the defendant knew of the particular illegal conduct that injured the plaintiff.” But the court accepted allegations “that when Meta saw the ads in its ad review process, Meta acquired actual knowledge of their fraudulence.”

What about the moderator’s dilemma?  “To be sure, in many cases a defendant could not be charged with actual knowledge of fraud simply because the fraud passed through a routine review process. For that reason, many cases arising in the financial fraud context have required a plaintiff bringing an aiding and abetting claim to show that the defendant had some extra knowledge about the primary fraudster in order to create an inference that the defendant knew of the fraud and passed it through the review process anyways.” But here, “no extra knowledge is required. That is because the advertisements are facially ridiculous.” [This seems like it will create a bit of a problem on the back end of proving classic fraud—where is the reasonable reliance?]

Thus, consider an ad showing “Savita Subramanian, one of Wall Street’s most respected market observers, purporting to offer stock tips in a WhatsApp group,” not through her employer Bank of America but through something called “AI Investment.” “She” touted daily potential returns that were roughly three to four times the average annual return of U.S. equity markets, all for free. “Even a cursory look would warrant suspicion that the ad is fraudulent…. If Plaintiffs succeed in convincing a jury that this ad (and others that are equally preposterous) passed Meta’s ad review process, the jury would be entitled to infer that Meta had actual knowledge of the fraud at the time the ads went out to its users.”

Meta made the obvious point that its ad review is not human review, and that automated systems don’t have the intuitive knowledge that allows this conclusion from a “cursory look.” The court found that response “confounding.” “It was Meta’s decision to use technological review tools to screen ads, and it does not now get to claim it had no idea what was going on because it tasked some software program with doing the first pass.” But … Meta did have no idea what was going on, in the sense of having specific knowledge. This really is a decision to allow general knowledge to count for liability; it penalizes Meta for having automated review instead of no review. Given that human review at Meta scale is … let’s say unlikely … then allowing this generalized knowledge to count is another blow against large online services generally.

The court also found that it was plausible that Meta acquired knowledge that it was aiding and abetting a fraud “well before the ad passed through a review system.” “At the moment a scammer asked Advantage+ Creative to generate an ad using a celebrity, a secret chat room, and the promise of unfathomable riches, there is at least a fact question on whether Meta acquired knowledge that it was aiding and abetting a fraud.” After all, “even routine operations may constitute substantial participation if done with knowledge.”

Breach of contract: Meta didn’t impose a binding contractual obligation on itself to do anything, only a duty on its users not to pollute Meta’s platforms with scam investment ads. For similar reasons, alternative claims for promissory estoppel and for breach of the covenant of good faith and fair dealing failed.

But negligence survived for the reasons above.

California’s Unruh Act provides that all individuals, regardless of race or national origin, shall be “entitled to the full and equal accommodations, advantages, facilities, privileges, or services in all business establishments of every kind whatsoever.” Plaintiffs alleged that Meta’s advertising tools targeted them with ads featuring celebrities and investors who shared the plaintiffs’ race or national origin, making it more likely that they’d engage with the ads and succumb to the scam. “In general, a person suffers discrimination under the [Unruh] Act when the person presents himself or herself to a business with an intent to use its services but encounters an exclusionary policy or practice that prevents him or her from using those services.” Targeting is not exclusion, so there was no violation.

Unjust enrichment also survived.

