Friday, January 21, 2022

pleading falsity when ads use peer-reviewed scientific study

Guardant Health, Inc. v. Natera, Inc., 2022 WL 162706, No. 21-cv-04062-EMC (N.D. Cal. Jan. 18, 2022)

Guardant sued its competitor Natera over an alleged “campaign of false and misleading advertising directed at” Guardant’s new product Reveal, a liquid biopsy cancer assay for early-stage colorectal cancer (CRC). Natera then filed amended counterclaims alleging a “campaign of false and misleading commercial statements regarding the performance of [Reveal].”

Apparently, a “detailed factual background” can be found in the court’s sealed order denying Natera’s motion for a preliminary injunction, but you and I can’t know it.

The parties offer competing diagnostic tools for CRC—Guardant’s “tumor-naïve” Reveal and Natera’s “tumor-dependent” Signatera assay. Guardant bases its contentions that Reveal works on “[p]eer reviewed data published by Parikh, et al., in the journal of Cancer Research.” Thirty-eight of the 43 authors who undertook the study are affiliated with Massachusetts General Hospital and the remaining five authors are Guardant personnel.

The Parikh Study evaluated whether tests such as Reveal can detect circulating tumor DNA “with clinically meaningful specificity and sensitivity.” (Sensitivity is the true-positive rate, i.e., the share of patients who actually recur that the test flags; specificity is the true-negative rate, the share of patients who don’t recur that the test correctly clears.) The Study allegedly “shows that Reveal offers 91% recurrence sensitivity (i.e., ability to identify which patients will recur based on ctDNA detection) and 100% positive predictive value for recurrence (i.e., all patients Reveal identified as having a ‘positive’ ctDNA test result later recurred).” Of 27 patients who recurred and were counted, Reveal detected ctDNA in 15 of them, resulting in calculated sensitivity of 55.6% and specificity of 100%. After “incorporating serial longitudinal samples” the sensitivity for recurrence prediction improved to 69% and after incorporating “surveillance” samples the sensitivity improved to 91%.
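For readers keeping score, these metrics are simple ratios over confusion-matrix counts. A minimal sketch using the post’s 15-of-27 figure; the true-negative count of 30 below is an illustrative assumption, not a number from the study:

```python
def sensitivity(tp, fn):
    """True-positive rate: share of patients who recurred that the test flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of patients who didn't recur that the test cleared."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Positive predictive value: share of positive calls that proved correct."""
    return tp / (tp + fp)

# Reveal detected ctDNA in 15 of 27 recurring patients (12 missed):
print(round(sensitivity(15, 12) * 100, 1))  # 55.6

# 100% specificity and 100% PPV both correspond to zero false positives.
# (The 30 true negatives here are an assumed count for illustration.)
print(specificity(30, 0))  # 1.0
print(ppv(15, 0))          # 1.0
```

This also shows why the 100% PPV claim is a statement about false positives only; it says nothing about the recurrences the test missed.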

Natera challenged an email from Guardant’s sales team to physicians around the country that said:

“Reveal has higher specificity than CEA [carcinoembryonic antigen tests, which are the current standard of care] in the surveillance setting;

Reveal has a 91% sensitivity in the surveillance setting;

Reveal’s PPV [positive predictive value] is 100% and can have benefits in patients with stage 2 colorectal cancer, including identifying patients who may benefit most from adjuvant therapy;

and Reveal has a greater lead time for detecting MRD [minimal/molecular residual disease] than current methods.”

The court denied a TRO because it was not clear that Guardant’s statements were literally false.

Since these were “clinically proven” claims, they could be shown literally false either by “attacking the validity of the defendant’s tests directly or by showing that the defendant’s tests are contradicted or unsupported by other scientific tests.” If the plaintiff can show that the tests, even if reliable, do not establish the proposition asserted by the defendant, “the plaintiff has obviously met its burden” of demonstrating literal falsity.

However, 9th Circuit precedent doesn’t directly address “whether the test for falsity is altered where the challenged statements relate to a scientific peer-reviewed study.” In ONY v. Cornerstone, the Second Circuit held that there was what this court described as “a safe harbor” for “conclusions from non-fraudulent data, based on accurate descriptions of the data and methodology underlying those conclusions, [and] on subjects about which there is legitimate ongoing scientific disagreement,” holding that these kinds of “statements are not grounds for a claim of false advertising under the Lanham Act,” where they were “presented in publications directed to the relevant scientific community, ideally in peer-reviewed academic journals that warrant that research approved for publication demonstrates at least some degree of basic scientific competence.” Scientists, not courts, should decide such disputes. But ONY didn’t extend the safe harbor to situations where the study at issue was “fabricated” or “fraudulently created” because if “the data were falsified, the fraud would not be easily detectable by even the most informed members of the relevant scientific community.”

The 5th Circuit, for its part, declined to apply ONY in situations “where the challenged statements are directed at customers instead of the scientific community.” As the court noted, “Advertisements do not become immune from Lanham Act scrutiny simply because their claims are open to scientific or public debate. Otherwise, the Lanham Act would hardly ever be enforceable ....”

The 9th Circuit hasn’t embraced the “deferential” ONY approach or the 5th Circuit’s gloss, but apparently the court here did rely on ONY in denying a preliminary injunction, though we don’t know its full reasoning. It apparently held that there were “compelling reasons to conclude that claims based on the validity of the Parikh Study—or any other peer-reviewed, non-fraudulent scientific study—are likely ‘non-actionable’ in the context of false advertising.” Still, the standard 9th Circuit approach was relevant at the pleading stage, where Natera didn’t have to show that the challenged statements were literally false, only that it was plausible that they were.

Natera alleged that the Parikh Study was based on fraudulent data and inaccurate descriptions of the data and methodology. The claims were plausible under either ONY or the [governing?] 9th Circuit Southland Sod approach.

First, Natera successfully alleged that several statements were literally false because they were unsupported by the Parikh Study. [Details omitted, but Natera successfully alleged, among other things, that Guardant was using differing definitions of various key terms and mixing and matching results in unacceptable ways.]

It was also plausible that Guardant’s marketing statements falsely and misleadingly touted benefits of Reveal for “early-stage” CRC patients because the Parikh Study included at least 19% late-stage patients and did not make any conclusions specific to “early-stage” cancer patients. Although these claims didn’t cite the Parikh Study, it was the necessarily implied source because it was the “only possible source of such comparisons.” Guardant argued that a claim about Reveal does not necessarily have to be based on the Parikh Study because “[d]rug, device, and testing companies often rely on in-house testing and data-on-file.” But the Parikh Study was the only published study on Reveal, making it plausible that Guardant necessarily implied reliance on it. And it didn’t support those claims. For example, its claims about early-stage patients plausibly wrongly conflated early detection of recurrence after treatment with early-stage cancer.

Separately, Natera successfully pled that the study’s data and methodology themselves were fraudulent. It alleged that (1) the Parikh Study said it looked only at “patients with evaluable ‘surveillance’ draws, defined as a draw obtained within four months of clinical recurrence” but it included patients with draws outside of four months to improperly boost Reveal’s performance (its 4-month cutoff was applied only to false negatives, not to true positives, meaning that 7 of 9 false negatives were excluded and raising sensitivity from 69% to 91%); (2) it said that “ctDNA analysis was performed blinded to the clinical data” but Guardant’s internal documents allegedly showed that Guardant performed ctDNA analysis unblinded to the clinical data; and (3) it said that it was a “single-institution prospective study” but Guardant’s internal documents allegedly showed that Parikh provided samples for analysis by Guardant after the fact and that Guardant retrospectively conducted ctDNA analysis. This satisfied both ONY and Southland Sod. “Where false advertising claims allege that the study’s conclusions are based on inaccurate descriptions of the data and methodology, the claims can be grounds for a claim under the Lanham Act.” That would be less plausible if the study disclosed its potential shortcomings, per ONY.
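The arithmetic behind allegation (1) can be sketched directly. The post doesn’t give the underlying patient counts, so the true-positive count of 20 below is an assumption chosen to match the alleged 69% figure; the point is only how dropping 7 of 9 false negatives from the denominator moves sensitivity to roughly 91%:

```python
def sensitivity(tp, fn):
    """True-positive rate: tp / (tp + fn)."""
    return tp / (tp + fn)

# Assumed counts consistent with the alleged percentages (not from the study):
TP, FN = 20, 9

before = sensitivity(TP, FN)       # ~69% with all 9 false negatives counted
after = sensitivity(TP, FN - 7)    # cutoff allegedly excluded 7 of 9 false negatives

print(round(before * 100))  # 69
print(round(after * 100))   # 91
```

Excluding false negatives shrinks the denominator without touching the numerator, which is why applying a cutoff to only one side of the ledger inflates the metric.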

Guardant argued that the study’s practices on point (1) were acceptable, but the problem was that the study didn’t disclose this methodology, creating a factual issue of deception. There were also problems with the study saying that it borrowed its “surveillance” methodology from a prior paper that in fact used a different definition. Guardant argued that even if the study misconstrued the prior study, mistake isn’t the same as fraud. “But the issue here is not whether Guardant made a mistake but whether the Parikh Study improperly failed to disclose its interpretation of the Reinert study’s methodology.”

And Natera sufficiently pled that the alleged falsehoods in the Parikh Study were attributable to Guardant. [It’s not clear to me why this would be required. If the study is garbage, then it doesn’t support the claims that Guardant made, which is all that’s required; knowledge of falsity is not an element of Lanham Act false advertising.]

The discussion of blinding is heavily redacted but, we are told, Natera sufficiently alleged that the Parikh Study fraudulently described its methodology as a blinded analysis when in fact Guardant used unblinded data and modified results to improve Reveal’s performance. So too with the description of the study as “prospective” even though Guardant allegedly manipulated methodology and data post hoc. Guardant’s dispute over what “prospective” means was a factual one.
