Wednesday, April 22, 2026

Bayer can't enjoin J&J's cancer superiority claims by showing methodological disputes

Bayer Healthcare LLC v. Johnson & Johnson, Inc., 2026 WL 1045917, No. 26 Civ. 1479 (DEH) (S.D.N.Y. Apr. 17, 2026)

The court denied Bayer’s request for a preliminary injunction against its competitor J&J’s advertising of a drug used in the treatment of metastatic castration-sensitive prostate cancer. In a presentation and a press release, J&J described a retrospective observational study that purportedly showed a roughly 50% reduction in the risk of death for patients prescribed its drug, apalutamide (ERLEADA), compared to Bayer’s drug, darolutamide (NUBEQA). Bayer alleged severe methodological flaws rendering J&J’s claims literally false or false by necessary implication in violation of the Lanham Act and NY state law.

The court found that Bayer failed to show methodological errors substantial enough to render J&J’s claims literally false or even misleading. Instead, J&J accurately described the results, the methodology, and the study’s limitations.

Super interesting methodological questions (but possibly much more appropriate for doctors to debate than for courts): Bayer argued that studied patients receiving its drug were mostly prescribed it off-label (given the study period); that such patients would generally get an off-label prescription only when patient-specific issues warranted avoiding the on-label options (J&J’s) already on the market; and that J&J’s product’s side effects made it risky for patients with a seizure history, fall and fracture risk, independent treatment with anticoagulants, general frailty, or other comorbidities, whereas Bayer’s product wasn’t associated with those side effects, so the uncertainty of off-label use was justified for those patients. Thus, patients prescribed Bayer’s drug would disproportionately have these other conditions, which were already associated with higher mortality, confounding any association based on the drugs themselves.

Likewise, Bayer offered testimony that its drug was prescribed to patients who were seen as possibly needing chemotherapy at some point, because at least some doctors thought it was the better treatment option for patients receiving chemotherapy. But, Bayer argued, such patients were likely to have more advanced disease or to be otherwise more frail, introducing further bias into the respective study populations.

J&J had responses, including that the off-label prescription of Bayer’s drug was “ubiquitous[],” in Bayer’s own words, at the relevant time; and that patients must have a baseline level of health to receive chemotherapy, so possible chemotherapy was not a sign of significant frailty. J&J also presented testimony that its statistical controls adequately accounted for any potential bias from differences in the treatment cohorts by controlling for age and other comorbidities. “Bayer’s experts admitted that their criticisms regarding the treatment cohorts were essentially hypothetical, because they had no empirical data showing that off-label darolutamide doublet patients were sicker, more frail, or more likely to have non-cancer comorbidities than on-label apalutamide patients.” At this stage, Bayer failed to show that study patients who received its drug were sicker than patients who received J&J’s.

Bayer’s attacks on the control methodology also failed. J&J’s expert testified that the necessary magnitude of an unmeasured confounder “to explain away the [51%] observed difference found in the study” would be “enormous”: to “explain away” the observed difference across cohorts, unmeasured confounders would have to simultaneously make a patient 350% more likely to receive darolutamide and 350% more likely to die. That would be a stronger relationship than that between heart disease and smoking. Bayer didn’t rebut this.
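The opinion doesn’t name the method behind the expert’s figure, but it is consistent with a standard E-value calculation (VanderWeele & Ding) for an observed hazard ratio of 0.49. A minimal back-of-the-envelope sketch, with numbers that are my illustration rather than the expert’s actual analysis:

    import math

    # The E-value estimates how strongly an unmeasured confounder would have to be
    # associated with BOTH treatment choice and death to fully explain away an
    # observed ratio. Illustrative only; not taken from the expert's testimony.
    hr = 0.49                         # observed hazard ratio (apalutamide vs. darolutamide)
    rr = 1 / hr                       # invert a protective ratio before applying the formula
    e_value = rr + math.sqrt(rr * (rr - 1))
    print(round(e_value, 2))          # ~3.5, i.e., roughly the "350%" associations described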

Bayer also criticized the underlying data sources of the study. “For example, in one Bayer study, as many as 40% of patients that initially appeared to be eligible to be included in the study based on [the data source used] were, in reality, ineligible once researchers examined the patients’ underlying charts.” But Bayer had used the same datasets in the same way in its own retrospective studies on multiple occasions. In addition, both the conclusions slide of the PowerPoint and the overview slide of J&J’s presentation acknowledged the possibility of data errors, noting possible “misclassification bias,” “that not all death or treatment data [were] captured,” and that, because “the study used clinical records, some information may be missing or incorrect.”

Nor did Bayer’s attack on the “overall hazard ratio” reported by J&J succeed. “A hazard ratio is generally accepted as the standard method of reporting comparative survival results for oncology studies. The measured ratio here is 0.49, meaning a patient being treated with apalutamide was 0.49x as likely to die during the observed period as a patient receiving darolutamide. Thus, the Study’s top line result stated a 51% reduction in the risk of death between the cohorts, ‘another way of saying the same thing.’” Bayer argued that it was inappropriate to calculate a single hazard ratio over the 24-month study period. “Because a hazard ratio presents a single measurement for the entire period, where outcomes may differ over time, a hazard ratio may over- or understate the likelihood of an event at a given moment.” But this was “a generally-accepted method for reporting retrospective comparative study,” and Bayer had used the same reporting methodology in its own research. Bayer presented no statistical analysis to estimate varying hazard ratios using different time periods.
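For readers outside oncology, the arithmetic behind the headline number is simple; a minimal sketch (the constant-hazard caveat in the comment is my gloss on Bayer’s objection, not language from the opinion):

    hr = 0.49                                               # overall hazard ratio for the 24-month window
    print(f"{(1 - hr):.0%} reduction in the hazard of death")  # prints "51% reduction ..."

    # Bayer's point: a single overall ratio summarizes the whole period and implicitly
    # assumes the relative hazard is roughly constant; if the true ratio varies over
    # time, one number can over- or understate the difference at any given moment.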

So much for the challenges to the study itself. Did J&J’s statements misrepresent the methodology and results? There were no consumer-facing advertisements at issue, but Bayer argued that the press release was picked up by search engines and AI-generated results to answer general public questions, and offered evidence that patients can often influence prescribing decisions.

But J&J’s evidence suggested that only doctors, not patients, were the target audience for the challenged communications. Two treating physicians testified that they were not aware of a single instance of a patient identifying either drug during an appointment, and in this particular context, it was highly unlikely that a patient would be driving a treatment decision.

51% risk of death reduction: Study patients receiving Bayer’s product had a roughly 86% survival rate, while those receiving J&J’s apalutamide had a roughly 92% survival rate, statistics that were disclosed in the overview slide. Bayer argued that the public seeing “92.1 percent for J&J’s product and a ‘51 percent reduction in risk of death’ would plausibly infer that Bayer’s product has a survival rate of approximately 60 percent.” (Why not 46%?) But the failure to include the 86% absolute survival measure didn’t misrepresent the results, and J&J used sufficient disclaimers. “It would be obvious to any medical practitioner that a hazard ratio reflects a relative, rather than absolute, difference.”
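A minimal sketch of the relative-versus-absolute point, using the approximate survival figures quoted above (the simple risk ratio below is my illustration; a true hazard ratio also accounts for timing and censoring):

    surv_apalutamide = 0.921          # reported survival rate, J&J's drug
    surv_darolutamide = 0.86          # approximate reported survival rate, Bayer's drug

    mort_ratio = (1 - surv_apalutamide) / (1 - surv_darolutamide)
    print(round(mort_ratio, 2))       # ~0.56: same ballpark as the 0.49 hazard ratio

    # The feared misreading treats the 51% relative figure as an absolute drop from
    # 92.1% survival, yielding numbers nowhere near the actual ~86%:
    print(round(0.921 * (1 - 0.51), 3))   # ~0.451 if the relative reduction is applied directly
    print(round(0.921 - 0.51, 3))         # ~0.411 if 51 points are simply subtracted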

Bayer also challenged the claim that J&J’s product “reduces” mortality risk rather than merely being “associated with” decreased mortality. This was closer: “associated with a reduction in X” would be a more apt description of the results of a retrospective, observational study like the one here, whereas the causation implied by “reduces” generally can be shown only through a randomized trial. But the word wasn’t literally false for the target audience. “Bayer failed to present any evidence that doctors would not understand the press release’s headline claim in light of the release’s repeated references to the real-world and observational nature of the Study.” And J&J’s witnesses “repeatedly emphasized that doctors would look closely at the underlying study rather than relying just on one word in a headline.”

Bayer also challenged the use of the phrase “through 24 months.” Many patients were “in” the study for only a portion of that time and therefore were tracked for a shorter duration. But “through 24 months” accurately (and literally) describes the period in which patients were included in the study, and there was testimony that a reasonable doctor would recognize that it was impossible that every patient in the study was followed for a full 24-month period. For example, patients died during the period. “Readers familiar with health outcomes studies understand that the stated follow-up period is not universal.”

Bayer also challenged a press release’s statement that the study “replicat[ed] the conditions of a randomized clinical trial.” True, retrospective observational studies are generally inferior to randomized trials. In isolation, this statement could be misleading, but not in full context: disclosure of the underlying methodological approach sufficed, including noting at least 14 times throughout the press release that the study was a “real-world” study rather than a randomized clinical trial. While “no observational study can actually duplicate the effect of a randomized trial,” “the audience of medical professionals to whom the communications were targeted would know that.”

The court also referred to the Second Circuit’s decision in ONY, which held on relevantly similar facts that, “to the extent a speaker or author draws conclusions from non-fraudulent data, based on accurate descriptions of the data and methodology underlying those conclusions, on subjects about which there is legitimate ongoing scientific disagreement, those statements are not grounds for a claim of false advertising under the Lanham Act.”

J&J argued that, under ONY, Bayer had to prove that the study was based on fraudulent or false data, or that J&J had falsely described the underlying methodology, but the court wasn’t quite willing to go that far. ONY dealt with statements made “in a scientific article reporting research results,” and also in “a press release touting [the article’s] conclusions.” Other courts have declined to grant broad immunity to “statements made outside of an academic context.” The court also pointed to a series of opinions standing for the proposition that “statements about a study’s results may still be challenged as false under the Lanham Act if the underlying study can be shown to suffer from severe methodological defects such that the study cannot be said to support the statements in question.”

The court didn’t need to resolve the issue, because Bayer couldn’t win under either standard: fraud or showing that the study compared apples to oranges. (It did comment that ONY involved not just a paper but a press release, and that it wasn’t clear that “the extent of First Amendment protections for statements of scientific research deemed applicable by the Second Circuit in ONY could properly be limited to academic fora.”)

