In re Keurig Green Mountain Single-Serve Coffee Antitrust Litig., No. 14-MD-2542 (VSB), 2025 WL 354671 (S.D.N.Y. Jan. 30, 2025)
This is a ruling on 19 motions to exclude expert testimony
in this mostly-antitrust case; I will focus only on some of the
false advertising-relevant rulings.
Keurig sought to exclude Hal Poret’s testimony, offered primarily
for the purpose of showing that Keurig statements misled consumers into
believing that its 2.0 Brewer worked only with Keurig’s K-Cups, and to provide
an additional basis for one plaintiff’s false advertising damages expert
to rely on when estimating Lanham Act damages resulting from Keurig’s
incompatibility statements.
The court started with a presumption favoring the
admissibility of surveys. Poret didn’t test the exact language Keurig used, but
that wasn’t fatal. Although surveys
“must ‘be designed to examine the impression presented to the consumer,’ ” “there
is no obligation that the survey use the exact language challenged, or mirror
the advertising conditions exactly.” Instead, Poret interviewed consumers about
“what they did and why,” addressing the broader question of why consumers had
not purchased competitors’ single-serve cups. His choice not to show the
allegedly misleading ad campaign didn’t render the entire survey unreliable, since
his methodology was well accepted in the survey field.
It was also not fatal that the survey lacked a control
group. “Control groups are not the universal and inflexible requirement of
survey research as Keurig seeks to portray them.” They’re useful when the
survey is trying to determine the source of attitudes or beliefs or behaviors,
or to “test directly the influence of [a] stimulus” such as a commercial. But “a
control group may not be necessary if the risk of simply recording pre-existing
values is not as great.” For example, “a control group is not required for a
survey that purports only to understand what developers perceive as relatively
more or less important factors in their decision-making process.” That was the
case here.
Likewise, Keurig’s arguments that the questions were biased
and leading were insufficient to affect admissibility. The questions were
closed-ended, but that can be legitimate. Certain respondents were asked to
choose from a list of reasons that they did not purchase unlicensed pods. Some
of these choices favored plaintiff’s position (e.g., “I heard or read that the
Keurig 2.0 brewer works only with Keurig brand or licensed pods”) but some did
not (e.g., “I prefer the taste of Keurig or Keurig-licensed brands.”). “Determining
consumers’ preferences on these kinds of clearly defined alternatives is the
kind of task for which close-ended questions are frequently more appropriate.”
The court also rejected the criticism that the universe was
unrepresentative because Poret “imposed near-equal age distribution within his
sample survey,” creating an underinclusive universe of respondents whose ages
matched neither the population of Keurig users nor the population of the United
States. But the relevant survey population was Keurig 2.0 Brewer owners, so it
was appropriate for the sample to match that population rather than all Keurig
users or the general United States population.
Another plaintiff expert was Sarah Butler, who was offered
to testify both on Keurig’s testing of competitor cups and her own surveys.
Keurig objected to the first, arguing that her “training is in
consumer surveys, not laboratory testing of physical products.” But she was
qualified to opine on whether Keurig’s comparisons between K-Cups and
competitive cups adhered to “specific research standards and methodology.”
Butler was an expert on survey research, market research, sampling, and
statistical analysis. Her evaluation related to research standards and
methodology generally, and not merely to “product testing,” and thus she was
qualified to opine on whether the methodology of a research study allows for
statistically valid conclusions to be drawn. Her non-survey testimony concerned
whether the cup testing conducted by Keurig followed “generally accepted
research standards for comparative product tests—such as objectivity,
sufficient sample size, use of control groups where appropriate, and testing
protocols—necessary for reliable statistical analyses,” and therefore aligned
with her experience, training, and expertise.
As for her surveys—one of home users and one of out-of-home
users, such as office users—the court also allowed them. For the home users, Butler
made adjustments to the control group to ensure that “respondents who had
previously been exposed to Keurig’s false advertising campaign were controlled
for.” In devising her survey, Butler noted her concern that “the rates in the
Control group may be driven by past exposures to statements made by Keurig
about the unreliability of Competitive Cups.” “Mitigating the impact of
preexisting beliefs on survey feedback is a sound objective in survey research.
Indeed, failure to control for the impact of preexisting beliefs can render a
survey unreliable.” Where a control group without preexisting beliefs is
unavailable, “social scientists sometimes employ statistical weights or
adjustments to the control groups. … Given the threat that preexisting views
pose to survey validity and the broad use of far more intensive methods of
control group weighting in modern econometric methods, I do not agree with
Keurig that there is no scientific justification for Ms. Butler’s modification
of the control group.”
For the out-of-home group, Keurig argued that there was no
control group at all, but that survey targeted “individuals responsible for
beverage supplies or contracts with beverage suppliers for their office or
business location to evaluate the impact of Keurig’s relationships with
Distributors[ ] on purchasing behaviors in the Away-From-Home Market.” Thus, it
didn’t seek to test the impact of a particular stimulus or statement on these
individuals, and a control group wasn’t as necessary.
Keurig also objected to questions in the first survey that
it argued created a false dichotomy between “licensed” and “unlicensed” pods as
well as the use of words like “unapproved” that it deemed biased, along with stronger
warranty language than Keurig itself used. Keurig’s rebuttal expert conducted a
survey along its proposed lines which yielded substantially different results.
The court disagreed. Keurig’s own materials used
“unapproved,” so it was fair to ask consumers about that, and the other
questions didn’t suggest answers in an impermissibly leading way. Although
there were differences between “affecting” a warranty and “voiding” a warranty,
“none are so strong that exclusion of the survey is warranted or that it
becomes more likely than not that Ms. Butler’s opinion is unreliable. Although
Keurig’s survey produced different results, this is to be expected—surveys
conducted in different ways produce different results.” Cross-examination was
the remedy.
The court also rejected sample-based criticisms of the
second survey, noting that samples don’t have to be perfect.
Keurig also criticized the use of “recall-based measures” (i.e., questions about past purchasing decisions) in the first survey, on the grounds that “[i]t is widely recognized that recall-based measures do not yield reliable responses.” “[R]ecall bias, which recognizes the potential for inaccurate responses due to fading memories over time,” is a known issue with survey reliability. But that went to weight rather than admissibility. And asking about aggregate decisions over the long term is less problematic than asking about very specific details. Likewise, adding an “I don’t know” option can mitigate the problem, and Butler’s survey did so.