Medisim sued BestMed for patent and copyright infringement,
unfair competition, false designation of origin, false advertising, deceptive acts and practices, and unjust enrichment. Basically, BestMed replaced Medisim as the
supplier of house-branded digital thermometers to Rite Aid. I am ignoring the utility patent aspects of
the case. This opinion dealt with
motions to exclude various kinds of expert testimony, and the plaintiff’s
consumer survey did not survive Daubert.
Warren Keegan, a professor of marketing with an extensive
resume, conducted an internet survey in which each respondent (who’d been
screened to be likely chain drugstore customers and users of digital
thermometers) was shown a picture of a product and directed to “take as much
time to look at [it] as you would if you were considering purchasing it.” The test cell saw pictures of Medisim and
BestMed thermometers in Rite Aid packaging, while the control cell saw the Medisim thermometer and a third-party brand thermometer digitally altered to look as if it were in Rite Aid packaging.
Respondents were asked if they thought the two products were
manufactured by the same company, or by different companies; then if they
thought the two products were manufactured by companies that were affiliated,
connected, or associated with one another.
A yes on either was coded as likely confusion. Eighty-three percent of test cell respondents
and 52% of control cell respondents showed likely confusion, for a net level of
31%.
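For concreteness, here is a minimal sketch (in Python, purely illustrative; the function name and the cell sizes of 100 are my own assumptions, not figures from the opinion) of how a net confusion number like Keegan’s is derived from the two cells:

def net_confusion(test_confused, test_total, control_confused, control_total):
    # A respondent counts as "confused" if she answered yes to either the
    # same-company question or the affiliation question.
    test_rate = test_confused / test_total
    control_rate = control_confused / control_total
    # Net confusion subtracts the control-cell rate (the background "noise"
    # the control is meant to capture) from the test-cell rate.
    return test_rate - control_rate

# Using the percentages reported in the opinion, with hypothetical cells of 100:
print(f"{net_confusion(83, 100, 52, 100):.0%}")  # 31%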
Questions about survey reliability generally go to weight
rather than admissibility, but surveys can be excluded entirely under Rule 702
when they’re invalid or unreliable, and/or under Rule 403 when they’re likely
to be insufficiently probative, unfairly prejudicial, misleading, confusing, or
a waste of time.
BestMed argued that Keegan used the wrong universe and an
improper control product. In addition,
BestMed argued that Keegan biased the respondent pool by telling them that
“there is often a relationship between a retail store and its source
manufacturers,” while Medisim argued that this instruction merely “correct[ed]
for the possibility that some respondents might have been unaware of the
potential relationship between the retailer and manufacturer.” The use of a control group is the “gold
standard” for dealing with such pre-existing beliefs (or ignorance, I
guess). “A carefully crafted instruction
may have a similar effect, albeit in a more subjective way.” But where the instruction is given to both the control and test cells, its effect will wash out if the control works appropriately, so the court wasn’t convinced
that the instruction created improper bias.
The court agreed that the respondent universe was
improper. Point-of-sale confusion should
be examined by surveying potential purchasers.
Screening questions that ensured that respondents were likely to shop at
stores that sold the parties’ thermometers were not enough to ensure that they
were likely purchasers. I have sympathy
for Medisim’s argument that “a digital thermometer is not a major planned
purchase ... for which a survey could easily locate individuals who are ‘in the
market,’” so shoppers who were also users were a good proxy. But the court held that “a party may not
simply excuse itself from surveying the relevant universe of respondents
because it is difficult to assemble an appropriate sample of that population.” In addition, repeat purchases of digital
thermometers are relatively infrequent.
(Which also seems to doom secondary meaning for the trade dress; who
would ever learn which was which?) Thus,
the logical assumption is that current/recent users of digital thermometers are
unlikely to buy another within a reasonable timeframe. Without knowing when they bought their
devices, there was no way to tell if they were similar to true potential purchasers. Keegan suggested that shopping at the
relevant stores provided familiarity with the products, but such stores carry
thousands of unrelated products. “Amidst
this deluge, there is no basis to equate the knowledge of a person admittedly
not shopping for a given product with that of a potential purchaser.” The respondent universe was a “crucial step”
in a survey, because even if the proper questions are asked, the results are
likely to be irrelevant when they’re asked of the wrong group.
In addition, the control design was flawed. BestMed identified two problems. First, the control didn’t really exist in the
marketplace, and had few similarities to either party’s product. Second, because Keegan didn’t specify which
features of the Medisim packaging he was testing (that is, what was
protectable), it was impossible to tell what generated the reported confusion. A party can seek protection for a product’s
overall look, but it still has to articulate which specific elements constitute
its trade dress; this is necessary to avoid de facto protection for legally unprotectable
styles, themes, or ideas. The court
focused on the second problem: Keegan didn’t explain which elements of the
packaging (or product) were protectable trade dress, and Medisim just said the
control shared “certain characteristics” with the test product. The court found these failures “deeply
troubling, and indicative of a serious flaw in the design of Keegan’s survey. Furthermore, I am unable to determine whether Keegan’s control was appropriate without understanding the scope of the claimed protection.”
Neither flaw on its own would justify excluding the report, but because each went to a fundamental element of the survey, the combined impact was “too significant to overlook under Daubert and Rule 702.”