I will try not to repeat Eric Goldman’s trenchant analysis; our aggravation is shared. A few points I want to highlight:
(1) Implicitly acknowledging the need to reconceptualize the multifactor test in modern infringement cases, the court explicitly endorses the idea that courts are authorized to pick certain factors as dispositive and ignore others, depending on the situation. I think such a reconceptualization is the big project of our time for trademark theorists, comparable in its way to coming up with a theory that allowed courts to find infringement by noncompetitors. (We tried trademark use as a way to hive off categories of uses from the multifactor test en masse; that didn’t pan out, though various First Amendment-inflected theories are doing similar work for noncommercial speech. Courts have begun to understand that the internet isn’t what they first thought it was, and have declared various factors unimportant in various internet contexts, as the 10th Circuit does here. But what we need is a Pam Samuelson-like taxonomy that tells everyone what to do in the next case of innovation, instead of the current state of affairs: one set of rules for domain names, another for banner ads, and another for keywords.)
(2) Eric describes the court’s holding as being that clickthroughs are a proxy for a confusion survey; I’d put it a bit differently. The court says that the theory of initial interest confusion (IIC) is that consumers seeking 1-800 (which we know, the court says, because they searched for the term) clicked on a Lens.com ad while believing it was a 1-800 site and, though no longer confused when they arrived at a Lens.com site, were nonetheless diverted. The court acknowledges that we have no idea how many were confused when they clicked and how many were not confused but rather seeking a possible alternative to 1-800, but says that we do know the upper bound of the former number: the total number of clickthroughs, which was a tiny fraction of the impressions. Too tiny, indeed, to count as likely confusion even if every clickthrough was the result of IIC. (As Eric points out, clickthrough rates are always very low; how a similarly low clickthrough rate could then support a possible finding of contributory infringement when an affiliate used 1-800’s mark in its ad text is left as an exercise for the reader.)
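The court’s upper-bound move is simple arithmetic, and it may help to see it with numbers. A minimal sketch, using hypothetical figures I’ve made up for illustration (they are not from the opinion):

```python
# Illustrative sketch of the court's upper-bound reasoning.
# All numbers are hypothetical, not taken from the opinion.

impressions = 1_000_000   # times an ad was shown to consumers searching "1-800"
clickthroughs = 15_000    # consumers who actually clicked the ad

# Worst case for the defendant: assume EVERY clickthrough reflected a
# consumer who believed the ad led to a 1-800 site.
max_confusion_rate = clickthroughs / impressions

print(f"Upper bound on confusion rate: {max_confusion_rate:.1%}")
```

Even on that defendant-worst-case assumption, the confusion rate among consumers exposed to the ads cannot exceed the clickthrough rate, which is why a tiny clickthrough rate doomed the direct-confusion theory.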
I teach trademark law, and therefore I’m required to despise IIC (except for true bait and switch in the physical world), and indeed I do. But note one way in which clickthrough evidence differs from survey evidence: clickthroughs come from people who didn’t just see the ad, but were interested enough to at least evaluate the advertiser against all their alternatives. This is a higher level of engagement than is usually required of survey participants, who must be likely/potential consumers and are usually just asked to examine stimuli (sometimes including a distractor ad) as if they were considering a purchase. Now, maybe this just shows that surveys are inherently artificial and distorting (and we should probably require more confusion than we do when the evidence is survey-based), but it’s interesting to me that the court is implicitly narrowing the universe of relevant consumers past what surveys do while simultaneously applying survey standards to the evidence in hand, and I don’t think it notices that it’s done so. Big Data in action, changing what we can measure and therefore what we think is relevant?
(3) This case will also be cited for its discussion of the percentage of confusion in a survey that can support a finding of confusion. Diving into detail on the case law (including the early outlier of Grotrian-Steinweg), the court concludes that really good surveys showing net confusion of more than 7% can, in combination with other factors favoring the plaintiff, support a finding of likely confusion, but that 7% is generally too low without other evidence of confusion. Along the way the court notes that the import of older cases accepting surveys without controls is unclear: those prior findings based on X percent confusion were really X minus Y, where Y was unknown then and is now unknowable. 1-800 argued that those old cases favored it, because they showed that what must in fact have been even lower percentages of confusion could favor plaintiffs, but the court wasn’t going to accept that. Now that we require controls, the factual predicates of the old cases accepting what would now be Daubert-excluded surveys no longer make sense. Arguably we shouldn’t look to them for percentages, either, since they were mistaken about whether the surveys were reliable in the first place.
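The X-minus-Y point is likewise just subtraction, but it is worth making concrete. A minimal sketch with hypothetical percentages (not from any survey in the case): a survey’s gross confusion figure overstates actionable confusion by whatever rate the control cell produces, and old no-control surveys reported only the gross number.

```python
# Illustrative net-confusion arithmetic; all percentages are hypothetical.

gross_confusion = 0.22    # share of test-cell respondents reporting confusion
control_confusion = 0.16  # share "confused" by a control stimulus, i.e., noise

# What modern methodology credits: gross minus control.
net_confusion = gross_confusion - control_confusion

# A survey without a control would have reported 22%; the court's point is
# that the comparable modern figure would be only 6%, and for the old cases
# the control number (Y) is simply unknowable.
print(f"Gross: {gross_confusion:.0%}, net: {net_confusion:.0%}")
```

This is why 1-800’s reliance on the old cases backfired: if those plaintiffs won on gross figures, their true net figures were lower still, but no one can now say by how much.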