At a post-argument discussion, the EFF's lawyer made some important points: (1) In the past, Booking.com argued that "booking" was legally identical to booking.com for purposes of tacking. It may have gotten religion on the narrowness of its claimed mark now, but the lateness of its conversion is a bad sign for the future--and for what other claimants to .com marks may assert, especially in contexts like C&D letters that are harder to regulate. (2) The Freecycle Network, after losing a genericity battle over "freecycle," apparently obtained a trademark registration for Freecycle.org and is now using that registration to argue that Facebook groups--which use only the "freecycle" part, not the .org--are infringing. So the idea that the matter after the dot will serve as some sort of constraint is unlikely to hold.
I still can't get comfortable with Booking.com's position that nothing is definitively unregistrable--applicants would always get to argue that the market has changed. And the fact that the PTO doesn't run its own surveys (not to mention the Federal Circuit's pro-applicant interpretations) means the PTO is structurally disadvantaged in dealing with survey evidence.
Speaking of surveys, I appreciated the government's focus on the "washingmachine.com" example in the survey. This was a group of respondents who'd been trained on the generic/nongeneric distinction; they'd successfully distinguished Kellogg from cereal in the screening question, and none of them got supermarket wrong--no one was even unsure about its status. Yet even among this trained group, 33% thought washingmachine.com was a trademark, and another 6.3% weren't sure. When over a third of the qualified survey participants give an answer that all the lawyers agree is wrong, the survey is not asking ordinary people a question they are in a good position to answer: surveys may simply be the wrong form of evidence.
No problem, Booking.com says: just remove those nearly 40% of respondents from the analysis, and it still shows high secondary meaning among the remainder. I see several problems with that approach: (1) Those people don't disappear from the market. Ordinarily, removing a small percentage of people who flunk a survey's integrity checks (whether out of deliberate choice or misunderstanding) doesn't substantially change the population of interest. Here, it pretty clearly does. It's like saying "sixty percent of geometry students got this question right, therefore more than half of math students got it right"--see the toy calculation below. (2) Relatedly, these people were deemed qualified by the training/screening questions that supposedly assessed understanding of the relevant distinction; to remove them now smacks of result-oriented manipulation. (3) The justification given in the surveyors' amicus brief for doing this is that a genericity evaluation is noncausal, so no control is necessary. I don't get that. We are interested in whether consumers' response "this is a trademark" is triggered by actual secondary meaning or by conflation of .com with trademark status, which seems pretty causal to me.
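To make the arithmetic behind (1) concrete, here's a toy calculation built on the geometry/math analogy; the 1,000/600 split between math and geometry students is invented for illustration, and only the 60% figure comes from the analogy itself:

```python
# Toy numbers: only the 60% comes from the analogy; the 1,000/600 split is assumed.
math_students = 1000
geometry_students = 600                      # the subgroup that was actually measured
correct = round(geometry_students * 0.60)    # "sixty percent of geometry students got it right"

print(correct / geometry_students)                                    # 0.60: the subgroup rate
print(correct / math_students)                                        # 0.36: if nobody outside the subgroup got it right
print((correct + math_students - geometry_students) / math_students)  # 0.76: if everybody outside it did

# The full-population rate could be anywhere from 36% to 76%; the 60% subgroup
# figure doesn't establish "more than half" on its own.  Dropping the ~39% of
# survey respondents who conflated .com with trademark status opens the same
# gap between the measured subgroup and the consuming public.
```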
That being said, I really do wonder what a screening question that used washingmachine.com/Amazon.com instead would have done to the results. And I wonder: if it produced a high rate of initial disqualifications, how should we think about that fact? Trademark law is not a purely empirical endeavor!