Saturday, April 13, 2013

Trademark Scholars' Roundtable, part 3

Session 3: Registration: Research Directions

Jeremy Sheff: How does registration work?  Legal dimension and institutional dimension.

Legally, what are the determinants of registrant outcomes? Applicant characteristics; subject matter characteristics; examiner characteristics; procedural characteristics.  Empirical analysis can reveal doctrinal lessons.  For institutional aspects, empirical methods may be useful, but they aren't the only useful approach. We might look at the PTO and its sister institutions around the world as administrative agencies, using legal analysis and political science. As lawyers, we want to know what factors influence agency behavior. Could be data- or theory-driven.  Are institutions talking to each other?

PTO dataset is huge; goes back to 1870. Depth of data changes over time—very few variables at the start, with more added as time goes on. That makes conclusions/historical comparisons difficult, especially since we have data only for successful applications prior to 1982.  And even marks that were registered but later cancelled before some date may not be there.

Data are fairly useful for procedural determinants of registration—was an office action issued? Opposition filed? Statement of use filed? Renewal filed? The dataset lacks information on grounds for refusal to register; this exists for more recent refusals, but unless it went to the TTAB the grounds for an examiner’s office action won’t be apparent. You can, however, identify the case file numbers and then look at the examiner’s statement, which is much more time-consuming.

Can do gross analysis, as Beebe & PTO economists have done. Useful information there, including renewals etc. But also a roadmap to identify subpopulations of interest.  There are fewer than 500 nonvisual marks in the PTO data, so if you wanted to know the history of nonvisual marks in the US, you could look through each case file as you would with judicial opinions.

Could do the same thing with samples, not just subpopulations.  He’s eager to do things like look at dilution’s effect, if any.
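A minimal sketch of that kind of subpopulation pull, assuming a hypothetical flat extract of the PTO case files; the file name, the columns (serial_no, mark_drawing_code, filing_date), and the use of drawing code "6" (no drawing submitted) as a proxy for nonvisual marks are all assumptions to check against the actual codebook:

import pandas as pd

# Hypothetical flat extract of the PTO case-file data.
cases = pd.read_csv("tm_case_files.csv", dtype=str)

# Assumed coding: drawing code "6" = no drawing submitted (sound, scent, etc.).
nonvisual = cases[cases["mark_drawing_code"] == "6"]
print(len(nonvisual), "candidate nonvisual applications")

# Export serial numbers so the underlying prosecution files can be read one by one,
# much as one would read through a set of judicial opinions.
nonvisual[["serial_no", "filing_date"]].to_csv("nonvisual_sample.csv", index=False)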

Barton Beebe: Many things in the data. Can look at examiner publication numbers—how many applications each examiner handles, and the rate at which their applications go to publication.  There are many examiners whose rates are between .6 and .8, and then some people at the very extremes—some who have reviewed 1300 applications and published every one.  He has their names, and you can get them too.  You can see which applications these examiners reviewed. There are also examiners with a .03 publication rate, but only a very few at that edge. Maybe these are the people to whom troll files or smell marks are sent.  He suspects something’s going on with the data.

One examiner has reviewed over 27,000 applications over a career. But publication rates are generally closely bunched, which may lead one to believe there’s a standard/rule of thumb of passing 3 out of 4, given the overall publication rate of 76%.
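A minimal sketch of the per-examiner publication-rate calculation Beebe describes, assuming a hypothetical extract with one row per application and columns examiner and published (1 if the application went to publication, 0 otherwise):

import pandas as pd

apps = pd.read_csv("tm_case_files.csv")  # hypothetical extract

# Publication rate and caseload per examiner.
per_examiner = (
    apps.groupby("examiner")["published"]
        .agg(applications="count", pub_rate="mean")
        .sort_values("pub_rate")
)
print(per_examiner["pub_rate"].describe())  # most rates bunch around the ~.76 overall rate

# Flag the extremes worth pulling case files for: large caseloads with rates near 0 or 1.
outliers = per_examiner[
    (per_examiner["applications"] > 1000)
    & ((per_examiner["pub_rate"] < 0.05) | (per_examiner["pub_rate"] > 0.99))
]
print(outliers)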

Deborah Gerhardt & Jon McClanahan on whether TM lawyers matter: success rates of attorney-filed v. pro se.  Overall publication/success rate is very high, but registration rate drops a lot for ITUs; lots of statements of use aren’t filed.  These data are more recent.  Legal counsel: 82% went to publication. 60% for pro se applicants.  No regression—may be other variables (per question by Bob Bone)—shows both the promise and the limitation of the data.  Final registration rate is 60 v. 40%. 

Number of filings made by attorneys v. pro se: you can see that experience increases success over time, even for pro se applicants. Who are these pro se applicants who are filing more than 30 applications?  They may be businesses; the presence of a name in the “attorney of record” field was the only way to sort.  Twentieth Century Fox Film Corp. and Hasbro, along with many others, count as pro se applicants because if an application is filed by in-house counsel there’s no attorney of record in that field.
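A minimal sketch of that comparison, assuming hypothetical columns attorney_of_record (blank when no attorney is listed), published, and registered; per the discussion above, in-house-filed applications with no attorney of record would be miscoded as pro se, and these raw rates control for nothing:

import pandas as pd

apps = pd.read_csv("tm_case_files.csv", dtype={"attorney_of_record": str})

# An application counts as "represented" only if someone appears in the attorney field.
apps["represented"] = apps["attorney_of_record"].fillna("").str.strip() != ""

# Raw publication and registration rates by representation status;
# a regression would be needed to account for other variables (Bone's point).
print(apps.groupby("represented")[["published", "registered"]].mean())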

McKenna: makes the findings all the more startling if these entities are “pro se.”

Beebe: impact of office actions: for applicants who hit that roadblock, representation makes a big difference in overcoming the office action—45% v. 72%.

Dogan: these may be top 20 “pro se” filers—but how many are there really?  How representative are Fox and Hasbro?

Leaffer: pro se applicants tend to be really bad at doing trademark searches.

Beebe: this might be a place where you have to look beyond the dataset and at the files.

Nontextual applications—whether the mark is in block capitals or stylized text v. no words at all.  Purely visual marks spike up over time—at the peak of the internet boom, almost 9000.  The addition of ITUs messes up the data, but without ITUs there’s a relatively steady acceptance rate for publication.
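A minimal sketch of that time series, again with assumed column names and codings (mark_drawing_code "2" for a design without text, filing_basis "1(b)" for ITU):

import pandas as pd

apps = pd.read_csv("tm_case_files.csv", parse_dates=["filing_date"])
apps["year"] = apps["filing_date"].dt.year

# Purely visual marks per filing year (assumed coding for design-only marks).
visual = apps[apps["mark_drawing_code"].astype(str) == "2"]
print(visual.groupby("year").size())

# Publication rate over time with intent-to-use filings dropped.
non_itu = visual[visual["filing_basis"] != "1(b)"]
print(non_itu.groupby("year")["published"].mean())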

1982 is the start of reliable data. (Litman says you can’t get the case files that far back.)  The data are big and come in chunks that are not completely compatible. Must sample to learn about grounds for refusal, which aren’t in the dataset.

ITU publication rates are similar to use-based, but registration rates are much lower. 18% of overall applications proceed directly to publication, with higher rates for use-based and lower for ITUs.

Kur: OHIM data go back only to 1996, when OHIM started operating. Should have pretty complete information, except that you won’t get names—not of examiners and not of the firms that filed!  Privacy law prevents that. Demand for Community Trade Marks is increasing.  Time to process is decreasing. It seems that acceptance rates are decreasing but still very high.  OHIM data don’t give you much life cycle information.  Need more information about cluttering.  Some data: applicants try to register many marks at the same time.  Applicants have increasingly large portfolios; acceptance rates grew over time with experience.  Oppositions have fallen significantly. Larger applicants are more likely to register and to face opposition.

Majority of attorneys said there was a cluttering problem—marks registered that are not in use—while proprietors said there wasn’t a problem.

Beebe: Look at the register; now get a sense that every word has been trademarked. A, AA, AAA, and so on.  But of course that doesn’t get to product categories.

Goldman: but domain names collapse product categories.

McGeveran: Stephen Carter made the point that Landes & Posner are only right if there’s an inexhaustible supply of marks, which there probably isn’t.

Goldman: another thought for research is to compare domain names to TMs.

Beebe: other studies—global Southern TM offices essentially just take in crossborder flows from the global North, whereas global North TM offices receive very few crossborder registrations from the global South.

Dogan: how were files allocated to examiners?

Leaffer: in the 1970s, examiners were given International Classes.  Some were more “serious” than others. 

Dogan: other questions involve the rate of registration for designs after Wal-Mart—when there was no evidence of secondary meaning; could people change their filing practices with evidence of secondary meaning?  But the dip in acceptances followed Wal-Mart (Litman says the SCt might be following instead of leading, but that strikes me as unlikely in this case; the discourse around Wal-Mart was all about refining the test for inherent distinctiveness for product design, along the lines of the test the PTO used, not about rejecting it).

McKenna: hard to imagine an event study showing change from a court case (though some discussion of effects on the TMEP)—because the institution seems likely to resist change.

Burrell: has seen court cases produce change in office practice—example of wine and beer being held to be similar goods. Until it’s in the manual, though, submitting the case to the examiner is not very helpful.

Sheff: could test versions of TMEP; the old ones are still available.

Goldman: ideology of statistics. People listen to percentages of applications approved in a way they don’t listen to discussions of trademark use. Realpolitik: statistics can steer the conversation in ways more powerful than arguments derived from theory. Whether they shine a light on the true issues may not matter as much. 

On privacy interests of applicants: thinks there are none.

McKenna: litigation’s effect: might be hard to see just because you can’t tease apart effects on the parties’ own behavior in submitting applications.

Generally discussed: Many other possible uses in scholarship now that the data are available.  Slicing and dicing: trade dress; cancellations that are later revived with new applications; etc.
