Tuesday, June 16, 2015

What Is the Academy’s Role in Evidence-based Policy Making for Intellectual Property?

Hoover Institution & USPTO
 
Welcome:        Shira Perlmutter, US Patent and Trademark Office
Initial Edison Scholars to help with evidence-based policymaking—Peter Menell (claim construction) and Jay Thomas.  Jay Kesan: harmonization and cross-country comparisons of patent examination.  In 2013, Congress and the White House became interested in patent reform, and the PTO expanded the Edison program to study specific issues bearing on those topics—patent litigation and potential abuse. Jonas Anderson: classifying and evaluating claim terms; Joseph Bailey—refining prior art search through machine learning.  Deepak Hegde, to follow. We also brought in Graeme Dinwoodie to look at whether TMs define rights to existing usage or to economic expansion. Josh Sarnoff—continuation practice.
 
The Edison Scholar Program at the USPTO: Results and Contributions
Moderator:      Tim Simcoe, President’s Council of Economic Advisers and Boston University
 
Panelists: Joseph Bailey, University of Maryland
Spent 9 years as a patent expert witness. Opportunity for understanding office actions and what examiners are thinking from the inside out. His research: how examiners can search a growing corpus of prior art with machine learning.  Accelerating pace of innovation.  Gone from patent no. 7 million to no. 9 million in 10 years—used to take 100 years. The number of examiners hasn’t kept up.  Algorithms grow and learn from examiner activity.  Examiners have become their own thesauri—what other terms to search to find relevant literature.  Can we imagine algorithms working on behalf of examiners, or modifying algorithms to take best practices into consideration? Able to get cooperation from examiners/union.
 
Examples of search strings used by examiners—truncating or stemming words and synonyms: (website “web site” webpage “web page” internet online), etc.  You may end up with thousands of results even after a search that’s as refined as you think possible.  By saving the search strings, words used in applications can be stemmed and lemmatized using examiners’ thesauri, so the system does that work for them.  From there, we can develop a nested thesaurus, where words have one meaning within a particular context and might mean something different in another.  (Folksonomy? I’m not enough of a librarian to understand the relationships.)
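[A minimal sketch of what this kind of term expansion might look like, not the Edison project's actual code. It assumes NLTK is installed (the lemmatizer also needs the wordnet corpus via nltk.download("wordnet")), and the nested thesaurus entries are made up.]

```python
# Expand an examiner-style search term using a context-dependent thesaurus
# plus stemming and lemmatization. Illustrative only.
from nltk.stem import PorterStemmer, WordNetLemmatizer

# Toy "nested thesaurus": the same word maps to different synonyms in
# different contexts (e.g., art units); these entries are hypothetical.
THESAURUS = {
    "web": {"website": ["web site", "webpage", "web page", "internet", "online"]},
    "fishing": {"net": ["mesh", "seine"]},
}

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()

def expand(term, context):
    """Return the term, its in-context synonyms, and stemmed/lemmatized forms."""
    variants = [term] + THESAURUS.get(context, {}).get(term, [])
    expanded = set(variants)
    for v in variants:
        expanded.add(stemmer.stem(v))
        expanded.add(lemmatizer.lemmatize(v))
    return sorted(expanded)

print(expand("website", context="web"))
```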
 
Then, text mining of a corpus of 400,000 granted patents from 2005-2014. By looking at the words used and the # of instances of use, we can apply thesauri, compare results, and filter for nearest neighbors.  Prior art may be far away or close, and we need to decide how near it needs to be. We don’t want to present examiners with only the closest results, but rather a distribution of results that come from different clusters of patents.  Consider the closest, but also these other clusters, which may lead to different insights.
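[A hedged sketch of that general approach using scikit-learn; the corpus, application text, and parameters below are tiny stand-ins, not the PTO's pipeline.]

```python
# Vectorize patent texts, find nearest neighbors to an application, and also
# group the corpus into clusters so candidates come from several "neighborhoods".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

corpus = [  # stand-in for the 400,000 granted patents
    "method for rendering a web page on a mobile device",
    "apparatus for filtering network packets at a router",
    "system for online display of hypertext documents",
]
application = "displaying web pages on handheld devices"

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(corpus)
q = vec.transform([application])

# Closest prior art by cosine distance.
nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
dist, idx = nn.kneighbors(q)

# Clusters, so examiners can also sample candidates outside the nearest group.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
by_cluster = {c: [i for i, l in enumerate(labels) if l == c] for c in set(labels)}
print("nearest:", idx[0], "clusters:", by_cluster)
```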
 
Supervised learning: the results are used or not used in office actions by examiners.  Automated pre-examination search?  Patent Office’s March quality summit ended up with a proposal to investigate pre-examination search to be sent to an examiner upon filing of an application.  [Relation to whether prior art is cited by the applicant? I understand that examiners don’t cite applicant-submitted prior art very much.]
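[A toy illustration of that supervised-learning step as I understand it: label each retrieved reference by whether the examiner used it in an office action, and learn from simple candidate features. The features and values here are hypothetical.]

```python
# Score candidate references by the probability an examiner would use them,
# based on past use/non-use in office actions. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per candidate reference:
# [cosine similarity to the application, shared-classification flag, term-overlap count]
X = np.array([[0.82, 1, 14], [0.40, 0, 3], [0.75, 1, 9], [0.30, 0, 2]])
y = np.array([1, 0, 1, 0])  # 1 = reference was used in an office action

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.6, 1, 7]]))  # rank new candidates by this score
```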
 
Deepak Hegde, New York University: Aim: Getting smaller inventors to specialize in invention, which they can then sell to commercializers.  But media coverage is that patents are used by trolls.  Growth rates for patent litigation higher than growth in # of patents.  Some experts lay part of the blame on the examination system.
 
Goal: establish systematic facts about quality and speed of examination.  We don’t know what the ideal grant rate is; it depends on the quality of applications. So looked at changing trends.  Substantial variation in allowance rates over time—a high of 80% in 1998 to around 45% by 2010, creeping back up.  Is this a function of changing examination standards?  Patent pendency more than doubled between 1991 and 2010, coinciding with the decrease in allowance rates. 
 
Used a smaller sample to create the models.  Several factors significantly increase examination delays and decrease allowance rates. Most seem inversely related: factors that depress allowance rates increase the time taken to issue a final decision.  Proportion of senior examiners: more senior examiners means allowance rates go up and pendency goes down.  Stock of pending applications: as the burden increases, pendency times increase and allowance rates decrease. Some factors work together: applications filed by small entities are more likely to reach a terminal decision quickly, but with a lower rate of allowance. One reason: as the application process is delayed, smaller entities are more likely to abandon. Number of claims: increases grants and the probability of delays.
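[A minimal sketch of the kind of model this suggests, assuming an application-level dataset with these covariates; the variable names and values are hypothetical, and allowance is modeled as a linear probability purely for brevity.]

```python
# Regress allowance and pendency on the factors discussed above. Toy data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "allowed":        [1, 0, 1, 0, 1, 0, 1, 0],
    "pendency_years": [2.1, 4.3, 2.8, 5.0, 3.2, 4.1, 2.5, 3.9],
    "senior_share":   [0.6, 0.3, 0.7, 0.2, 0.5, 0.4, 0.55, 0.35],
    "backlog":        [1.0, 2.5, 1.2, 3.0, 1.8, 2.2, 1.1, 2.6],
    "small_entity":   [0, 1, 0, 1, 1, 0, 0, 1],
    "n_claims":       [18, 25, 15, 30, 20, 22, 17, 27],
})

allow = smf.ols("allowed ~ senior_share + backlog + small_entity + n_claims", data=df).fit()
pend  = smf.ols("pendency_years ~ senior_share + backlog + small_entity + n_claims", data=df).fit()
print(allow.params)
print(pend.params)
```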
 
Covariates, over which the PTO has no control, explain 70% of the variance in allowance rates, while year effects explain an additional 10% (which could reflect changing examination standards). 
 
Measures of quality: if an examiner makes a decision and that decision is subjected to a second round of scrutiny, what is the probability that the decision will be upheld? Type 1/Type 2 errors: patents taken to court and invalidated; applications rejected but allowed by the BPAI/PTAB.  Examination errors do not show an increasing trend.  They seem to be going down, or more or less flat, with time.
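[A toy sketch of those error measures: among examiner decisions that get a second look, what fraction are overturned? The records and field names below are made up.]

```python
# Compute Type 1 (wrongly granted) and Type 2 (wrongly rejected) error rates
# from second-look outcomes. Illustrative data only.
decisions = [
    {"action": "allow",  "second_look": "court",     "upheld": True},
    {"action": "allow",  "second_look": "court",     "upheld": False},  # Type 1: granted, later invalidated
    {"action": "reject", "second_look": "BPAI/PTAB", "upheld": True},
    {"action": "reject", "second_look": "BPAI/PTAB", "upheld": False},  # Type 2: rejected, later allowed
]

def error_rate(action):
    reviewed = [d for d in decisions if d["action"] == action]
    return sum(not d["upheld"] for d in reviewed) / len(reviewed)

print("Type 1 (wrong grants):", error_rate("allow"))
print("Type 2 (wrong rejections):", error_rate("reject"))
```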
 
Increasing delays aren’t necessarily bad; they give applicants time to figure out whether the patent is worth investing in for themselves.  Calls for reform based on allegations of rubberstamping aren’t accurate. Litigation is driven by the value of property rights but also by their contestability.  Litigation might be increasing because of the increasing value of property rights, not because of contestability, which isn’t increasing. Limited resources: the PTO might invest more in reducing errors than in reducing time to final decision, because time helps applicants self-select.  Further research: examine effects of PTO internal management; examine the role of patent publication in reducing errors; examine the role of patents in securing investment capital.
 
Joshua Sarnoff, DePaul University: What happens to patent scope during prosecution?   Test minimum and average independent claim lengths, and independent claim counts, with the idea that length is a measure of how narrow the claim is, so change in scope can be measured by change during application pendency. Same with claim numbers: more claims, broader patents. Measured subgroups of technologies, and measured against pendency.
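[My reconstruction of those scope proxies, not Sarnoff's code: minimum and average word counts of independent claims, plus the independent-claim count, for a single application.]

```python
# Compute simple claim-scope proxies from a toy claim set.
claims = {
    1: {"text": "A widget comprising a handle and a blade.", "independent": True},
    2: {"text": "The widget of claim 1, wherein the blade is serrated.", "independent": False},
    3: {"text": "A method of cutting material using the widget of claim 1 by applying force.", "independent": True},
}

indep_lengths = [len(c["text"].split()) for c in claims.values() if c["independent"]]
print("independent claim count:", len(indep_lengths))
print("min length:", min(indep_lengths), "avg length:", sum(indep_lengths) / len(indep_lengths))
```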
 
Applications that are ultimately issued vs. applications that are ultimately abandoned—the number of words in the smallest independent claim shows that applications that go abandoned after publication tend to have more claims with fewer words. Broader claims/fewer words are less likely to get granted.  Change in length to grant—claims at publication, for those claims that will ultimately get issued—as you go from application to grant, the entire distribution gets pushed out, suggesting that prosecution is narrowing claims by expanding claim length. The tail spike of very short claims gets completely eliminated in the grant lengths.
 
Claim count: applications that are abandoned tend to have fewer independent claims, often just one independent claim. When you move between publication and grant, you see a higher density of single independent claims—dropping claims/narrowing scope as you go forward.  Doesn’t tell us much about continuation practice, though. 
 
Similar results broken up by type of industry.  Chemical/drugs/medical: you have many more shorter claims than other fields b/c they often claim single chemicals.  Claim counts and continuations: claim counts go down by about .5 each round. Makes some sense, though it doesn’t answer ultimate question of validity. Continuation practice doesn’t let the exact same claims survive multiple rounds.  Will publish datasets and summary analysis; will be running further regressions and hope to match against validity indicators.
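[A sketch of that continuation tabulation as I understand it: average independent-claim count by continuation round, on toy data with hypothetical column names.]

```python
# Average independent-claim count by continuation round. Illustrative data.
import pandas as pd

apps = pd.DataFrame({
    "family":             ["A", "A", "A", "B", "B"],
    "continuation_round": [0, 1, 2, 0, 1],
    "indep_claims":       [3, 2.5, 2, 4, 3.5],
})
print(apps.groupby("continuation_round")["indep_claims"].mean())
```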
 
Points out that it is difficult to record all this data—we’re asking examiners to do a lot.
 
Q: what’s the overall state of the academic literature about the patent system?
 
Bailey: there’s incredible talent at the PTO.
 
Sarnoff: fairly good doctrinal scholarship; law and economics began the penetration of empirical methods, which is increasing significantly. Clamor for more empirical analysis that is very hard to do.  Part of the reason it’s hard is that we need political decisions to collect the data, which has costs.  If the political will is there, you’ll see even more empirical analysis—could be in courts too.
 
Simcoe: Where the costs of data access are low, you’ll see a lot of scholarship, most of it not great and some fantastic.  We’ve had a lot of studies over the past 2 decades looking at existing patents when the institutional regime changes—valid to invalid, priced to free.  Input demand slopes down: when an invention becomes more available, it’s more likely to get built on. We have much less info on up-front incentives—the first invention in the chain and whether/how the patent stimulates it.  Even with all the data we want, that might be difficult.
 
Q: ex ante, how do you measure value of patent?
 
Hegde: more a mental model—litigation increases with expected benefits, which are a product of the expected value x the chance you will win if you litigate.  (x risk tolerance, for example when someone adopts patent litigation as a business model.)  An increase in litigation could be driven by any of these factors (or a relative change in payoff from other forms of litigation!), but there's no direct way in which he measured them.
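[A rough gloss of that mental model in symbols; the notation is mine, not Hegde's.]

```latex
% Expected private benefit of suing: value if successful, times the chance of
% winning, times a risk-tolerance adjustment; litigation rises if any factor rises.
E[\text{benefit}] \approx V \times p_{\text{win}} \times r
```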
 
RT: For Prof. Bailey: are applicant citations incorporated into the model in any way?  I know there’s reason to think examiners don’t use applicant citations very much. Can you perform any validity check by what an examiner ultimately might cite in a rejection or limitation?
 
Bailey: doesn’t look at applicant citations; discussed with examiners and that’s a little too noisy to use.  Also, it would be great to include ultimate citations, but they’re not in the model right now.
 
Q: what constraints did the PTO put on you?  Replicability—can it be repeated by those outside the PTO, publication review?
 
Sarnoff: one of the premises of this research is to get the entire dataset out for replication. There are limited resources; the PTO has many demands on its time, so more outside access will be really helpful.
 
Bailey: 12,000 employees, 8600 examiners.  A lot of institutional inertia, not used to academics floating around.  Biggest thing for him: he was passionate about what he wanted to do, and eventually managed to interest others.  Push to improve patent quality = he was there at the right time.
