Tuesday, June 16, 2015

“How can the academy best contribute to IP policy?”

Moderator:      F. Scott Kieff, US International Trade Commission and George Washington University
 
Panelists:         Stephen Haber, Hoover Institution and Stanford University: Good policymaking starts with good research. The literature on patenting lacks the serious empirical work that exists in other fields; the ratio of theory/broad claims to empirical evidence is unlike that in other fields, particularly finance. Academics can broaden and deepen both the quantity and the quality of research.
 
Jay Kesan, University of Illinois: Releasing the data is important. Nobody else can kick the tires and look under the hood if you don't. Be more skeptical/keep an open mind about other inferences from the same data.
 
Alan Marco, USPTO: Help policymakers understand which inferences are justified by the data. Policymakers see a lot of evidence; they agree w/evidence-based policymaking, but they too frequently see policy-based evidence making, where reports on one side say something completely different from the other. Need to know how to tell the good from the bad. Better communication from academics is needed, e.g., about understanding patents through the lens of patent litigation: self-selected cases aren't representative of the whole, and we want to be careful about letting the tail wag the dog.
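
[To make the selection point concrete: a minimal simulation sketch, entirely my own illustration with invented numbers, of the Priest-Klein-style intuition that litigated cases are a skewed sample of the patent population:]

```python
import random

random.seed(0)

# Hypothetical population of patents; "strength" = probability of surviving a
# validity challenge. Beta(5, 2), i.e. mostly-strong patents, is an assumption
# made purely for illustration.
population = [random.betavariate(5, 2) for _ in range(100_000)]

# Priest-Klein-style selection: cases go to trial mainly when the outcome is
# genuinely uncertain, i.e. strength near 0.5. The 0.15 window is an arbitrary
# illustrative cutoff; clearer cases settle or are never filed.
litigated = [s for s in population if abs(s - 0.5) < 0.15]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean strength, all patents:      {mean(population):.3f}")  # ~0.71
print(f"mean strength, litigated subset: {mean(litigated):.3f}")   # ~0.54
print(f"share of population litigated:   {len(litigated) / len(population):.1%}")
```

[Inferences drawn from the litigated subset alone would badly misstate the population: the tail wagging the dog.]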
 
Joshua Wright, US Federal Trade Commission and George Mason University: The principal problem is inadequate quality control, leading to overbroad claims. The academy should take quality control seriously and limit claims. Compare the claims allowed to be taken seriously in antitrust, where the norms of scholarship are different: they're more modest and more tethered to the actual inferences an economist would say are valid. That tethering is absent in the IP scholarship world.
 
[Jessica Silbey, for good reason, will hate the implicit definition of “empirical” at work here as “quantitative.”]
 
Haber: The core problem is that there aren't enough economists around; there are many $20 bills on the ground for them to pick up. (I thought the true economist wouldn't do that …) Law faculty haven't been trained that way. Economists have upped the game in political science.
 
Kieff: has the IP/antitrust interface benefited from the antitrust literature’s standards or suffered from the deficiencies in IP?
 
Wright: both are present. The same economists rationalizing antitrust law are around. The field started with big questions—it's hard to measure innovation!—but advances in industrial economics have led to smaller/narrower studies that allow strong causal inferences, even if no single study on its own would justify legislation. A drip drip drip of these can fill a bucket. You don't believe papers, you believe literatures. You get literatures by doing smaller projects. The marginal approach was taken in antitrust, but not enough in patent. Sure, an economist is likely to say the answer is more economists, but I believe that it's true.
 
Haber: Causation is a big deal. One can look at data and draw many inferences from it, some accurate and some spurious. Data analysis w/o theory: the perfect example is claims about the increased number of lawsuits. People like to draw the inference that the contestability of patents has gone up and therefore the patent system is broken. But another inference is that patents have become more valuable and thus more worth litigating. You can't figure out which is true (or whether some other explanation holds) just with the numbers. Unless you're working from a well-defined body of theory, in which case you have less causation to worry about, you need to worry about your inferences.
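
[Haber's point is an identification problem: the same observed series can be generated by different underlying stories. A toy numerical sketch, mine and not the panel's, with all numbers invented:]

```python
# Observed lawsuit counts are (roughly) patents in force times the rate at
# which a patent ends up in a suit. You observe the product, not the factors.

years = [2010, 2011, 2012, 2013, 2014]
patents_in_force = [2.0e6, 2.1e6, 2.2e6, 2.3e6, 2.4e6]

# Story A: contestability rises over time; the value of suing is flat.
contest_a = [0.0010, 0.0012, 0.0014, 0.0016, 0.0018]
value_a   = [1.0, 1.0, 1.0, 1.0, 1.0]

# Story B: contestability is flat; rising patent value makes more disputes
# worth the cost of a lawsuit.
contest_b = [0.0010] * 5
value_b   = [1.0, 1.2, 1.4, 1.6, 1.8]

for i, yr in enumerate(years):
    suits_a = patents_in_force[i] * contest_a[i] * value_a[i]
    suits_b = patents_in_force[i] * contest_b[i] * value_b[i]
    print(f"{yr}: story A -> {suits_a:,.0f} suits, story B -> {suits_b:,.0f} suits")

# Both stories print an identical rising series; counts alone can't
# distinguish "the system is broken" from "patents got more valuable."
```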
 
Marco: one of the problems with many economists coming in is that policymakers have high expectations. Economists tend to be more reserved in their inferences, and so that doesn't get a lot of fanfare. “How many jobs does IP create?”—those are the questions that economists can't really answer. Gap-filling: policymakers need to be more patient/understanding: you can have good empirical research or you can have it fast.
 
Wright: another slam on law professors, sigh: law professors can get tenure with big ridiculous claims, but economists can get tenure with a good small contribution.  [You’re all just jealous of my jetpack.]
 
Kieff: So what are some of those $20 bills: tenure-worthy, literature-contributing projects?
 
Haber: Patent trolls: what we know is minuscule compared to the amount of noise about them. Look at the actual business, specifically: is a patent troll a financial intermediary or a predator on the innovation system? The theory of finance would suggest that intermediaries arise when there's some market asymmetry creating a return. In the patent space, the asymmetry is between individual inventors and the large firms implementing patents. Is the behavior of patent trolls consistent w/financial intermediaries, or consistent w/the mafia? (I'm not sure those are in any way exclusive, actually, given accounts of how the mafia works—and how the big banks work. I don't see why an economist would start out assuming these were distinct models.)
 
Kesan: that’s a project of mine: are there differences in the way they settle or take cases to trial?
 
Marco: Identify the market failure we’re trying to solve very clearly. A lot of times we take the system as a whole, but there are a lot of component parts. One area to refine analysis quickly would be to ID component parts: does patent examination/prosecution affect the way patents are used later on?
 
Wright: the source of demand for such research has to be the legal academy or the government. The gov't is more likely to be the source of demand than the law schools. The FTC has played a role in some areas. Identifying research questions and testable hypotheses is a good role. Do something modest and descriptive and contribute knowledge to the world.
 
Josh Sarnoff: Rule 11 motions—it would be important to know about them, but the data is hard to collect. We know almost nothing about the licensing market b/c of secrecy issues—and there's no systematic requirement to collect all the data we want to know; that would be a dramatic change at dramatic cost—only the gov't can do that.
 
Q: funders of research: what should they keep in mind?
 
Kesan: as the quality of scholarship goes down, it's seen as more of a political football than normal science. Normal science gets more funding/attention. Convince funders that there is normal science to be done.
 
Haber: innovation policy is vitally important to the US's future. Gov't funding of innovation policy studies is not commensurate with the importance of the task, and thus either the gov't will do it, or private entities w/vested interests will do it—or no one will. Gov't is better. The Federal Reserve funded most of the finance literature.
 
Q: we don't know prices or quantities in the private market that is crucial here. The influential work on contracts has mostly been theory, unless there's a large literature using private data [Wright disagrees]. What is the policymaker's role in interpreting academic evidence? If we accept the premise that a lot of this research is bad, and the policymaker knows it, what does the policymaker do with that? What's the standard of disclosure for a policymaker relying on evidence?
 
Haber: ask the question—was this piece of evidence published in a peer-refereed journal or not? Simple metric, easily applied. Plus it's never the case that one study is dispositive. The question for policymakers is whether there's a literature whose weight points to a conclusion.
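
[The “weight of a literature” has a standard quantitative form: fixed-effect, inverse-variance pooling, in which each study's estimate is weighted by its precision. A minimal sketch with invented estimates, not anything presented on the panel:]

```python
# Fixed-effect (inverse-variance) meta-analysis: pool several studies'
# effect estimates so the literature, not any single paper, points to a
# conclusion. Estimates and standard errors below are invented.

studies = [
    # (effect estimate, standard error)
    (0.30, 0.15),
    (0.10, 0.05),
    (0.25, 0.10),
    (0.05, 0.20),
]

weights = [1 / se**2 for _, se in studies]  # precision weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled estimate: {pooled:.3f} (se {pooled_se:.3f})")
# The precise study (se 0.05) dominates; a single noisy outlier barely moves
# the pooled value.
```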
 
Wright: there are now literature reviews, and literature reviews of literature reviews. The difficulty is that most studies observe either price or quantity. Most of the time here we're less interested in P or Q and more interested in the rate of innovation or the investments made in innovation, and the problem is that we don't agree on how to measure those—harder than measuring prices. But there are some papers that do this. The observability of underlying contracts can be difficult, but there are many marginal gains to be had. I can count the serious peer-reviewed research designs of the last 15 years on both hands.
 
Kieff: what framing context would you give journalists or staffers as background?  Sense of diversity, contestation/testing in the literature.
 
Marco: also read How To Lie With Statistics—statistical inference wouldn’t hurt.
 
Kesan: caution on both sides.  When I talk to staffers, I take extra trouble to disclose the limitations of my studies.  I’ve become sensitized to that after seeing the way my work has been consumed.  If you are a staffer/policymaker, it’s worth asking academics: what about the other evidence?
 
Haber: two things that are low-cost: (1) Learn that Google Scholar exists. Look for a review essay about a literature that's been published in a peer-reviewed venue. Read the first five pages, before it gets down into the weeds, and get a sense of the state of a particular literature. (2) Academics need to meet staffers and journalists on their turf, not the other way around. We should be better at making our work accessible and available.
 
Kieff: ideas for organizers of academic work to make their work more impactful? Should they have a governance structure mindful of their source of funds, fiduciary or other duties they might owe that might conflict with independence in writing?
 
Marco: peer review is still there to correct apparent bias.  Economists are notoriously terrible at making their results understood outside their narrow field.  NBER is getting good at putting out 2-page summaries.  Still need to know how policymakers should be using them.  Incumbent on academics to improve their communication.
 
Wright: There are reputational sanctions in economics that don't exist, or exist with less force, in law. Everyone high-fives each other and says the papers are brilliant; there's less of a culture of saying a paper is bad and shouldn't influence policy. (He does not go to the conferences I go to.) This culture developed when law professors didn't do empirical work. Now any law professor can run a regression without a license, and that doesn't serve academia well. He is for more shaming of law professors.
 
Marco: who do bad research.
 
Wright: at least those. (Ha. Ha. Though he is a very good public speaker—quite charming.) Need research that can be replicated; that reduces fights over industry-funded or gov't-funded sources and tones down the criticism. There's a demand for objective interpretation of results—meta-analysis of fields, similar to a literature review.
 
Kieff: do you think that policymakers should keep anything in mind when an academic is writing as an academic but also has clients to serve and owes duties to a client in other contexts? [Somehow I feel that there is someone in particular being targeted.]
 
Wright: it happens.  Some briefs are really good, and others aren’t.  If there’s empirical information, the fundamental question is whether the thing is valid.  If I can’t see the data, it doesn’t get any weight.
 
Kieff: should a reader keep in mind that a lawyer currently representing a client, paid or unpaid, who has taken a position on the topic being written about can’t advance to an academic audience a position inconsistent with his or her client’s?
 
Wright: I don’t know the professional responsibility component. I have a healthy skepticism for paid-for advocacy, but that’s a rebuttable presumption.  I’d be willing to concede that the skepticism meter goes up a notch or two.
 
Haber: from outside the legal academy, the presumption is that the field is about getting to the truth. This requires both careful scholarship and persuading others: you divulge your sources of funding, make data publicly available, share data on request, and practice open science. He is hearing that legal norms are different. [Because he is hearing a weird subset of claims.] This suggests a host of institutional problems for the legal academy in playing an effective role in evidence-based policymaking—it seems like conflicts of interest should be divulged. The legal academy was founded on advocacy. That's not about getting to the truth; it's about serving the client. That puts the legal academy in a difficult position for doing social science, which is about getting to the truth. We need to draw some bright lines about which enterprise people are engaged in.
 
Kieff: the ITC docket generates economic studies. If we were asked to study these issues, we'd enjoy it.
