Sunday, May 31, 2009
Dilution by design
Friday, May 29, 2009
Double Denied, denied in part
Simon-Whelan, as putative class representative for art buyers, alleged that the Foundation and various defendants violated state and federal antitrust laws by conspiring to restrain and monopolize trade in the market for Warhol works. He also alleged individual unjust enrichment, Lanham Act, and fraud claims. The Lanham Act claim was based on the defendants’ denial of the authenticity of a work he owned, and the fraud claim was based on allegations that he was fraudulently induced to submit his artwork to the defendants’ authentication board and sign a covenant not to sue in connection with such submissions.
Defendants authenticate Warhols in two ways: first, the board rates individual works as by Warhol, not by Warhol, or “no opinion.” Second, a work may be informally authenticated by being included in the Warhol Catalogue Raisonné, an allegedly comprehensive listing of all authentic Warhol artwork in existence. According to Simon-Whelan, the board has “denied the authenticity of works that were previously owned by the Estate and stamped with serial numbers from the Estate, routinely denies the authenticity of a certain percentage of Warhols, particularly when several from the same series are submitted, has denied authentication as a means of retaliation, has approached owners of Warhols to ‘lure’ them into submitting their works for authentication, and changes its authentication policies when the change suits the Board’s financial interests.” The result is to create scarcity and inflate the value of the Warhols owned by the Foundation. Defendants’ submission agreement contains a covenant not to sue in return for authentication services.
Simon-Whelan’s website (you can download an image of the portrait there, which raises interesting issues of its own) recounts the saga of an untitled painting he calls Double Denied. It’s a Warhol self-portrait he bought for $195,000 in 1989, “one of several created in August 1965 at Warhol’s direction from an acetate personally created and chosen by Warhol.” Warhol, of course, was known for not doing his own work and for merging art and business. The idea of an authentic Warhol is something of a travesty, or at least a deep irony. But authenticity has value in the age of mechanical reproduction, and so we continue on: Simon-Whelan alleged that the painting had previously been authenticated by the Foundation and Warhol’s estate, including by individual defendants, and had passed through several major dealers, each of whom had vetted provenance. In 2001, Fremont, an individual defendant, urged Simon-Whelan to submit his painting to the board, and the board told an interested buyer that it wouldn’t stand by the prior authentications. So Simon-Whelan submitted the painting, at which point the board stamped the painting “Denied.” (The stamp was on the back of the painting, but allegedly bled through the canvas and was visible on the front.) After compiling more documentation, Simon-Whelan resubmitted it, but was denied again. He alleged that the denial was fraudulent, and that he was ultimately forced to sell his Warhols at a fraction of their value through third parties. Moreover, excluding his painting from the catalogue allegedly served as a representation that it was fraudulent, depressing its price.
The court held that Simon-Whelan’s allegations of fraud and wrongdoing in connection with the solicitation of his agreement with the board were sufficient to state a claim to invalidate the exculpatory anti-suit provisions of the agreement. Intentional wrongdoing can’t be insulated by such an agreement.
The court found that Simon-Whelan had plausibly alleged a conspiracy in restraint of trade, and that he had standing as a person who desired to compete in the market to sell Warhols. But he hadn’t alleged injury from the alleged price-inflationary aspects of the conspiracy, and allegations of injury from his purchase of Double Denied would be time-barred anyway. So only the antitrust monopolization and market restraint allegations survived, based on the two rejections of his painting.
Simon-Whelan also alleged false advertising in violation of the Lanham Act. Defendants argued that the submission agreement included an acknowledgement that a “Denied” stamp could be affixed to the painting; that the denial was a mere statement of opinion; and that there wasn’t “commercial advertising or promotion.” If the submission agreement was procured by fraud, it was unenforceable, so that didn’t help. And defendants’ letters denying authenticity might be found to be more than statements of opinion; it was possible they could reasonably be seen as stating or implying provable facts about the painting’s authenticity.
The problem was “commercial advertising or promotion.” Even though Simon-Whelan alleged that the catalogue is used for authentication, he didn’t allege that the defendants use the catalogue in connection with commercial offerings of their goods and services. He did, however, sufficiently allege fraud under Rule 9(b).
Star Trek and professional ethics
Here are the basics:
Spock is an instructor at Starfleet Academy. For the sake of argument, let’s call it a graduate institution, not an undergraduate institution. Uhura was, at one point, his top student in a particular class. Spock is still an instructor and Uhura is still a student; he observes an exercise in which she takes part, though it does not appear that he has the power to grade it. When crisis strikes, he has the authority to assign students to ships based on their qualifications. He initially assigns Uhura to the Farragut; when she protests, he explains that he has done this to avoid the appearance of favoritism. (Spock’s ship, the Enterprise, is more desirable.) She correctly points out that she’s the best at her job, and he reassigns her to the Enterprise. Later, on board, they kiss for what may or may not be the first time. After the crisis, they return briefly to the Academy, but quickly take up full-time posts on the Enterprise, with Uhura presumably now a graduate and Spock reassigned from his instructional duties.
Questions: Has Spock violated your institution’s rules on former student/teacher relationships? From an academic rather than a quasi-military perspective, should there be rules against this scenario? Does it matter whether their first romantic encounter comes before or after he assigns her to the Enterprise?
Disclaimers: I really like Spock and Uhura. I really like sf. I think the movie has a bunch of structural problems related to its disregard for institutions versus individuals, as elaborated here.
Tuesday, May 26, 2009
Reminder: consumer protection conference
The conference features appearances by numerous current and former FTC and state officials--including a greeting by the incoming head of the Bureau of Consumer Protection, David Vladeck--as well as prominent private practitioners. Sessions cover topics including internet issues (with special attention to the perhaps surprising scope of Section 230 of the CDA), privacy, the use of empirical evidence, and the different standards applied by different regulators, including the FTC, NAD, and courts applying the Lanham Act. In timely fashion, we’ll finish up with a panel on the proposed Financial Products Safety Commission, which has been much in the news.
Monday, May 25, 2009
Promoting copyright myths
Some commenters online are at least attempting to correct the article’s distortions, but as one might expect they’re competing with comments about how the right solution is to mail a copy of your work to yourself and rely on the postmark to prove ownership.
Thursday, May 21, 2009
Another note on standing
The proposal:
To address the concerns and criticisms of the current approaches to standing under section 43(a), courts should adopt a three-prong bright-line standard under which a plaintiff must show: (1) the injury is of the type contemplated by Congress when it enacted section 43(a) (i.e., it is commercial or competitive in nature); (2) there is a causal link between the injuries and the alleged false advertising; and (3) no other party is better suited to bring the action.
The proposal has some appeal, but the last prong suffers from vagueness: how might it be pled or proved? What if a third market participant has a bigger market share than the plaintiff: is the plaintiff then worse suited than the third party, such that only the market leader (or second-place finisher, if the market leader is engaging in false advertising) should be allowed to sue? If not, why not? Shouldn’t the burden be on the defendant to show someone better suited to sue for competitive injury?
And the note adds some odd ideas about using standing to get rid of cases where someone other than the plaintiff thinks the injury doesn’t justify the cost of litigation, which is particularly unnecessary in the context of Lanham Act litigation, where attorneys’ fees awards are relatively rare. The conclusion that Phoenix of Broward was rightly decided for the wrong reasons is particularly troubling, and indicates that the proposed test isn’t nearly as clear-cut as the author thinks it is.
Here’s the basic idea, also endorsed by the Phoenix of Broward court: the false advertising at issue was that McDonald’s falsely advertised that high-value prizes were available in its contests. In fact, due to fraud by employees of the agency McDonald’s hired to administer the contest, high-value prizes weren’t available, though lower-value prizes were. The court thought that the small chance of winning the high-value prizes couldn’t have been a big factor in driving sales to McDonald’s, and thus the causation was too attenuated.
The problem is that the court made this fact-intensive determination in deciding a motion to dismiss. Subfactors of the problem: First, plaintiffs should have been entitled to a presumption that the high-value prizes were important to consumer decisions, given the prominence of the high-value prizes in the advertising; McDonald’s thought the high-value prizes mattered. Second, plaintiffs should have been allowed to submit evidence that small chances of winning big prizes have a powerful impact, whether you want to call this the result of consumers’ taste for risk or of their weakness in correctly evaluating low-probability but high-payoff events. Though there certainly are cases where causation is sufficiently implausible that a case can be resolved on the pleadings, this wasn’t one of them.
Wednesday, May 20, 2009
Best practices for fair use video
New Video Breaks Down Fair Use Guidelines for Online Video Creators
American University’s Center for Social Media and AU's Program on Information Justice and Intellectual Property, in collaboration with Stanford Law School's Fair Use Project, are launching a new video explaining how online video creators can make remixes, mashups, and other common online video genres with the knowledge that they are staying within copyright law.
The video, titled Remix Culture: Fair Use Is Your Friend, explains the Code of Best Practices in Fair Use for Online Video, a first-of-its-kind document—coordinated by AU professors Pat Aufderheide and Peter Jaszi—outlining what constitutes fair use in online video. The code was released in July 2008.
“This video lets people know about the code, an essential creative tool, in the natural language of online video. The code protects this emerging zone from censorship and self-censorship,” said Aufderheide, director of the Center for Social Media and a professor in AU's School of Communication. “Creators, online video providers, and copyright holders will be able to know when copying is stealing and when it’s legal.”
Like the code, the video identifies six kinds of unlicensed uses of copyrighted material that may be considered fair, under certain limitations. They are:
- Commenting or critiquing of copyrighted material
- Use for illustration or example
- Incidental or accidental capture of copyrighted material
- Memorializing or rescuing of an experience or event
- Use to launch a discussion
- Recombining to make a new work, such as a mashup or a remix, whose elements depend on relationships between existing works
For instance, a blogger’s critique of mainstream news is commentary. The fat cat sitting on the couch watching television is an example of incidental capture of copyrighted material. Many variations on the popular online video “Dramatic Chipmunk” may be considered fair use because they recombine existing work to create new meaning.
“The fair use doctrine is every bit as relevant in the digital domain as it has been for almost two centuries in the print environment,” said Jaszi, founder of the Program for Information Justice and Intellectual Property and a professor of law in AU's Washington College of Law. “Here we see again the strong connection between the fair use principle in copyright and the guarantee of freedom of speech in the Constitution.”
Remix Culture: Fair Use Is Your Friend is a collaborative project of the Center for Social Media—a center of AU's School of Communication—and the Program on Information Justice and Intellectual Property—a program of AU's Washington College of Law—along with Stanford Law School's Fair Use Project. It was funded by Google.
Scandalousness and internet evidence
Also of note is the extent to which the TTAB’s opinion engages, or declines to engage, with the applicant’s argument that “pussy” might mean a lot of things. The real answer to that is: Come on! But translating that answer into legal reasoning can be difficult; the TTAB proceeds by pointing out that reporting on the company talks about its “provocative” name, indicating that the press isn’t thinking about pussy willows, pus-filled wounds (really, applicant? That’s what you contend people might think of? Did you even want to be taken seriously?), or even weak boys/men (the TTAB does not connect that meaning with the scandalous meaning; there’s a probably correct implicit assumption here that calling a guy a pussy isn’t as scandalous as using the term to refer to sex with a woman). But the TTAB ultimately finesses the question of whether PUSSY, as applied to an energy drink, is scandalous—every reason given why people would understand the scandalous meaning would apply to almost every good or service I could think of, though perhaps some pet-related businesses could get off the hook as double entendres.
Tuesday, May 19, 2009
MIT: Tuesday
Carliss Baldwin: Drawing the Boundaries of Intellectual Property
Competing social arrangements for innovation. Producer innovation: independent inventors, vertically integrated firms in an oligopoly, strategic alliances for knowledge creation, and modular clusters of firms have grown up along with user/collaborative innovation, all in competition and interaction/hybridity. Hybridity where one provides a platform: Valve Software provides an engine for Counter-Strike, a user-created game. Modular cluster of firms around Linux: open source sitting within firms making different pieces of the system. Users might also be able to subcontract with firms to do their work for them. Hybrid social relations require IP modularity.
All innovations are new designs or changes in existing designs, therefore designs are the basis of much (but not all) IP. Design’s structure can be represented as links between design dependencies: if A changes, then D, F, Q, and Z need to be reviewed and might need to be changed. Modules of the design exist in a matrix that can be graphically represented. Different designs with the same functionality can exhibit different structures.
Knowledge is an overlay on the design structure, associated with individual elements or groups of elements. Knowledge “chunks” are given an IP status by their owners. Incoming IP you might be using, outgoing IP you might be generating. IP status is the combination of legal protection chosen by owner and owner’s access policies. The IP status of different chunks can be incompatible—shared can’t be secret. Proprietary licenses may be incompatible with GPL licenses. One example of a violation of IP modularity: licensed-in code was distributed throughout the codebase; license was about to expire, creating a classic holdup problem. Solution: redesign the codebase to modularize so that there were no dependencies between the new platform and the licensed code, changing the transaction situation dramatically and allowing the company to get a new license on favorable terms. And quickly they began to use open source instead of licensed code in that module.
Andrew J. Nelson: The Musician-Engineer: Lessons from Three Eras of Technology Development and IP Management in Stanford’s Music Department
Music department has a center for computer research in music and acoustics. Hundreds of compositions, performances, visiting composers, publications. Also has over 100 patents, 39 industrial affiliates, over $25 million in cumulative tech licensing income, which is unusual for a music department.
How does an academic music department come to engage in leading-edge technology development? How has their status as user-innovators affected their attitudes towards IP?
Frustrated by the limited timbre of the 100 usual orchestra instruments, one guy turned to the computer to compose. He wanted to write a new composition and created frequency modulation synthesis, which was cheap digital synthesis. The tech licensing office thought it was an interesting technology and licensed it to Yamaha in 1975. Other contributions to things like surround sound.
1970-1983: applied for only four patents in this period. Almost all the discourse to outsiders is focused on new compositions. New technologies, not incremental advances.
1984: revenues start pouring in from the Yamaha deal, and that changes the attitudes. 1984-1997, 100 patent applications. Discourse centered on new tech and commercial possibilities; established an industrial affiliates program and formalized external relations; Office of Tech Licensing develops a TM program and identifies the music dep’t as the likely source of the next blockbuster tech to replace Cohen-Boyer rDNA—poured millions of dollars into developing new tech. Massive failure: department nearly loses its shirt.
1998-now: the educational entrepreneur. 3 patent applications; full embrace of open source; logo now has the Linux penguin! New approach to monetization: summer programs with experts to learn the open source tech in a hands-on way. Dozens/hundreds of industry reps at $10,000/head—nets more for the department than any IP management they’d done before.
Projects: looking at longitudinal diffusion mechanisms and the role of public v. private orientations; multivocality in grant applications—how do you pitch the same project to the Defense Department and the National Endowment for the Arts at the same time? And measuring knowledge flows in network evolution and geographic reach.
Joachim Henkel: Optimizing the Trade-off between ‘Open’ and ‘Proprietary’
Profiting from innovation is commonly assumed to require exclusion mechanisms: patents, secrecy, complementary assets. Shortcomings: “free revealing” isn’t included in standard models; no account is taken of interaction effects. How to integrate free revealing into a study of profiting from innovation?
Different control mechanisms (rights, secrecy, revealing, complementary assets, learning-curve advantages, subsidies) and different appropriation mechanisms (use in own products/processes, exclude others, license others, benefit indirectly from others’ use, intrinsic benefit from innovation). Empirical setting: private branch exchanges in the communications industry. They don’t do much free revealing, but he studied contributions to open standards.
Interviews with industry experts: size of patent portfolio turned out to be important. There are interactions between patents and lead time advantage—if you’re ahead in research you can file for patents others won’t have, and forcing competitors to invent around your patents increases your lead time over them—so people who say that patents aren’t that important may be ignoring interactions between patents and lead time. New software to capture interactions. Respondents are asked to say which company will profit more; companies are said to differ in things like patent portfolio, contributions to open standards, time to market, and sales/service quality.
Geertrui Van Overwalle: Patent-based Collaborative Licensing in Genetics
Empirical question: what is the impact of patents related to genetic diagnostic testing on access to diagnostic testing services? 22 top genetic diseases; looked for relevant patents and blocking effects. 250 relevant patents, 145 active, 267 independent claims.
Findings: 15% of claims were almost impossible to circumvent (blocking), 49% difficult to circumvent, 36% easy to circumvent. Expected that there would be lots of blocking patents on genes, but that was not the case (3%). Instead, there were lots of blocking patents on methods (roughly 30%). What is the best way to deal with patent thickets? Hypothesis: formal rules of contract.
All collaborative models in genetics are based on the preexistence of IP rights: open source uses IP as a platform; licensing clearinghouses; patent pools. IP can leverage access. Different models may be appropriate for different types of uses—universities; if owners are both producers and users (v. NPEs, I guess).
Sheryl Winston-Smith: IP Rights and Entrepreneurial Innovation in the Medical Device Industry: David, Goliath, and the Patent Office in Between?
Highly concentrated industry: top 4 companies have 80% of sales and of R&D, but there are a lot of start-ups where competition plays a big role—corporate venture capital is key. Big companies are worried about a competitor acquiring a startup and dominating the market, so they take equity stakes. It matters who the founder is. Entrepreneurial clinicians are able to produce more directly relevant innovations. Intersection between human and financial capital: how does that influence outcomes down the line?
How and when do IP rights align interests between outsiders and incumbents? How would IP be addressed if an outside user modified a device substantially? Open/user and entrepreneurial innovation in medical devices involves significant attention to allocation of IP rights. Sometimes it’s done by contract: paying the clinician; consulting relationships in which the company owns the work. Sometimes the entrepreneur licenses the right to innovate. Billion-dollar settlement for entrepreneur who sued Medtronic: interests are not always aligned. Entrepreneurs need capital to get ideas to market—as users they may have insights about unmet market needs. The corporate venture capitalists need innovations—they want breakthroughs every 3-4 years, and the cycle is shortening.
Entrepreneur can usually get a patent; this provides a basis for negotiation. What is the optimal timing of the approach of the entrepreneur to the corporate VC? Patent protection may be insufficient if they approach the VC too early; the corporation may be able to reverse engineer, build around, or find components elsewhere. Or the entrepreneur may hold on too long and never get to market, or the CVC might get the idea anyway and never reward the entrepreneur. She is researching optimal timing now.
Jeff Furman: Who Benefits from Openness in Science?
Standing on the shoulders of giants requires institutions to preserve knowledge: the loss of the Library at Alexandria set the world back. Have to separate selection effect (how does certain information end up within the institution) from treatment effect (the effect of the institution on how the knowledge is used). Knowledge is not randomly assigned to institutions; consider university v. firm patents. University patents are cited more broadly than firm patents; we could conclude that universities are better at diffusing knowledge, or just that the sets of problems worked on are different—if firms were working on the same problems and still not cited as much, there’d be better evidence for the “better diffusion” hypothesis.
So, look for reasons that are exogenous to the knowledge itself that shift knowledge between types of institutions and see how the knowledge behaves. Biological resource centers: they collect and offer access to biological organisms for research/commercial development—cell lines, microorganisms, tissue cultures, animal models. Peer-to-peer networks function for research tools, but they can break down for personal or discriminatory reasons. A public deposit may work differently/better. 300+ BRCs around the world; the largest in the US is in Manassas, VA.
So, these cells existed in the P2P network, and then they got transferred to BRCs for exogenous reasons. Look at how that affects citations to the foundational work related to the cells. Citations rise after deposit in BRCs, but were flat before the transfer. Moreover, the type of publications generated changes—non-US researchers generate more publications; researchers at non-elite US institutions generate more publications; citations in top journals to papers not in top journals rise, as do citations by elite academics to papers by non-elite academics.
Q&A:
Q: given the costs of preserving BRCs, there’s no long tail—so what do people want to invest in preserving?
Furman: People running these want to preserve everything forever, because they are interested in biological diversity. Costs are relatively low for any one material, but large overall. Federal funding in US has been declining, which has made them pare down the collections that don’t circulate much.
Session 6: Law and Policy
(Moderator: Andrew W. Torrance)
Michael Meurer: Dividing the Spoils: Fair Division
Game theorists investigate: what is the appropriate/expected division of benefits from joint action that produces aggregate benefits in excess of individual action?
Suppose A, B, and C could collaborate: A+B+C, A+B, or A+C produce 6 units, whereas B+C or any of the three standing alone produces zero. How to allocate if they all cooperate? Egalitarian = 2 for everyone. Proportional: can’t do it, because the standalone values are zero. Shapley value: 4 to A, 1 to B, and 1 to C. Nucleolus: 6 to A, because A is the key in any solution and neither B nor C is required.
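For concreteness, the 4/1/1 Shapley split can be checked by averaging each player’s marginal contribution over all orders in which the coalition might form; here is a minimal Python sketch (my own illustration, not from the talk—the nucleolus would need a separate excess-minimization computation):

```python
from itertools import permutations

# Characteristic function for the game above: any coalition containing A
# plus at least one other player is worth 6; every other coalition is worth 0.
def v(coalition):
    members = set(coalition)
    return 6 if "A" in members and len(members) >= 2 else 0

players = ["A", "B", "C"]
orderings = list(permutations(players))

# Shapley value: average each player's marginal contribution over all
# orders in which the grand coalition could assemble.
shapley = {p: 0.0 for p in players}
for order in orderings:
    coalition = []
    for p in order:
        before = v(coalition)
        coalition.append(p)
        shapley[p] += (v(coalition) - before) / len(orderings)

print(shapley)  # -> {'A': 4.0, 'B': 1.0, 'C': 1.0}, matching the 4/1/1 split
```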
The “contested garment” rule in Talmudic discourse: when A claims ½ of a garment and B claims the whole, how do we divide it up? ½ is uncontested and goes to B, and ½ is contested and is split. The nucleolus is plausible because people actually come to it, and to the Shapley value, without any knowledge of game theory: TVA asked how to allocate the benefits of its dam and came up with the Shapley value, a generalized version of the Aristotelian proportionality rule that is sensitive to concerns about team play. Shapley value and nucleolus tend to minimize group defection, where it’s possible to do so.
Do the axioms for the Shapley value or the nucleolus appeal? They may appeal under different conditions.
Wendy Seltzer: Intermediated User Innovation
Even when we’re engaged in P2P file transfer, we’re actually using intermediaries to accomplish the transfer, and those intermediaries have their own incentives: make money, avoid litigation, avoid costs of mediating disputes. Chillingeffects.org looks at the costs of those misaligned user/intermediary incentives. Example: McCain/Palin campaign dispute with YouTube over DMCA takedown; YT had to comply with the DMCA and wait 10-14 business days, at the height of an election campaign, before restoring a video for which notification had been submitted; YT professed to be unhappy with this and invited McCain to do something about the DMCA after the election. Recently, another spat over Miss California and Perez Hilton—Hilton sent a takedown notice over the National Organization for Marriage’s spot showing him discussing gay marriage. After NOM protested, YT restored the video without waiting the 10-14 days, on the grounds that this was a clear case of fair use.
Possibilities: new intermediaries like the OTW, with their interests more aligned with users.
Eric von Hippel: Policy Implications of a User-Centered, Open Innovation System
Strong brands, based on TM, are a major source of firm profits: strong brands get a premium, 30-40%. What do you do to create a brand? Repeated impressions/creating links. Communities have link-creating behaviors they engage in for reasons other than creating brand communities: entertaining/interesting—they create brand strength, costlessly. There are many community-owned backpacking logos. Among backpacking communities, 85% have their own logo, 20% have identity products (what I’d call promotional goods). Often, they buy backpacks, and sew their own logos over the commercial brand. Wasn’t an attempt to create a “brand,” but people were hanging out together and creating an association between a logo/brand and an activity.
[I don’t think this behavior is properly defined as costless. The things that people do to signal participation in a community may have a spillover effect on backpacking “brands,” or you might say there’s no marginal cost to creating the brand, but then you’d also have to say there’s no marginal cost to most forms of creating a standard commercial TM (though we might categorize certain forms of advertising as pure brand-building).]
These backpacks are all made in about 5 factories in China and Vietnam, with variations specified by each brand owner.
Survey: which would you prefer to buy, a commercial backpack with logo, or a commercial backpack with community logo? 34% of community members preferred the latter, and it had a brand premium for them over the market price. If it could be sold 1/3 cheaper, 2/3rds of people would buy it instead—which it could afford to do because brand creation was costless.
John Wilbanks: Science Commons
Extending CC approach beyond the CC license: standard contracts tilting towards sharing in human- and machine-readable form. Work with people who already want to share stuff and connect them, rather than (directly) trying to change people’s minds. CC0—a waiver of rights in a database collection. Machine-readability allows people to integrate databases and licensing in articles, so you can click through and find the provenance of data.
Next question: can a commons automate pharmaceutical workflow outside a pharmaco? It can help coordinate legal rights and desired behaviors. About 350 vendors are registered, from people involved in publishing to people who ship the raw materials. The financing crisis has actually been helpful in getting people on board, because there seem to be efficiencies from coordination.
Approached by Nike to share information related to sustainability technology. A virtual clearinghouse. Nike has a patent on water-based adhesives and they gave it away; then another manufacturer took it and advertised that it was the first to convert all its factories to water-based adhesive, without giving Nike credit. Nike didn’t like that.
Designing a one-click public license: not viral. Allows anyone to license, requires attribution, and if your revenues are greater than X you pay a yearly fee of Y. Informal working number, $30 million/$50,000. Another, private license doesn’t allow people to take the license if they’re competing with the patent owner. Science Commons is also taking the opportunity to reconstruct a research exemption as part of both licenses. Communities can execute/sign licenses until they hit the revenue cap.
Jeroen de Jong: Statistical Indicators to Inform Policies for User Innovation
User innovation is everywhere, except in innovation policy or statistics: EU innovation policy from the EC doesn’t acknowledge user innovation; OECD doesn’t provide guidelines for collecting user innovation data.
Policy makers want to answer these Qs: (1) Frequency: is it huge? (2) Social welfare implications/spillovers. (3) Are there existing (market) failures? (4) What do you want me to do?
Frequency: there are many user innovators out there. Social welfare implications are positive: users tend to develop different products with new functionality, and in emerging industries; user innovation is marked by knowledge spillovers, free revealing, and innovations are often transferred to producers without compensation.
Market failures: no work yet systematically documents market failures in user innovation. Educated guesses: capabilities—user innovators tend to have technical capabilities, so if they don’t, they won’t be able to innovate. Network failures (communication). Indivisibility (modularity). But more systematic and empirical work is needed on what goes wrong.
Denmark has a program for user-driven innovation. Grants for Danish companies—hiring ethnographers to document user needs and support them. Dutch innovation performance contracts: subsidize collaborative innovation at 50%. Innovation by a group of 15-35 small firms, under the supervision of a coordinating organization such as an industrial association. They contract with each other. In practice, most contracts focus on user process innovation—e.g., lightweight design of boats to reduce fuel consumption. However, external sharing/publication is not mandatory.
Statistical indicators are very important. Statistics is a main reason the linear model of innovation still exists today, despite criticism: policymakers have no other data. Without changing the indicators of innovation that policy entities measure, we can’t expect policy change. However, OECD is changing.
My question for Eric: What is the problem you want to address with the branding idea?
Von Hippel: Overcharging via brand premium.
Major UCL ruling in California
Monday, May 18, 2009
MIT, afternoon session
Pam Samuelson: Empirical Evidence of the Importance of Open Source to Software Entrepreneurs
Survey of high tech entrepreneurs, predominantly software, computer-related hardware, biotech, and medical device firms. Mailed to 15,000 firms drawn from Dun & Bradstreet and Venture Expert; 1333 responses received. Setting aside defunct firms/returns, response rate 12% for software & IT, 25% for biotech and medical devices. Also got external data about nonrespondents and didn’t find statistically significant differences in firm characteristics, except more companies from the West than the East responded (Berkeley effect). Paper to be published in the Berkeley Tech L.J. Spinoff article: look at software respondents, 708 firms in the sample. One quarter venture-backed, ¾ Dun & Bradstreet. 69% answered by CEOs, 12% by CTOs. 85% were brand new startups, not joint ventures or spinoffs; varied funding sources.
Goals: D&B firms most wanted to remain private, Venture Expert (VX) firms wanted to be acquired/have IPO. Average number of employees: 58. Roughly half engineers.
Do they own/have pending application for patents? Software firms: 35.5% yes. VX firms: 68% yes; D&B firms: 24% yes. Non-software firms: 82% yes. Varies by sector: 90% of internet software companies had patents v. only 21% of VX internet content companies.
Why patent or not? They cited protection from copying; enhancing reputation and increasing the likelihood of financing/IPO were also significant. Nonpatenting firms cited costs as the most significant factor (27%); perceived nonpatentability also mattered. First mover advantage was rated more important than patents; for nonsoftware companies, the gap between first mover advantage and patents was smaller. Patents were last on the list of things that they thought offered a competitive advantage.
More than 70% of software firms use open source; only 34% of non-software firms do. Some variation by subsector. More than 1/3 use open source as part of business model, VX = 30% and D&B = 35%. Least used in internet content (20%), most in internet software (42%). Software entrepreneurs are less likely to seek patents than some expect; they don’t do so for competitive advantage but for reputation and likelihood of getting financing.
Alessandro Nuvolari: Innovation Without Patents (XVIII-XIX Centuries)
Historical story in textbooks: “there was no economic growth before the industrial revolution because there were no patents.” This is not so. A large amount of invention occurred outside patent protection. Patents were not necessary for the Industrial Revolution. Patenting rates across industries in 1851 in Britain and the US were very low, often under 10% depending on the industry. Among the “great” inventors (entries in biographical dictionaries) in Britain from 1650 to 1850, 40% never took a patent, yet each is credited with at least one important invention. A different sample of the greats: 32% never took a patent. And this is biased, because the inventors in such accounts are the romantic inventors who tended to work alone, rather than in groups.
Collective invention: the case of the Cornish engine. When the patent expired, there was a surge in the growth of improvements. Collective invention was a critical source of innovation during this period. The Cleveland (UK) iron industry 1850-1875, the English clock and instrument makers, and the Cornish steam engine makers after 1800. Contrary to accounts, these are not uncommon cases and they were not vulnerable and ephemeral—these were foundational technologies for industry. Many other examples of collective innovation: Lyon silk industry; Berkshire paper-making; the Western steamboat; Viennese chairs; Japanese cotton spinning; Norwegian brewing; wind power in the Zaanstreek.
So what were the available alternatives to patents? Secrecy/lead time advantages; prizes and awards; procurement by government/military; patronage.
Andrew W. Torrance: Patents and Regress in the Useful Arts
More on the simulation he discussed earlier: allows open source options, licensing, and many other variations. Sometimes you infringe other people’s patents, or are accused of it. Potential strategies: play it safe (avoid infringement); fast and loose; some people patent a lot and sue a lot, others pursue mixed strategies. 30-minute games. Players appeared to enjoy the game—they loved to win (lots of fist-pumping and insults thrown at other players). They tend to seek patent protection where available. Hypothesis: rational strategy of sowing (incurring costs) then reaping. Players like to license, and they like to sue—they get attached to their patents. Modest use of open source.
Further tests: how different parameters affect the results—ease of patentability, ease of enforceability, patent term, prior art, information costs, number of players, game duration, damages/injunctions. If people come back and play day after day, do the results change? What if representatives of different populations play each other? Is there an optimal set of parameters that makes the patent system behave better than the alternatives? What are the parameters for optimal hybrid regime performance?
Fiona Murray: Role of IP Rights and their Enforcement in Knowledge Accumulation
Knowledge production is step-by-step: outputs of one process are often inputs of the next. Knowledge is often multifaceted: key outputs include information, materials, know-how, methods. Knowledge is non-rival and can be input into many follow-on experiments. Mere production of scientific knowledge doesn’t guarantee its use by follow-on scientists. Institutional arrangements matter: disclosure and access can be facilitated or hindered by legal, norm-based, firm-oriented, or community-oriented solutions.
Supply side: knowledge inputs are increasingly costly. Demand side: scientific community is increasingly internationalized and growing.
Response: formal institutions for material and information access. And formal institutions for licensing and other IP uses.
Empirical questions: how do you know that IP is producing the effects we see? How can we measure the follow-on innovation we care about? How can we compare the alternatives?
Looked to whether events in science produced new authors and new projects: open access to a particular variety of genetically altered mouse did in fact produce these additional materials. When we compare IP to open systems, we see more diversity in scientific research in the open system. But scientific communities have traditionally been hierarchical and status-driven; when we talk about informal norm-based systems, we have to ask whether that will really result in democracy, or only in other kinds of restrictions on participation.
Jim Bessen: Patents as Property
Patents as compared to mineral rights, land rights, etc. It’s said that patents solve the free rider problem and incentivize innovation, and that’s true of ideal patents, but actual property rights might or might not work. We only seem to talk about ideal patents; if we were discussing tradeable pollution rights, we’d be paying more attention to institutional design. We need more discussion of reality/institutional features in patent.
Property rights work well if they’re well-defined and enforceable, but economists rarely define what that means. Without good definitions of the rights, you can get multiple claimants to the same assets. Poor institutions may allow multiple claimants for an innovation—as with software patents, where RIM and NTP both claimed to own the same invention. There may also be a technological mismatch, where the economically useful asset has multiple claimants—the anticommons, where multiple owners have formally defined rights but the rights don’t work in practice.
If we have a single owner with relatively certain rights, that’s the classic ideal case. Single owner with relatively uncertain rights = overuse of patents. Multiple claimants with relatively certain rights = anticommons. Mixed—one claimant with certain rights plus many others with low-probability claims, like patent trolls = ambiguous. This last is a situation of notice failure/underuse of patents—a situation where people ignore rights and then you get disputes and litigation.
Practical questions: eBay: do stronger penalties improve welfare? Are “thickets” a problem if rights are ignored, for example in biomedical research and software? Ignoring patents is a type of behavior that comes out of a sick institution.
Rents from patents compared to the risk of litigation costs: for chemical and pharma firms, the benefits far outweigh the costs. For other firms, in the mid-90s, the relatively even balance disappeared and costs tripled over benefits. This is a notice failure situation. Property rights are being ignored; boundaries are poorly defined.
Q&A: Torrance answered a bunch of questions about his simulation, including how you know how much the thing you might choose to patent is worth; this can apparently be tweaked and can change over time as different things enter the market, but is not yet probabilistic as far as I can tell.
Q: Why don’t we see more cocreation across institutions in the sciences?
Murray: At MIT, 50% of publications are authored with people outside the institution. But there are huge variations in number of coauthors, which may have to do with the structure of credit. We don’t know enough about some of these structures. What gets people to meet up in small conferences or other places where ideas can be shared and realized can be dependent on things like whether they can arrange to travel; Linus Pauling couldn’t because he was under suspicion for being a communist, so he didn’t really get his best chance to figure out the structure of DNA.
Session 4: Norms-Based Systems for Innovation and Sharing (Moderator: Karim Lakhani)
Katherine Strandburg: User Innovator Community Norms and Research Tool and Materials Sharing
Inspired by empirical work by W. Cohen et al. showing that research tool patents are routinely ignored by pretty much everyone, academics and commercial researchers alike. Problems are more evident with sharing of research materials. Researchers as a competitive user innovator community: they invent tools for their own use, benefit from using tools invented by others, and compete for research results, so they also benefit from exclusive use of their own tools. A simplified rational choice model is a prisoner’s dilemma for a user community composed of identical members (though of course they’re not actually identical). The cost of sharing an innovation turns out to be very important, as well as the benefits from exclusivity and the benefits from sharing. Under varying circumstances you can get: free revealing no matter what; sharing if there’s a norm of sharing; or no sharing because exclusivity is too attractive. Policy lesson: if sharing is socially beneficial, tweak the parameters.
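The talk didn’t spell out specific payoffs, but the three regimes can be illustrated with a toy symmetric two-player sharing game. The following Python sketch uses my own made-up parameterization (s = benefit from the other player’s shared tool, c = cost of sharing, e = benefit of exclusive use, r = credit or reputation return from sharing), and is only meant to show how shifting those parameters moves the outcome among the regimes described above:

```python
def regime(s, c, e, r):
    """Classify a toy symmetric two-player sharing game (illustrative only,
    not Strandburg's actual model).

    Payoffs relative to a common baseline:
      both share:               s - c + r
      I share, other withholds:    -c + r
      I withhold, other shares:  s + e
      both withhold:                 e
    """
    if r - c >= e:
        # Sharing is a dominant strategy for each player.
        return "free revealing no matter what"
    if s - c + r > e:
        # Withholding dominates, but mutual sharing beats mutual withholding:
        # a prisoner's dilemma that a sharing norm can solve.
        return "sharing only if there's a norm of sharing"
    return "no sharing: exclusivity is too attractive"

# Cheap-to-share, garden-variety tool with modest exclusivity value:
print(regime(s=5, c=1, e=2, r=0))    # sharing only if there's a norm of sharing
# Same tool, but dual-purpose with a large commercial exclusivity payoff:
print(regime(s=5, c=1, e=10, r=0))   # no sharing: exclusivity is too attractive
# Costless sharing that also earns co-authorship credit:
print(regime(s=5, c=0, e=1, r=2))    # free revealing no matter what
```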
Application to research tools and materials: three key sets of variables that influence parameter values or enforceability of a sharing norm. (1) Garden variety (only useful for research) v. dual purpose (has a use in research and a use outside research, for example a diagnostic test)—affects the benefits of exclusivity. (2) DIY (a tool you can make in the lab) v. material (something that requires material basis or tacit knowledge)—affects the cost of sharing. (3) Academic scientist v. industry scientist—researchers’ preferences affect the benefits of exclusivity.
To promote materials sharing, we’d want to minimize the cost to the inventor. Centralize distribution: central repositories, or license a commercial provider. Increase returns to sharing by giving co-authorship/acknowledgement. Increase penalties for noncompliance with norms: journals or funding agencies can make sharing a requirement for publication or for getting materials. Reputational penalties might also be used. Norm entrepreneurship to stabilize sharing norms: for example, a 2007 university white paper on tech transfer encouraging academics to resist onerous licenses and reserve sharing rights within the academic community.
Chris Sprigman: New Research on IP Norms
IP’s negative space. Began research on the fashion industry; recent work on stand-up comedians. Fashion works in a low-IP equilibrium—there’s a mix of sharing and property; TM serves important functions. Stand-up comedy produces virtually no litigation, though joke-stealing is known. Comedians use anti-appropriation norms instead, monitoring by the community.
Coming at the question from the other side: doing experiments on how creators value their IP. Questions: are created goods subject to an endowment effect? Is there some other effect that causes owners of created goods to depart from rational valuation? What are the implications for a rational choice model?
Subjects are asked to write a haiku. Most UVa students can do this. Induced value: $100 at stake. Ten people per section = expected value $10. Ask the students what they’d accept to transfer the chance of winning the prize to a bidder. And ask other students what they’d pay to get the chance of winning the prize. Different treatments: First, authors write poems for a contest, chosen on merit. Second, authors write poems which are lottery tickets: thus they can investigate role of perceptions of quality. Owners are given poems written by authors, and they are subject to the contest/lottery conditions too. Do authors have a bigger gap between willingness to accept and willingness to pay than owners? This is a question about intermediaries.
Maybe no one thinks they’re a below-average author, or maybe they’re worried about their quality and thus underestimate their chances. The experimenters debrief them afterwards to make sure they understood the conditions. Current finding: very large gap between willingness to accept and willingness to pay in the author conditions.
Andrew King: Kitchen Confidential? Information Transfer among High-end Restaurants
His research focuses on decentralized and voluntary institutions: codes of conduct, best practices, sharing organizations—as opposed to involuntary and/or centralized institutions, including culture (involuntary decentralized), government regulation (involuntary centralized), and firms (voluntary centralized). Main finding: self-regulatory institutions arose in response to exchange problems. They arose in the hope that they’d work without a lot of transparency or teeth. But without transparency and sanctions, you get adverse selection/moral hazard; firms get worse by participating. But self-regulation comes into existence when there’s an industry problem, like the Bhopal disaster.
Can we expand this to public goods? He and his co-investigators looked at fashion. There are not enough fashion companies to do large-scale confirmatory research, so he’s now working on restaurants, interviewing highly-ranked chefs in Milan and Boston. Main findings: chefs are more likely to transfer information to a nearby chef with a similar quality point—similar décor level, similar cuisine. That is, their closest competitors. That seems absurd. Chefs at a certain level say that chefs don’t copy. But they actually do copy. They just copy at a distance—only from people in other places and at other career points (students in competition).
Chefs also say they wouldn’t screw their neighbors because they never know when they might need five pounds of swordfish or a bus boy; they expect reciprocity beyond recipes. It’s a repeated game. (Chefs are like farmers and ranchers in Shasta County!)
Research is ongoing. The real problem is separating out monopolistic competition from the effect of norms. You don’t want to look like the next guy because you need to maintain product distance, so you wouldn’t copy your neighbor’s recipe exactly even if s/he told you exactly how to make it. How do we separate that from institutional restraints on copying?
Stefan Bechtold: TV Show Formats: A Global Licensing Market Outside IP?
American Idol has an Iraq version, and is in 40 other countries. Big Brother is in the Philippines, and over 60 other countries. Who Wants to be a Millionaire? is in over 100 countries. Farmer Wants a Wife is in Switzerland and over 16 countries. 3-4 large players worldwide develop these formats. Many originate in Europe. Meta formats: I Survived a Japanese Game Show, on ABC.
Name can be protected by TM. But is there any other protection? Copyright is very messy: the idea/expression dichotomy would usually deny format protection, though some countries have applied copyright. There is limited caselaw concerning the secondary licensing market (where the format has already aired in another country); the caselaw is usually about the primary licensing market, where a producer offers the format to a TV station that simply takes the format without paying. Some countries might apply unfair competition law. Trade secret seems unlikely; no known business method patents. Research continues on this.
You’d expect widespread copying. And there is some of that. But there’s also a $16 billion/year licensing market. What are people licensing? He is talking to industry members. Is there an interaction between national IP laws and the national TV show format industry? Are there private substitutes? There is an international format registry with an alternative dispute resolution procedure; a format bible, with standardized elements so you can define your format precisely for later comparison; trade shows in Cannes and Las Vegas.
Hypotheses: (1) social norms in close-knit community, with about 100 firms; (2) uncertainty/licensing not to be sued; (3) licensing tacit knowledge; (4) theory of the firm.
Rebecca Tushnet: Transformative Works/Transformative Workers
Let me restate Eric’s introductory description of his research: lead users innovate where no market yet exists, then collaborative user communities improve and filter innovations. Then manufacturers enter, usually nonincumbent manufacturers coming out of the user communities. Eventually incumbents decide that the market is big and secure enough to enter.
This is actually one coherent way to tell the story of what we now call “user-generated content.” First there were mailing lists, usually run from universities by students or employees, and Usenet newsgroups; later individual websites and archives, sometimes sourced from mailing lists, hosted by individuals. Companies emerged to host personal sites in return for payment or for advertising—Geocities, which is about to shut down. Specific to media fandom, where people create works based on existing popular texts like Star Trek or Harry Potter, a few people started hosting archives for money, at least for a return sufficient to offset costs, by running ads—fanfiction.net is the biggest example in the media fan community. Google bought YouTube and other mainstream sites arrived—again, from the media fan space, FanLib, a Hollywood startup designed to monetize fan contributions on behalf of major media investors. Notably, it went out of business after about a year.
And this is part of the problem that content industries are facing: there really doesn’t seem to be as much concentrated money in user-generated content as there was in traditional media, even if there is as much or more value. Copyright industries have not figured out how to make the kinds of quality improvements von Hippel identifies as allowing manufacturers to profit from user-generated innovations.
Relatedly, there is a problem of digital sharecropping: because so much of the value generated here is affective, and doesn’t directly help the individual creators materially, the issue of exploitation of the workers producing the creative stuff is quite salient. Wendy Gordon most explicitly put this on the table: reciprocity in gift relations may, and quite possibly should, involve material support. Exploitation of creative labor for the benefit of other parties is especially of concern because media fans are predominantly women, whose work has often been expected to be unpaid, naturally. Traditionally, women’s work has not been thought to require incentives, which has contributed to the material inequalities women face.
Against this background: OTW, formed as a nonprofit, with a mission to “own the servers” and engage in public advocacy on issues of concern to media fans. We could use existing archive software to store, categorize and serve fanworks on those servers, but on philosophical and practical grounds decided to code our own. Philosophically: develop programming skills among mostly female fans; this is empowering and may provide possible material benefits outside the fan context. Practically: from long experience with fan archives, it is clear that well-documented code written with others is more sustainable than single-fan projects. If you code it yourself, you take shortcuts and just bang on it until it works, and then if you leave fandom the thing you made may just sit there until it breaks. A multicontributor project is one that new people can take over running without having that same tacit knowledge.
The OTW’s Archive of Our Own and another, unrelated media fan-friendly project, Dreamwidth (a journaling service), have become significant locations for women in open source. Statistics are hard to come by, but everyone agrees the numbers of women in open source are very small; 1.5% of contributors is a not uncommon statistic. DrupalChix say that Drupal has 10% women on the project, which is an order of magnitude improvement that gets them up to “awfully unrepresented.”
A quote from the Archive of Our Own, statistics from last year: “1134 revisions have been deployed to the Beta Archive to date, and we have had five major releases and innumerable small ones. 150 volunteers have worked on [Accessibility, Design & Technology]/Code/Test, many of whom we have trained ourselves in Ruby and other languages; we aim to teach and mentor all, women especially, who want to learn.” That number is now up to 250, 100% female, 21 contributing code, 80,000 lines of code. Women also make up the vast majority of OTW’s systems administrators.
Dreamwidth, a fork of the Livejournal code, has approximately 100 project contributors, 34 contributing code, 75% identified as female. 280,000 lines of code, though shedding lines as it moves away from the Livejournal base.
Why these projects? (1) Goals are of interest to women: the output is something they want to use. (2) Officially woman-friendly. (3) Offer lots of training and opportunities to contribute: you don’t have to be an expert already to help.
Complicated lessons: (1) Commercial and noncommercial domains can’t easily be separated. (2) Power disparities offline matter in what gets done in open source. (3) Semiotic democracy: making culture and making stuff turn out to be linked—people who think they can create one kind of thing are often willing to believe that they can create another kind of thing.
Q&A
Terry Fisher: Where do norms come from? Are they hardwired? Are they adaptive responses to constraints? Are they narrow economic self-interest? Are they culture-specific?
Sprigman: In standup, norms about copying change because the mode of comedy changes—standup becomes more personal and anticopying norms emerge. It’s hard to draw a causation arrow. Is this good? Depends on your priors.
Jonathan Barnett: distinguish between end user innovation communities and intermediate user/producer innovation communities. There is much more potential for normative concerns with intermediate user/producers. The fashion industry used to behave like the description of the TV format industry: a design registry, design policing. It used a group boycott against defiant department stores, which got the guild in antitrust trouble and shut the scheme down. “Haute couture” came from a guild of French high fashion houses with extremely elaborate requirements, down to the number of seamstresses one must employ.
Bechtold: TV formats are very commercial; the social norms are generated because of an economic problem for the industry itself. Are there entry barriers caused by this? It started 4-5 years ago, so it’s emergent, and inherently international, which raises another set of issues.
Me: but we heard earlier that every output is someone else’s input: or apparently we don’t really believe that, if we’re making this distinction between end users and intermediate users. My people use the TV format's output as input for their creative works.
Strandburg: One way to look at the “ignore patents” norm is as a way of dealing with dual-purpose tools. If someone uses a diagnostic test in research, no enforcement; if someone uses it commercially, enforcement. That’s how you go from no-patent to ignore-patents as a rule. Is that a good idea? Consider the recent lawsuit over breast cancer gene patents: the ignore-patent norm serves researchers well, but does it serve everybody well?
Sheryl Winston-Smith: Does the patent/IP system devolve into a winner-take-all system with lots of people unable to appropriate value? What kind of royalties do the creators of the TV format get, or do they alienate their rights for a one-time payment?
Bechtold: there may be no one creator. They have teams that meet to think of new formats.
Brett Frischmann: It seems so hard to define the boundaries of a TV format! Licensing looks so cartel-like; one wonders why it functions.
Andrew Nelson: Highlights the question of what we mean by norms—is norm-guided behavior distinct from rational behavior? In Jon Elster’s view, people who decide to go along with norms from fear of sanctions aren’t engaged in norm-driven behavior; if you consciously deliberate about whether to share, you might not see an immediate change in behavior but that might result in a collapse of a sharing norm.
King: On institutional entrepreneurs—if you look at environmental issues, the original sponsors often ended up doing something different than what they intended. Adopters of best practices used it as a signalling mechanism, not an improvement mechanism, because the adopters were already up to spec.
Jeroen de Jong: TV format—distinguish commercial from publicly funded broadcasters. Publicly funded may not copy as much. [There are multiple versions of Sesame Street, as I recall.]
Wendy Gordon: Chefs leave out crucial steps in cookbooks.
King: True. Causes a problem when chefs go on tour!
Gordon: Do the people in Sprigman’s experiment get to keep authorship credit?
Sprigman: Nothing transfers but the right to get the money.
Gordon: Does that test for what you want to test?
Sprigman: Q is whether their affective valuation of the poem leaks into their valuation of the revenue stream attributable to the poem. We thought about using digital photos instead of poems, but the IRB made that a nightmare.
Gordon: Formats—in the US, you could use trade dress, idea protection, or copyright in a compilation as means to protect them. The point is not that there’s an easy answer, but it reminds us that categories that are not explicitly well protected by a particular type of IP often end up more protected than they appear, because expansionist judges have stretched various categories of IP to cover them.
Samuelson: Contest formats are pretty clearly not protected in the US, though.
Jeroen de Jong: the original format, closest to the original’s trade dress, is usually the most successful, so filing off the serial numbers might not be a good idea.
My questions for Sprigman: supply/demand in distribution: there are lots more aspiring screenwriters than movies/TV shows to employ them. Do you plan to test conditions in which success is really unlikely? Relatedly: role of nonmonetary incentives, which have been stripped from the experiment: what would happen if the winner was going to get published in the student newspaper? What would happen if you offered a bargain: we’ll pay you X to publish your poem, or you can have this lottery ticket for $10? My guess: typical X will be well below $10.
A: Yes, he’s planning to test variations. Tell people that U Va. wants photos for a calendar it’s going to put out. If the university says it’s paying v. not paying, what happens? Need to do this with people who aren’t tied by affection to the university, so may have to do it on the web. [I think that you’d also get interesting results with people tied by affection; in fact, that is where some really difficult issues of valuation/exploitation might pop up.]
User innovation at MIT
Introduction
Eric von Hippel: User, Collaborative, and Open Innovation are Increasingly Common
Users aren’t always the innovators; they do a lot of innovation in scientific instruments (77%), less in others, varying a lot by field. Sticky information: when users have hard-to-transfer information. Users tend to develop novel functional capability, like the first sports nutrition bar, but manufacturers tend to innovate in quality of delivery (like an improved power supply for a device). Each user responds to local needs using local situation information. Consequence: user innovation is widely distributed. Water vest for US troops was developed by a Texas biker/paramedic who was used to getting thirsty and used to hydrating people through IV bags; he combined those kinds of local knowledge.
Lead users innovate where no market yet exists, then collaborative user communities improve and filter innovations. Then manufacturers enter, usually nonincumbent manufacturers coming out of the user communities. Eventually incumbents decide that the market is big and secure enough to enter. Users can be big: Boeing is a user innovator when it makes machine tools to make its products.
The traditional linear model of innovation does not even show users as process actors; innovation ends with marketing. This is wrong! IP tends to ignore the user innovation side.
Case studies show that many users innovate, especially enthusiasts. 20-50% of firms develop or modify process equipment they use, at considerable expense. The most generally useful 25% is transferred to producers. IP claims are rare; they usually give it out for free.
Consumers: survey among UK consumers—have you created any products from scratch/modified any products you use in daily life to make them work better for you? 10% of ordinary consumers have done the former and 17% the latter in the past three years. Manufacturers patent product engineering; users don’t patent innovations.
What new policies are required? Infrastructure for distributed innovation. There is pressure in the market towards openness.
Carliss Baldwin: What Do the Designs Want: When Does Open, User, Collaborative Innovation Dominate Producer Innovation?
Marx: the hand mill produces the feudal lord, the steam mill the industrial capitalist, quoted by Heilbroner in “Do Machines Make History?” (1967). PCs/internet give you, potentially, open and user-based collaborative innovation. Scientific controversy: do we need strong IP/contract rights for incentives for wealth-seeking innovators to produce new designs? Or do we need to encourage communities with norms of openness and sharing to allow users and others to collaborate on the development of new designs?
We seem to agree that innovation is good, and that certain ways of organizing processes of innovation are “better”—in terms of greater social welfare, or in terms of winning in head-to-head competition. So, what kinds of designs are well matched to the social structure of open and user collaborative innovation?
Design space: some designs demand a lot of communication between maker and customer and others don’t. User innovating for her own/her group’s sake—will innovate if the cost is less than the value of the design; no external communication costs. Producers will add innovation, as long as the cost of communication and design is less than the aggregate value to the producer.
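[To make those two thresholds concrete, here’s a minimal sketch in Python; the function names, parameters, and numbers are mine, not Baldwin’s, and her model obviously has more moving parts.]

```python
# A minimal sketch of the two threshold conditions as I understood them;
# names and numbers are my own illustration, not Baldwin's model.

def user_innovates(design_cost: float, own_value: float) -> bool:
    """A user innovates for herself/her group if the design cost is below
    the value she gets from the design; no external communication cost."""
    return design_cost < own_value

def producer_innovates(design_cost: float, communication_cost: float,
                       user_values: list[float]) -> bool:
    """A producer innovates if design plus communication costs are below
    the aggregate value it can capture across its customers."""
    return design_cost + communication_cost < sum(user_values)

# Purely illustrative numbers: a niche design no producer would bother with
# can still be worth building for a single user who wants it.
print(user_innovates(design_cost=50, own_value=80))        # True
print(producer_innovates(50, communication_cost=200,
                         user_values=[8, 6, 7, 5, 9]))     # False
```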
User innovation requires: a modular/task-divisible architecture; low design cost for individual pieces. Doesn’t suffer deadweight loss of monopoly pricing; permits easy recombination of ideas; makes design ideas accessible, promoting education and the creation of new designs.
Karim Lakhani: Given Micro-Contributions, Who is the Inventor?
We know OS aggregates lots of contributions from around the world. This is coming out also in music (always been true, more salient now). Data from PostgreSQL, an industrial-strength database used by Sony and Skype, among others. Tracked every feature for a year: 55,000 lines of code written by the community. About 800 people participate, all users. They all work for different firms (except for 2 at the same firm). Example: one person identifies a problem; eight people participate in discussing the problem and various solutions, and within a day a possible solution is identified and one person works on it. In a month, another person improves on it. 23 people participate in the creation of the solution: 3 people wrote code; 1 reviewer; 2 testers; 17 discussants. The final worker got credit in the credit file, but the underlying work of the others was critical.
On average, about 9 people participate in creating each feature. 3 participate in problem definition, 5 in development until source code committed; the contributions are pretty efficient, the vast majority of them ending up relevant to the solution.
Katherine Strandburg: A Review of the Traditional Justifications for Intellectual Property
One traditional model assumes that IP is necessary to avert free riding and allow people to recoup investments. Assumptions: (1) There’s a need to recoup the costs by beating out competitors; the motivation to create for oneself is insufficient—the innovator is a seller. (2) There aren’t enough first-mover/reputational/other advantages to recoup investment without IP. (3) The creator will prefer not to create at all rather than create and share.
Next increasingly popular justification for patents: inventors won’t disclose inventions if competitors can immediately copy them. Patents = early disclosure. Assumptions: (1) Trade secrecy is possible. (2) Early disclosure is preferable to trade secrecy plus independent invention/reverse engineering. (3) Creator prefers secrecy over free revealing, given a choice.
Prospect theory—controversial even on traditional terms. Broad exclusive rights promote efficient exploitation of inventions, avoid duplication. Assumptions: (1) Single right holder can coordinate optimal development of a particular line of technology. (2) Transaction costs of licensing follow-on innovation will not be prohibitive. (3) Inventors are interchangeable/they are invention managers.
Incentive to disseminate/commercialize: mostly for patents and also some copyrighted works. (I think Strandburg underestimates the extent to which this is a persuasive idea in copyright, especially for people who accept that a weakness of the basic incentive argument is that creators so obviously love to create.) Idea: there’s a “lab to market” gap—give exclusive rights to ensure investment in commercialization, and permit a “market” for ideas. Assumptions: (1) Relatively large investment is required to bridge the lab-to-market gap. (2) Dissemination is costly. (3) The first-mover/reputation advantage is insufficient.
IP law generally tries to balance incentives to invent, disclose and disseminate versus increased prices, reduced follow-on creation. Doctrinal handles include patent’s term, nonobviousness, utility requirement, claim scope doctrines; copyright’s substantial similarity test, fair use, and the idea/expression dichotomy. Bottom line: even under traditional IP view, we have no idea how to tailor these things.
Andrew Torrance: Empirical Evidence Challenging the Orthodoxy
Growing skepticism of traditional claims for IP, especially for patents. Very limited empirical literature. Moser: in 19th century, countries that offered patent protection did not have higher rates of innovation than countries without it. Lerner: 60 countries over 150 years, found that strengthening patent law didn’t seem to help innovation. Bessen and Meurer, Patent Failure: empirically, patent provides little incentive for innovation for most firms, and even drags on innovation, especially with software. May be different in biotech/pharma.
Online patent system simulation, with ability to change parameters like duration, difficulty of acquiring patent, and so on. Pure patent does slightly better than patent/open source, but pure commons does better than pure patent. Statistically significant at 5%. In total number of innovations, pure commons does a huge amount better, significant at .1%. And social utility is 10x higher in the pure commons than in the other two regimes.
Von Hippel: Not obvious that drugs/biotech are different. Drug trials, for example, are now being modularized.
Wendy Gordon: shocking to hear that mixed systems did worse in Torrance’s trial. Does this throw the legitimacy of all the hybrids we love into doubt?
Session 2: Theoretical Approaches to User and Collaborative Innovation
Yochai Benkler: "Intellectual Property" and Cooperative Human Systems Design
However ambiguous the total effect of IP on innovation, it’s unambiguous that strong IP benefits exclusion-based strategies at the expense of market and non-market strategies that do not rely on exclusion, whether based on knowhow or other things. So Benkler looked at the foundations of large-scale distributed production of innovation—low-cost distribution/modularization of work so as to allow small-scale contributions, coupled with diverse motivations and appropriation models. This allows for all sorts of mixed models, like IBM’s open source strategy, or YouTube, where market actors provide platforms for market and non-market actors to distribute innovation.
Now he’s working on the microfoundations of cooperation. Once we get away from the rational self-interested actor as a sufficient model, what’s the evidence-based approach towards diverse people who are motivated to some extent by self-interest, to some extent by morals/social commitments, etc.? Trying to synthesize design levers, starting with the centrality of communication and how communication affects behavior. Moving to how we define our utility functions—with whom do we have empathy? How do we define norms (example of Wikipedia, which began with no technical constraints on defection but developed them over time)?
Experiments: he wants to build a web-based platform to run standard economics experiments on the web, identifying different effects of, for example, introducing empathy by adding a face or showing people within networks. Trying to use real-world interfaces to see how they affect user contributions. Example: voluntary music distribution sites that ask for donations; what configurations produce what level of payments? 150,000 transactions to date: levels of contribution are substantial—48% pay the typical rate of $8 an album, even though they could download for free. What happens if you randomize new subscribers to emphasize morality, show the artist’s face, etc.?
Terry Fisher: Why User Innovation Matters
Spectrum of types of innovation from centralized (pharma as we know it) to fully decentralized (like windsurfers in Maui, per von Hippel). Also have a variety of mechanisms for stimulating innovation and its dissemination: typical IP rights, grants, prizes, extralegal norms, systems of nonpecuniary rewards like prestige and satisfactions of sharing. It’s a mistake to begin by assuming a tight match between centralized/traditional IP and decentralized/no IP.
His project: Noneconomic/nonwelfarist reasons why user innovation matters. User innovation in the cultural context: fan fiction, real person slash, machinima—which tend to run up against copyright/IP hazards. Then there’s user innovation in the industrial context: hacked bicycles to run knife-sharpening machines in the developing world. These are connected in ways we haven’t often seen.
Why are these things good? The standard answer: because they’re welfare-enhancing. He’s all in favor of exploring that, but he has some non-welfarist reasons to care. (1) Cultural—semiotic democracy. Distributed innovation/creative engagement with mass-produced products leads to a more just and enriched culture and is good for the soul. Egalitarian and democratic. (2) Self-expression: commonly associated with opinion/artistic creativity. Same issues can be seen with user innovation in the industrial context, mixed with aesthetic rewards. We don’t just like things that work: we like elegant and graceful solutions to problems. (3) Communitarian: forming communities to produce and share innovation has functional advantages, but it also creates life-sustaining bonds between people. The communities around user innovation vary a lot. Woodworkers share things quite differently from climbers/windsurfers.
Methodology: Introspective aspect. He doesn’t know how to create mashups. But he does participate in industrial user innovation. Some of his happiest times are spent in the shop.
Brett Frischmann: Ongoing research projects on user innovation and commons. First project: infrastructure resources. Framework paper: research agenda for investigating commons where members of a defined community pool their contributions in a defined setting and distribute them in some way. Seek to extend Elinor Ostrom’s work on sustainable commons in the natural environment. Hoping to offer a set of questions that can be asked and answered in multiple contexts, so that different studies can be compared and contrasted.
Jonathan Barnett: Sharing in the Shadow of Property: Rational Cooperation in Innovation Markets
Game theory used to determine whether sharing can substitute for property under particular conditions. Players can cooperate, defect by claiming property (which happened in semiconductors and in software, where claiming property was uncommon but then became standard), or defect by copying. Sharing regimes are viable but unstable. He looked at group size, capital intensity, asset values, and endowment heterogeneity—the last variable being variation in the value of the innovations players bring to the pool. The weakest and the strongest innovators threaten the stability of a sharing regime. Weakest: hard for them to meet contribution requirements. Strongest: they get less out of participation than others and have incentives to stop contributing/defect to property.
Sharing works best with low capital-intensity requirements, small group size, lower asset values, and less endowment heterogeneity. Property is the opposite. He’s interested in mixed-form regimes, where the factors point in different directions. IP is everywhere, but so is sharing: every market that apparently supports innovation without IP uses some other instrument to regulate access at some point in the total bundle of products and services, and every market operates under a mixed regime where norm-intensive sharing arrangements are embedded within a property infrastructure.
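[A toy numerical illustration of why the ends of the endowment distribution defect—my own parameterization, not Barnett’s actual model: the weakest player can’t cover the contribution cost, and the strongest does better exploiting her own innovation under property than drawing on the pool.]

```python
# Toy illustration (my numbers and functional forms, not Barnett's model) of
# why the weakest and strongest contributors destabilize a sharing regime.

ENDOWMENTS = [1, 4, 5, 6, 40]   # value of the innovation each player can bring
CONTRIBUTION_COST = 3            # "capital intensity": cost of making a shareable contribution
PROPERTY_MARKUP = 1.5            # what a player can extract by claiming property instead

pool_value = sum(ENDOWMENTS)     # under sharing, everyone gets access to everything

for e in ENDOWMENTS:
    payoff_if_sharing = pool_value - CONTRIBUTION_COST
    payoff_if_property = PROPERTY_MARKUP * e          # exploit your own innovation exclusively
    can_meet_contribution = e >= CONTRIBUTION_COST    # the weakest may not clear the bar at all
    defects = (not can_meet_contribution) or (payoff_if_property > payoff_if_sharing)
    print(f"endowment {e:2}: sharing={payoff_if_sharing}, "
          f"property={payoff_if_property:.1f}, defects={defects}")
# With these numbers, only the players with endowments 1 and 40 defect;
# the middle of the distribution stays in the sharing regime.
```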
Core/perimeter structure: a sharing regime at the core with a low-cost flow of information assets embedded in a property structure that regulates access, keeping out the low-endowment innovator and allowing special remuneration for the high-endowment innovator. Craft guilds; academic research (supported by reputational technology, the citation, but the university is an artificial creation supported by huge subsidies); open source as well, but access is regulated by reputation/talent and the market models are bundled with a proprietary element, like IBM’s hardware or packaging like Red Hat. Sharing works best when supported by property infrastructure.
Wendy J. Gordon: Gift Failure
Starting with Lewis Hyde’s notion of gift, married with Benkler’s work: voluntary, mutual cooperation not only produces user innovation but creates enough incentives—monetary, reputational, emotional—that people can stay alive and do it. How to frame the question we’re all interested in? Gordon is talking about gift, but gift may or may not work better than other ways of talking about these dynamics.
She was also inspired by an argument about art: she once argued that art is created by gratitude, the artist wanting to pay back what she gains when she sees the beauty of the sunset. Is this model a way to persuade, to capture the imagination?
Perfect gifts and gift failure: an alternative to “perfect markets and market failure” as an analytic construct, shifting the burden of proof to people who want to start with the notion of perfect markets. A perfect gift is no more fanciful than a perfect market; before you institute IP rights, you should have to prove that there is a gift failure such that property rights are appropriate to solve the failure. A perfect gift would be: willingly given, with the needs of the recipient(s) in mind, reciprocated with good will (not anger, hierarchy, resentment), and reciprocated with money and emotional support as well as with new art. Hyde’s notion is one of a cooperative community of artists (high culture rather than industry) whose members give to each other monetarily, emotionally, and communally. Giving back and paying forward by creating new art. (Resentment can also be fruitful, as Harold Bloom reminds us.)
Range of incentives/inputs for cultural goods: control; money; fun/self-expression/satisfying the itch to affect the world, etc. The GPL explicitly says it’s not a gift—Gordon asked Eben Moglen why not. Answer: Moglen doesn’t want resentment; people should not feel that they are on the lower end of a hierarchy. But Gordon thinks that most people don’t feel resentment from gifts, but gratitude.
For high culture, money is a high-cost mode of incentive when it is produced only through bureaucracy/advance permission, in contexts that require spontaneity. Gift is especially important when it forms the context for creativity—consider the expressionists, who worked for each other, to challenge, support, and teach each other.
She is proposing a comparative heuristic: a particular variant of commons/user innovation. When is gift a useful way to think?
Victoria Stodden: Free Revealing in Computational Science
Scientific output is changing: the traditional view was hypothesis → experiment → final paper, the last of which was what was shared. Now there are new communication mechanisms. You can have the same hypothesis, but now have the ability to share results, code, and data along with the final paper.
Why don’t scientists avail themselves of these communications technologies? Possible explanations: (1) Scientists are primarily motivated by personal gain/loss. (2) Scientists are worried about being scooped.
Survey of computational scientists in the subfield of machine learning, sampling American academics registered at the top Machine Learning conference, students and professors. (This allowed her to limit the inquiry to those subject to the same IP regime.) 290 surveys, 60 responses so far, still coming in.
Biggest reason not to share code/data: the time it takes to put it into a form they’re comfortable sharing and that they think that other people can use—documenting and cleaning it up. Dealing with questions from users is another significant reason to avoid sharing—also a private incentive. Less significant: worry of not receiving attribution, other private incentives.
Top reasons to share: communitarian norms—encourage scientific advancement, encourage sharing in others, improve the caliber of research, be a good community member. 82%/67% (code and data) also cited private incentive of attribution.
Surprise: scientists are not that worried about being scooped. Private incentives appear to be key to not sharing, while less important to sharing. (Hmm. The attribution reason to share was just as popular as several of the communitarian reasons; respondents didn’t have to pick just one reason to share, so it seems that private incentives could still be quite important to motivate sharing.)
Karim Lakhani: The Patterns of Innovation Generation in a Collaborative Community: Exploring the Relationship between Knowledge Novelty and Reuse
Friendly collaborative competition among software programmers, in a wiki-like setting where code can immediately be shared and evaluated. Objective evaluation of performance; a winner is declared at the end and the code is completely traceable. 11 contests, with over 100 participants in each and over 1500 entries. Question: how does individual action in writing code impact community reuse of the code?
The two faces of innovation: generating new knowledge and reusing existing knowledge. Sources of knowledge for the individual: generate it de novo, generate novel combinations of existing knowledge, and borrow existing knowledge of others to solve the problem. Structuring the resulting artifact can be looked at in terms of complexity/modularity, as well as in terms of conformance to standards that may exist.
Experiment: a one-week programming contest; you can view anyone’s entry and take their code and resubmit it with your own changes. Three phases: darkness, when you work on your own without any idea of who’s competing with you; twilight, when you can see how you rank compared to others and who’s currently contributed the best entry; daylight, when you can see all the other code and modify it. Example: the winning entry came from Yi Cao, who participated throughout the contest but showed stunning improvement at the end. Of its 545 lines, the winning entry has only 12 new lines; the rest appeared at least once in other entries, from 30 other authors. Heterogeneity of endowments: many contributors never had the “best” entry, but their contributions were still part of the best entry at the end.
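[The talk didn’t spell out how line-level novelty was measured; here’s a rough sketch of how one might compute a “12 new lines out of 545” figure, with normalization and matching rules that are my guesses rather than Lakhani’s.]

```python
# A sketch of one way to count "novel" lines in a contest entry: a line is
# novel if, after whitespace normalization, it never appeared in any prior
# entry. The rules here are my assumptions, not the study's actual method.

def normalize(line: str) -> str:
    """Collapse whitespace so trivial formatting differences don't count as novelty."""
    return " ".join(line.split())

def novel_lines(entry: list[str], prior_entries: list[list[str]]) -> list[str]:
    """Return the lines of `entry` that never appeared in any prior entry."""
    seen = {normalize(l) for prior in prior_entries for l in prior}
    return [l for l in entry if normalize(l) and normalize(l) not in seen]

# Hypothetical usage: in practice `winning_entry` and `all_prior_entries`
# would be lists of code lines pulled from the contest archive.
winning_entry = ["x = load_data()", "score = evaluate(x)", "tweak(score)"]
all_prior_entries = [["x = load_data()", "score = evaluate(x)"]]
print(novel_lines(winning_entry, all_prior_entries))   # ['tweak(score)']
```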
There were typically several leaders over time. 4402 entries, of which 181 were leaders at some point. But the nonleaders were important to the final result.
6% of the entries become top performers. De novo code is typically very limited over time. De novo code is statistically related to top performance and to social value (it is reused more often). The same is true of novel combinations of existing code. Borrowing code from others hurts top performance, but it increases the social value of your entry, because more familiar code is easier for other people to understand and reuse.
Complexity: the more complex the code, the higher the performance and the social value—but this is a small contest with only about 800 lines of code. The less you conform to standards, the more likely you are to be a top performer, but that doesn’t correlate with social value: conforming code has more social value because it’s more understandable.
There is alignment between individuals generating new code and new combinations and the value of the collective: free riding hurts individual performance, but it’s still good for the community. Transparency—being able to see the code—may be more important than complexity/modularity. If you can see it, even if it’s a jumble, you may be able to break it up into workable chunks afterwards.
Discussion:
Carliss Baldwin: Two views of innovation: large independent chunks, or distributed and componentized.
Pam Samuelson: copyright/patent divide—for copyright, you need a work, while patent can be more divided up.
Baldwin: But there’s still the issue of whether the creative process is one of an individual creating a separate thing.
Samuelson: but patents often cover single components, so it’s more incremental.
Kathy Strandburg: Patents are often talked about in the mode of the romantic inventor, but the literature does recognize that patents are regularly combinations of existing things.
Brett Frischmann: When we talk about patent pools, we should recognize that sometimes sharing tacit knowledge, information about demand, and other features are more valuable results of pooling than the patents themselves.