Friday, August 10, 2018

IPSC plenary


Tonya Evans, CryptoKitties, Cryptography, and Copyright

Nonfungible digital creativity, enabled by the ERC-721 standard on the Ethereum blockchain: verifiable, indestructible digital scarcity.  NFTs overcome failures of the existing digital infrastructure for creators by providing ownership, provenance, and chain of title.  The current inability to distinguish master from copies harms demand/value.  The creator can participate in the secondary market, unlike under the current first sale situation.  Can prevent forgery/theft unless the owner’s private key is compromised or lost.  The asset is securely distinct from any reproduction.
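
[For the technically inclined: a minimal Python sketch, under illustrative assumptions, of what an ERC-721-style registry does. Each token ID maps to exactly one owner, and every transfer appends to a provenance log; the class and method names are hypothetical stand-ins, not the actual Solidity interface.]

```python
# Illustrative sketch of an ERC-721-style nonfungible registry.
# Names are hypothetical; this is not the real Solidity interface.
class NFTRegistry:
    def __init__(self):
        self.owner_of = {}    # token_id -> current owner address
        self.provenance = {}  # token_id -> list of (from, to) transfers

    def mint(self, token_id, creator):
        if token_id in self.owner_of:
            raise ValueError("token already exists: IDs are unique")
        self.owner_of[token_id] = creator
        self.provenance[token_id] = [(None, creator)]

    def transfer(self, token_id, sender, recipient):
        # Only the current owner can transfer; on a real chain this check
        # is enforced by verifying a signature from the owner's private key.
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self.owner_of[token_id] = recipient
        self.provenance[token_id].append((sender, recipient))

registry = NFTRegistry()
registry.mint("kitty-42", "0xCreator")
registry.transfer("kitty-42", "0xCreator", "0xCollector")
print(registry.provenance["kitty-42"])  # full chain of title
```

[Note that copying the kitty’s image, per Sprigman’s question below, leaves this registry untouched; the scarcity lives in the ledger entry, not the pixels.]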

Sprigman: as I view the CryptoKitty, it’s just pixels on the screen. Why can’t I just copy it? If it’s valuable, it’s the picture they care about. Blockchain is valuable as a speculation market, like tulips. The only difference here is you don’t get the tulip. That’s interesting, but does this new thing prevent piracy?

A: On the NFT side of things, that’s where you look for provenance/uniqueness. If you just want the copy, it might not matter. But blockchain can establish a record of ownership, assisting in enforcement.

Sheff: proper analogy then seems like fine art market. Scarcity is useful to increase prices. W/r/t a digital image you can have anyway, which can be disaggregated from the blockchain record, you’re just creating scarcity for its own sake/conspicuous/wasteful consumption.

A: wholly digital from the beginning; not a representation but the thing itself.

Q: what’s the benefit of doing it this way?

A: decentralization; censorship-resistant.

Deepa Varadarajan and Joseph Fishman, Similar Secrets
Normative case for how much similarity should be required b/t P’s secret and D’s use, given that employees can’t wipe their minds between jobs. ©’s similarity framework makes some useful analytical moves. Like © but unlike patent, trade secret scope isn’t defined ex ante by written claims.  Trade secrets are easy to obtain on the front end, leaving the difficult scope work to be done at the infringement end.  (1) In terms of what courts should be looking at, focus should be on the end product the D is actually exploiting, not R&D along the way; (2) only material contributions should be actionable; (3) only reasonably foreseeable uses should be actionable. Shouldn’t affect core markets, but leaves greater room for cumulative innovation in a world of mobile talent.

Our focus is use-based misappropriation, not improper acquisition. The Restatement says derivation must be substantial, but very few courts explain how to assess this. A number of cases essentially ignore it in favor of a P-friendly conception of use in which any reliance on any aspect of the secret in R&D is enough. Courts don’t ask whether what was copied was the stuff that made the trade secret protectable to begin with.

Ds shouldn’t need to repeat known failures in their R&D.  Courts should look for a material contribution from the P’s secret—not just some benefit. Requires normative evaluation of whether the information D exploited was important to making the secret protectable in the first place. Contrast ©, which says copying isn’t enough; the copying has to be significant/of protectable elements. Today’s Ps can prevail even if they aren’t competing with D; we think competition in a relevant market should be required. Like proximate cause, but limiting liability for unforeseen benefits instead of unforeseen harms.

Q: the analogy to © isn’t a good thing, b/c © has the same lack-of-ex-ante-definition problems. Why not use obviousness instead?  We know how to worry about hindsight bias in patent better than in ©. Here’s what you knew, here’s what you had reason to try/look for.

A: not all trade-secret-eligible info is patentable. A very different definition of trade secret would be required.

Lemley: the things you’re drawing from © are not from substantial similarity; they’re from fair use. Foreseeability etc. You are really saying trade secret should allow some productive use/fair use. Might be harder to read that view into the statutory term “use” though.

A: the current actual-copying requirement looks like the actual copying Q in © shorn of the improper-appropriation next step.  Whether you call it substantial similarity or fair use, that’s b/c substantial similarity has grown to encompass so much more than it once did. The Q is still: are the two products/activities similar enough to create liability?

Lemley: but you’ve moved from intermediate copy to final copy, and the way we do that in © is fair use.

A: doesn’t think script/screenplay cases work that way [software cases do, Lemley notes w/o disagreement]. [Script/screenplay cases are weird and probably also affected by lingering questions not worth resolving about what constitutes a “work” and how many different works are involved when you have multiple drafts.]

Kristelia Garcia (and Justin McCrary), Reconceptualizing Copyright’s Term
Nielsen SoundScan data for music. Will suggest extrapolation to other commercial info goods like books and movies, not to fine art. Findings best fit the average work—some of the potential suggestions may overprotect some works and underprotect superstar works. Random date-stratified sample of 1200 albums released b/t 2008 and 2017, proportional to genres, w/physical and digital album sales/streaming.  Then looked at all songs w/in a random subsample of 120 albums.

Unsurprisingly but dramatically, albums lose 1/3 of sales volume w/in 2 months and ½ by 6 months; less than one year out, sales are down to 10% of initial volume. Songs decay more slowly. Album sales drop to almost zero after less than a year, but songs have a somewhat longer commercial life, though nowhere near as long as the current term.
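
[Back-of-envelope, using only the three aggregate figures reported above: a no-intercept log-linear fit of a single exponential S(t) = exp(−λt) to those points gives λ ≈ 0.18/month, a commercial half-life of roughly 4 months. A minimal Python sketch; the data points are the talk’s reported aggregates, and the single-exponential form is my assumption.]

```python
import numpy as np

# Reported aggregate album-sales points from the talk: fraction of initial
# monthly volume remaining at t months after release.
t = np.array([2.0, 6.0, 12.0])
s = np.array([2 / 3, 1 / 2, 0.10])

# No-intercept log-linear fit to S(t) = exp(-lam * t):
# minimize the sum over points of (ln s_i + lam * t_i)**2.
lam = -np.sum(t * np.log(s)) / np.sum(t**2)
half_life = np.log(2) / lam
print(f"decay rate ~ {lam:.2f}/month, half-life ~ {half_life:.1f} months")
# -> roughly 0.18/month, half-life around 4 months; the 6-month point
#    lies above the fitted curve, so a single exponential is only rough.
```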

Streaming: analysis is more limited b/c data are limited; the main streaming services didn’t make data available until much later. In the 2016–2017 subset, the sample size is too small for statistical significance—streaming produces far higher volume. Avg. streaming volumes drop less rapidly than sales, as you’d intuitively expect.

For typical music production, rapid dropoff for sales.  Later will look at genres and differences b/t blockbusters and others.

Implications: shorter term? All info goods engage in windowing—single first, then album exclusive to one service; similar in film (theaters, streaming, TV) and books. All of them move quite quickly relative to the term. Also consistent w/winner-take-all phenomena.  Albums are quickly hits or not, like movies and books. Also consistent w/network effects.

Maybe duration isn’t that important either way.  Perhaps this is useful for countries still on the fence about Berne/duration; it might tell them that extension isn’t efficient. The evidence even supports a use-it-or-lose-it standard à la Landes & Posner. Could also try other things like reversion to the author [or limits on remedies].

Rothman: the spike is probably related to ad campaigns—can you look at that for blockbusters?  Could you argue that precipitous dropoff means that you need a really long term to recoup/make a living? Landes & Posner argued about distribution rights & commercial difficulties/investment—but streaming doesn’t have an additional cost, so far-out sales/revenue are opportunities for artists to benefit many years later.

A: Germany has something like this on the book side. If use it or lose it were satisfied by mere availability on iTunes, then that’s easy/costless; we’d need some sort of marketing standard to get works to transfer back to authors.

Q: effect of illegal copies?

A: pirated copies are usually available through the entire window and even before—so it’s hard to say, but she’d buy an argument that this mattered.

Sprigman: §115 licensing/Harry Fox—given the dropoff, Harry Fox issues affiliate licenses on match, then processes, then pays out in arrears. So this may have implications beyond term.

Q: Streaming curve looks like a survival regression, with an uptick in song streaming at the end—the stuff that persists and remains listenable/popular with everything else fallen away.  [How much does the algorithm affect this?  If it surfaces the same relatively unpopular songs for every listener w/in a group, that might matter v. if it randomizes w/in a group of equally unpopular similar songs.]

Rachel Sachs, Regulating Intermediate Technologies

Health care tech where we want to incentivize improvement of existing tech but we’ve set up the system wrong. E.g., microbiome; pharma manufacturing (not much has changed over the past 50 years, even though new drugs are being created—still using batch manufacturing instead of continuous, for example, even though that could reduce cost by up to 50% and even though batch mfg can lead to delays/shortages); genetic testing. If we overregulate microbiome-based therapies (which are unpatentable but require FDA approval b/c treated like drugs), then we might not get the end-stage technology and never know what we’ve lost.  FDA knows its own regulations lock old manufacturing in place, and even though companies can obtain patents on mfg tech, the difficulty of enforcement is a disincentive.  Why did Myriad go differently, where the test for breast-cancer-associated genes has become a lot better in specificity?  B/c in the early days it wasn’t regulated.

Need to discuss how patent doctrines/sequential innovation literature interacts w/regulation from FDA.  Designing best practices: public investment in platform tech, calibrating regulation by stage at which the research is, calibrating health insurer reimbursement by stage. Procedurally, greater executive branch coordination and engagement w/regulated industry as well as public.

We’ve hamstrung insurers from calibrating reimbursement based on evidence of efficacy in groups/subgroups.
