Friday, October 03, 2008
Sixth Annual Works in Progress Intellectual Property Colloquium at Tulane
Elizabeth Townsend Gard presented Tulane’s Durationator project, part of the Usable Past Copyright Project. It is designed to automate determinations of copyright duration given the relevant facts, and will ultimately include foreign laws. It seems like a very promising project, though many of the determinations will require a lot of information about the work and sometimes a legal judgment (e.g., was the work subject to general or limited publication?), which is sometimes not available. The project’s aim is to require people to answer as few questions as possible.
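To illustrate the idea (a toy sketch of my own, not the Durationator’s actual logic; the rules are simplified US defaults as of 2008, and the function name and parameters are hypothetical), the question-minimizing approach is essentially a decision tree that requests a new fact only when the facts already in hand don’t settle the answer:

```python
# Toy sketch of question-minimizing duration logic -- NOT the Durationator.
# Simplified US rules circa 2008; real determinations turn on many more
# facts (notice, renewal records, general vs. limited publication,
# restoration, unpublished pre-1978 works, etc.).

def us_duration(pub_year=None, death_year=None, work_for_hire=False,
                renewed=None):
    """Return a rough duration answer, requesting facts only as needed."""
    if pub_year is not None and pub_year < 1923:
        return "Public domain (published before 1923)"
    if pub_year is not None and pub_year < 1964:
        # Renewal matters only in this window, so only ask about it here.
        if renewed is None:
            return "Need a fact: was the copyright renewed in year 28?"
        return ("95 years from publication" if renewed
                else "Public domain (not renewed)")
    if pub_year is not None and pub_year < 1978:
        return "95 years from publication (renewal was automatic)"
    # 1978-or-later regime: the term turns on authorship, not publication.
    if work_for_hire:
        return "Shorter of 95 years from publication or 120 from creation"
    if death_year is None:
        return "Need a fact: when did the author die?"
    return f"Life + 70 years (protected through {death_year + 70})"
```

The point is just the flow: each branch demands a new fact only where the known facts don’t already determine the outcome, which is presumably how the project keeps the question count down.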
First Session:
Gaia Bernstein, The User as an Inhibitor of Technological Progress
Increasing attention to users as innovators, but users are much more important as consumers: people deciding whether or not to adopt a technology. Technological adoption is often delayed by years because of user resistance. Thus, we should consider increasing regulation of user resistance.
Current literature: recognizing the user as designer, especially where capabilities are enhanced by digital technology and the internet. This doesn’t go far enough. Users have much broader roles: the couch potato matters enormously to technological diffusion, deciding whether or not to adopt new tech. If innovation is to promote progress, it must be adopted.
What does user resistance mean? Often we think of active resistance, like protest against nuclear power. But there’s also avoidance, like consumers who won’t buy GMO food or writers who won’t exchange typewriters for computers. Resistance can be partial.
Case studies: artificial insemination (AI) in humans, email, videotext (Minitel)—technologies that were ultimately successful, but took many years to be accepted. The first report of successful AI came at the end of the 18th century. 1930s–40s: first significant use; the delay was due to social norms. 1950s: legal uncertainty (is it adultery? Is a resulting child illegitimate?). 1960s: court decisions and statutes legitimizing the use, specifying that the child is not illegitimate. Controversies continue with respect to use outside the husband-wife dyad, but the basic tech is accepted.
Email: the first email was sent in 1971. The last technical milestone was achieved by the early 1980s. Why did it take so long to become widely adopted? Hypothesis: social issues; a critical mass was required. (Lessons from the history of the telephone? There were lots of debates over adoption and a lot of propaganda on the part of telephone companies.)
Videotext/Minitel: the French had this by the early 1980s. It enabled shopping, travel reservations, chat, etc. Marketed worldwide, but rejected everywhere except France. You need a critical mass, and the French government distributed free terminals.
Conclusion: there is a need for more regulation of users’ adoption decisions to promote technological diffusion. How? It’s context-specific: for AI, earlier statutes might have helped diffusion. For email and Minitel, critical mass issues might be important.
My Qs: Look at digital television for an example of government attempts to regulate. How do you know which technologies to bet on?
A: The Office of Technology Assessment, which has been shut down, would be a good place to start.
Eric Goldman: We need the market, not the government, to pick technologies. He’d have the opposite conclusion: we should be reluctant to intervene when we don’t know what the long-run impacts are. E.g., 95% of P2P traffic is infringing today, but it might be different in ten years.
A: She doesn’t think there’s a blanket answer. Not intervening is a decision. (Yes, this is why I didn’t frame my question the way Goldman did, though we both had the same basic question!)
Deven Desai: This sounds like an account of technological path-dependence. Japan picks a technology, and when it works it really works, but the US advantage is that there are a bunch of options.
Wendy Seltzer: Maybe network-dependence marks the divide for when government should intervene. AI: the utility of the tech depends less on how many others are already using it. Regulation might involve standardization, to promote a single network.
A: She’s working on these issues. AI is definitely distinct in terms of helpful regulation, but there may be some commonalities.
Q: Taxonomy of user resistance? Kinds of resistance: moral, practical (laziness/inertia), economic, social (are others using it), and philosophical. A taxonomy may help us decide when government intervention is appropriate.
Adam Candeub: 1996 Telecom Act: Schools & libraries program—billions of dollars subsidizing internet access. One reason why we want it is to create a market for innovative computer education applications. Identifying market-creating situations for government intervention could be useful.
My final comment: AI is a good example because it helps make the point that government is never “absent”: these kids exist; how will they be treated? Whether the government ought to apply special rules, or how general rules ought to be applied to a specific case, will often be in question.
Adam Candeub, Network Neutrality and Network Transparency
Network neutrality is about preventing certain abuses, in particular blocking content from unaffiliated or competitive sources. Example: Verizon blocks Vonage phone service. The central insight: we can’t assess the credibility of any of the arguments because we understand so little about actual network market relationships governing interconnection. Thus, disclosure of these relations is a first step before deciding on regulation and also is likely to serve beneficial goals in itself.
Discrimination happens all the time on the internet; we have to distinguish good (spam, phishing) and bad discrimination. Critics of network neutrality talk about property rights, market efficiency, and MATH! But if you crank through the models, they have key question-begging assumptions. And the models are vastly too simplified in terms of how the consumer relates to the backbone relates to the content provider, particularly in the peering and transit arrangements between various backbone providers. So what to do?
The claims made by law & econ types require more information about interconnection. Thus we should initially aim for transparency, not neutrality. Disclosure has historically served two audiences: consumers or regulators. Internet disclosure won’t help consumers: there aren’t competitive markets for ISPs; most people aren’t competent to evaluate the information; and single-provider information isn’t helpful, because aggregate network info is what’s key. What about helping regulators? The FCC in its BitTorrent order failed to set forth any understandable rule for what’s fair network discrimination and what’s unfair.
Now we should target the “internet vanguard,” people capable of understanding the complex interconnections at issue, tracking the incentives that motivate open software collaborations. Certain groups are trying to figure out the network structure: Dan Kaminsky is one person working on this, and the EFF’s Switzerland project is similar. Technical tracking is being done by various projects. By requiring disclosure, these audiences would get useful information and either circumvent discrimination or at least fully inform public policy.
Q: Doesn’t the BitTorrent order have some disclosure elements?
A: They explicitly rejected a general disclosure requirement. Comcast’s act was well-known. What’s the rule for good/bad discrimination offered by the order? Spam, malware, virus controls could be equally suspect. (Not sure I buy this, unless you add in Zittrain’s analysis of the use of security threats to exert control over a network.)
Seltzer: Crowdsourcing investigation sounds great. Where does the crowd go when it finds a smoking gun or some disturbing practice?
A: If the agreements were public, Verizon would be less willing to engage in price discrimination, and less successful in doing so. Disclosure itself would often provide relief, even though we still wouldn’t have competition.
(This answer feeds my intuition that the perspective here is fundamentally anti-regulatory. Nothing wrong with that, but combined with his insistence that we need a way to distinguish good & bad discrimination that is predictive and extremely hard to capture, I’m skeptical that we’d find a regulation satisfying those constraints. And that is an argument against regulation, I agree, but I’m a little more willing to allow regulators to define good & bad discrimination as new kinds pop up.)
Lucille Ponte, Preserving Creativity from the Problem of Endless Digital Exploitation: Has the Time Come for the New Concept of Copyright Dilution?
How can the US start protecting moral rights? Copyright dilution builds off of the notion of trademark dilution. The US has focused on economic rights as incentive; moral rights sees a work not only as a commodity but also as an embodiment of a person. In the US, copyright often goes to a big company. (I’ve been thinking about labor value; Marx would say that all products of labor (ought to) embody personhood, making creative works nothing special in that regard.) The most important moral rights: (1) attribution/paternity; (2) integrity (preventing destruction, distortion, alteration, or derogatory presentation). Without integrity rights, a songwriter/performer who’s transferred his copyright couldn’t prevent the use of his song in The Prince of Tides during the scene involving the rape of a child (two children, actually).
Why protect moral rights? An infringement claim alone won’t prevent harm to the creator. Tom Waits, therefore, is very active in Europe to prevent uses of songs mimicking his voice and style in ads. He wants to protect his brand, and he was able to do that in the EU, but not here (comment: except for the Waits case, in which he did win a publicity claim). Our artists can get protection in Europe, but we’ve ignored our international obligations. Technology breaks down national barriers and digital tools make manipulation and copying easier, making attribution even more vital. People create for internal, creative impulses; people want control over a creative product once it leaves their hands. Moral rights promote future creativity and expand consumer choice.
VARA is a pathetic attempt to meet our obligations: narrow, waiver-friendly, and not often successful in court. Obstacles to expansion of moral rights: fear of chilling effects on owners of economic rights; fear of increased censorship; fear of disruption of preexisting contractual rights; lobbying by media groups; congressional passivity.
The paper suggests adoption of trademark dilution for copyright, which balances the interests at issue in a different way. (I’m not sure that trademark dilution created any costs with respect to the owners of economic rights/existing contractual rights, because by definition those were already within the control of the trademark owners, so that strikes me as an initial problem with the analogy.)
Her proposal: copyright blurring would be like the right of paternity. Without attribution your style would no longer be distinctive, so the user needs to give credit to the creator. (I don’t buy it: in many cases attribution comes from mere quotation, especially if you’re using an audio or visual clip. If it’s distinctive, it tends to stay distinctive in quotation or reference.)
Tarnishment would be like the right of integrity.
She’d apply the right to the same works for the same duration as copyright. Limitations: apply the rule only prospectively, so that it wouldn’t harm existing contractual relations. Fair use, public domain, and works made for hire would stay the same. Rights would require reasonableness, as in France. Limited waivers for specific uses would be allowed, not blanket waivers. Like trademark dilution, relief would be injunctive only except in cases of willfulness.
Introductory notes: As you’d expect, my reaction to this proposal can’t be anything but negative. In that sense, most of my criticisms are probably irrelevant to the project internally. A couple of comments that might be more useful: Ponte needs to explain the relationship between the derivative works right/substantial similarity and a right against blurring. My reading suggests that she thinks blurring goes beyond substantial similarity (she says that her proposal makes it possible that the result in Newton v. Diamond would be different), but I’m not clear how far: e.g., competing reality TV formats; books attempting to cash in on the popularity of The Da Vinci Code or Harry Potter, etc.
Likewise, her proposal to borrow trademark dilution principles wholesale seems underthought with respect to “noncommerciality” in particular. It is clear that trademark dilution “noncommercial” is First Amendment “noncommercial,” and thus a much, much broader category than copyright “noncommercial.” Trademark dilution claims against the use of a song during a rape scene in a movie and against a sample of a song in another song—two examples she gives of how her proposed law might well allow the original performer to suppress the second work—would fail at the outset because neither is an invitation to engage in a commercial transaction. So she actually wants to import trademark dilution, except for trademark’s definition of commerciality. This requires some revision of her discussion of the remaining free speech safeguards in a copyright dilution law.
Relatedly, her proposal seems to contemplate a distinctiveness requirement: only works with a certain amount of “inherent” or “acquired” distinctiveness may be protected against dilution. But any consideration of the merits of that proposal needs to discuss (1) why a fame requirement should be rejected and (2) what “distinctiveness” might mean with respect to copyrighted works, for which we generally only discuss “originality.” If not all original works are inherently distinctive, how would we know which ones were? In trademark, distinctiveness serves a very different function than originality: it is how we know that a signifier is connected to a signified. Here the work appears to be both signifier and signified in the analysis; so we need to ask, distinctive of what?
Brandy Karl: So what do you mean by distinctiveness?
A: A registry, possibly. An expert could testify to distinctive style.
Karl: That’s a reason why TM litigation is so expensive as compared to copyright. Operationally: how does blurring interact with independent creation? I could independently hit on a style, which copyright would not protect now but trademark would (intent being irrelevant). Separately, does tarnishment provide any protection above and beyond the scope of fair use? The Prince of Tides case suggests a commentary on child rape.
A: On blurring: if you’re the earlier creator, you’d have the right to prevent blurring but only based on copying. Tarnishment cases won’t all be clear-cut in favor of the creator.
Bernstein: Why won’t the same political constraints apply to adoption of copyright dilution?
A: We recognize harm beyond infringement in one area, and that gives us a model for recognizing it in copyright. Not saying it’s an easy sell.
Goldman: Christina Bohannan gave a talk on copyright dilution; you might want to look at that. It’s not clear how blurring would apply to style: the Satava/glass jellyfish case (won by the defendant/copier) is all about style. In general, Goldman is uncomfortable with the paper overall: Santa Clara’s TM dilution conference brought together dozens of people who were uniformly concerned by dilution. This seems like propagating a bad meme in a new place.
Desai: Descendability is an issue: if it’s so tied to the individual, why should it be descendable? This is an example of a broader issue of translation: copyright and TM are different, and we need to examine those differences before importing one concept from one area to the other. These arguments are increasingly common, and thus Ponte is on to something felt strongly, so it’s worth investigating, but not necessarily worth acting on.
Comment: Perhaps there’s increased attention to attribution because of the transition to a gift economy where attribution seems like the only thing a lot of people can get.
Goldman: Might also consider §1202 as a moral rights provision already existing in copyright law; might do enough to create an attribution right already.