Monday, April 08, 2019

U.S. Copyright Office, Section 512 Study Roundtable


Opening Remarks
Karyn A. Temple, Register of Copyrights and Director, U.S. Copyright Office
Nat’l and int’l changes since 2016 roundtable—looking for updates.  [Congrats on her first official event as Register!]  Tale of two cities: very different perspectives on how the DMCA is/isn’t working—have those perspectives changed on the voluntary-measures, case law, or int’l fronts?

CO: Regan Smith, general counsel
Brad Greenberg
Kevin Amer
Kimberly Isbell
Maria Strong

SESSION 1: Domestic Developments          
Erich C. Carey National Music Publishers' Association: BMG v. Cox is important: a successful vindication of the statute’s plain language where a service enabled repeat infringement on a massive scale. But the music community hasn’t changed its mind about the DMCA: that was an extreme situation, with millions of notices sent and $8 million in fees. Not feasible for enforcement—heavy burden for major publishers and individual creators. Enforcement system is gamed to confuse notice senders.  Rigged system. DMCA was supposed to help development of fledgling internet; service providers and © owners would cooperate to deal w/infringement. Has helped to create some of the most powerful companies, but onus still on © owners to police. Building has been built; time for the scaffolding to come down. Look internationally.

Ken Hatfield Artist Rights Caucus of Local 802 of the American Federation of Musicians: Unfair loophole that allows profit from mass infringement. Litigation alone won’t solve the problems with the safe harbors. At odds with Congressional intent: neither active cooperation w/platforms nor STMs have materialized. Reform is needed to restore rights/livelihood of musicians.

Mike Lemon Internet Association: represents over 40 of the world’s leading internet companies.  DMCA works in encouraging creation and dissemination. 

Mickey Osterreicher National Press Photographers Association: recent study estimates that more than 2.5 billion works are stolen every day, 23% in the US. A takedown notice is the only alternative photographers have. But notices are encumbered by Lenz fair use considerations [the horror] and counternotice nightmares. New EU obligations for OSPs should inform our conversations.

Jennifer Pariser Motion Picture Association of America: if you were wondering if anything has changed in 2 years, these intros let you know. Cases in the last few years about repeat infringers are promising but infringement continues to devastate the industry. [See The Sky Is Rising for some actual numbers.]  Takedowns have marched on w/o red flag notice or representative lists.

Meredith Rose Public Knowledge: vast and delicately balanced body of copyright law; 512 is just a part. We must reckon w/broadband, 512, and SCt’s Packingham decision recognizing a 1A interest in being able to speak and be spoken to online. 50 million Americans have only one broadband provider—accusations shouldn’t be enough to cut off their access. That affects the knowledge standard for secondary liability.

Aws Shemmeri ImageRights International, Inc.: LiveJournal decision is a step in the right direction—scrutinize the relationships ISPs have w/user communities. An interactive/curated relationship w/users benefits the ISPs while leaving content generators out. There’s still a circuit split, so case law alone won’t resolve it.

Rasty Turek Pex: Technical challenges: rightsholders bear the cost of takedowns. Even if there’s a tech solution, platforms push back against active measures like crawling to ID content. As such, there’s an imbalance. Platforms have to be more accountable.

Rebecca L. Tushnet, Organization for Transformative Works: The case law tells us the same thing as the UC Berkeley study of takedown practices: There are many successful models out there, and even very big sites like ours with very active creators and millions of works can receive very few legitimate takedowns. Amazon’s Kindle Worlds, for example, mostly receives anticompetitive takedowns from competing writers [reflecting the difficulty of fighting back at the individual level, only one 512(f) case of which I’m aware, Quill Ink, has been brought based on a Kindle Worlds takedown].  Generally, 512 and its implementation by different platforms have encouraged an explosion of expression; by contrast, rules written as if YouTube is the model would crush the alternatives and ensure there was only YouTube.  

[citation: Testimony of Stephen Worth, United States Copyright Office Section 512 Study, Public Roundtable, May 13, 2016, at 248, https://www.copyright.gov/policy/section512/public-roundtable/transcript_05-13-2016.pdf (“[W]ith Kindle Direct publishing, authors routinely try to climb to the top spot in their category … by issuing bogus notices against higher ranking titles. And this for us actually accounts for more than half of the takedown notices that we receive.”).]

[AO3 is the 316th most popular US website, according to Alexa.]

Brian Willen   Wilson Sonsini Goodrich & Rosati: DMCA works and continues to work. Basic bargain is the right one. Fosters cooperation: real obligations on platforms but main burden is on © owners who have the most knowledge of their works and benefit from them.  Motherless case in 9th Circuit: example of getting it right. Real sites that are home to original works thrive, while piratical sites mainly encouraging/inducing infringement have faced consequences.

Mr. Winterton NetChoice: DMCA applies obligations to the least-cost avoider.  Platforms don’t have to be aware of all © content; a cottage industry of monitoring services helps with this.  512 has empowered platforms for artists and all Americans to express themselves. W/o 512 we would get lock-in for major services.  Europe’s Art. 13: must know every piece of © content in existence. Protect American innovators, artists and platforms: lead in opposing these efforts.  US should work to incorporate 512 in trade agreements to protect free speech and creativity.

Smith: Pariser mentioned repeat infringer. Are Cox & Grande & Motherless right?

Pariser: the first two were correctly decided as far as they went for repeat infringers.  They have an issue w/the contributory liability/jury instruction part of Cox.  Why are these bright spots? B/c a court said the DMCA means what it says, and that hasn’t happened before b/c courts have not required a representative list or applied red flag notice. Repeat infringer = must act on multiple notices for the same user, ending in terminations.  Motherless: mixed bag.  We take issue w/ the notion that any kind of policy a service can dream up is ok—written, unwritten [this was a one-man ISP, by the way]—most troubling part is that the operator doesn’t need to keep track of the notices.  He used his memory.  The good news is that he actually terminated 2000 individuals. 

RT: Motherless is important b/c it deals w/ the incredible variety of sites out there. This is a one-man shop; if he has a server failure & loses all his records, he shouldn’t lose all DMCA cases in the future.  Flexibility in what is required is important.  Even big sites like AO3 receive very few DMCA notices—there is a big variety of sites out there and not just in the small/long tail segment of the market.

Smith: is there a bare minimum on a repeat infringer policy?

Willen: the courts have focused on strikes; that creates a clustering. For repeat infringer policies, you want to get bad users off the site but you also want to educate users who are fans v. pirates. Flexible policies can use first/even second strike as vehicle for educating users.  You can also be attentive to consequences of loss of broadband v. loss of access to a site.

Carey: Industry perspective: Uphill battle getting these cases off the ground to reverse engineer an ISP’s own infringer policy. Requires massive discovery and tech knowledge.  [Note how this implicitly treats a subset of ISPs as the full set of those who have and need DMCA policies: even if it’s true that the cases that get to discovery are complicated—in significant part because small sites and even big ones like Veoh buckle under litigation costs—that doesn’t mean that he’s diagnosed a problem with the structure of 512 or that an alternative would be better for the system.] You’re lucky to be able to litigate.

Smith: so should the burden be on the © owner?

Carey: No. 

Smith: does the newer case law shift the burden on repeat infringers?

Carey: no, just a proper balance according to what the statute intended. If these circumstances (Cox and Grande) weren’t failure to enforce repeat infringer policies, nothing would be. 

Hatfield: IANAL, but having different standards for an individually run ISP sounds reasonable, but not if it’s applied to the giants. [Note that a small ISP can have a big footprint, like our all-volunteer site.] Solutions should be focused on upload filters—make sure all music has ISRC codes.  Cloudflare gives complete anonymity to users. Costs: $1500 to $3000 for a takedown, while litigation runs up to $2 million. It’s virtually impossible for musicians to do it. Prime earning time for new music is 18 months, but cases are slower than that.  512 implemented/interpreted in ways that create fertile ground for dragging cases out.  [Of course all of that precedent came from P-favorable rulings that made it impossible to resolve these cases early.]

Rose: 512 applies to broadband and platforms, and those have very different stakes.  We don’t often say that ISPs are our friends at PK, but it’s US policy to increase access to internet.  To eject someone from their only broadband network is a very serious issue, and Packingham recognizes the profound First Amendment interest just in access to social media.

Amer: 512(a) and (c) have differences. Pre-Cox, we heard from 512(a) ISPs that their practice was to reject notices under (c); Cox obviously casts some doubt on that practice. Does anyone have a sense of which practices have changed in light of Cox/other cases?

[Nobody knows.] Carey says there’s a general sense that practices have changed, and the Charter case is currently being litigated/litigation against others is ongoing. Are repeat infringer policies an effective means of enforcement? Still figuring this out.

Isbell: Rose says Packingham indicates 1A interest in access. Do you see terminations pursuant to a repeat infringer policy as being state action?

Rose: not state action directly. But you must terminate in order to avail yourself of a safe harbor. As a practical matter it becomes equivalent b/c the potential damages are so big. There’s some gradation.

Greenberg: voluntary measures negotiated in the shadow of 512?

Rose: policy concerns there.  Packingham: even being on the sex offender registry wasn’t a good enough reason to cut him off of social media access entirely.

RT: NYT v. Sullivan: the scope of the rights the state enables have 1A implications b/c the judiciary is a state actor.

Pariser: an appropriate repeat infringer policy takes account of the statutory command that termination should be in “appropriate” circumstances—you can take into account the nature of the service. Policies can vary provided that they are actual policies.  First Amendment: repeat infringer obligation doesn’t implicate 1A concerns b/c there is no state action; unlike in Packingham, though there are some rural areas w/a single provider [50 million people!], in general termination from one ISP isn’t a death knell.

Smith: Mavrix v. LJ case. 

Shemmeri: Prior to that appeal, there wasn’t a lot of success against non-pirate-oriented ISPs. This decision, on the heels of BWP where some users were deemed independent contractors, rightly held that sites whose editors/staff post and curate material have an intricate relationship w/users in which they’re curating the content, seeing that it’s favorable/profitable on their end.  Sites are profiting from the content and there is some review, so it’s natural not to give 512 protections. [Note collapse of vicarious and contributory liability: exactly the problem, where you get one from column A and one from column B and that’s enough.]

Willen: any pre-upload moderation should not take you out of 512(c): Motherless helpfully clarified that pre-upload review and moderation to look for illegal material, or material that doesn’t fit w/in the service, doesn’t cost you the safe harbor. 

Smith: would it make a difference if they screened only for cute cat videos/banned only cute cat videos?

Willen: it shouldn’t. We know from 230 that Congress wanted and encouraged OSPs to remove inappropriate content.  The idea that services that are doing exactly what 230 encourages should lose 512 protection isn’t good for society, for users, for copyright owners.

Amer: Mavrix’s standard: if the ISP’s activities were narrowly directed at enhancing accessibility of the posts, that’s still at the direction of the users.  Is there any room for curation w/in that standard?  Kicking out cat videos.

Willen: there is and there has to be room for curation.  Viacom case: the use of related/suggested videos. That’s a form of curation/moderation—you like this, you may like that. More broadly, every service now does some form of “curation”—what we mean is some effort to help users sort through a mass of UGC and find things they like. The idea that you shouldn’t be able to do that and have safe harbor protection means we get a bunch of junky, useless sites [the 230 point is really strong here].

Amer: Scalia’s Aereo dissent: isn’t it an administrable rule to say that if someone is choosing the content, that will ordinarily tip them into direct infringement?

Willen: distinguishes LJ: people are submitting things but they don’t go live. The ultimate decision about what is posted is made by the platform.  That degree of ex ante selection makes you a traditional publisher.  That does start to put pressure on 512(c). But there’s a fundamental distinction b/t that and sites that essentially let people mostly put stuff up that they want and then performs sorting operations after that.  At the same time, those services are increasingly saying we don’t want terrorist content, porn, etc. whether or not they’re “legal.” The idea that making those kinds of selections jeopardizes safe harbor is very very troubling.

Isbell: reading 512 in a way that negates 230 doesn’t make sense. But Congress explicitly carved out IP from 230—so shouldn’t the approach be different?

Willen: 230 is relevant even though it’s not applicable to IP b/c it clearly says Congress gave ISPs a right to/encouraged them to remove content b/c they find it objectionable whether or not it’s legal. 

Smith: can you reconcile that w/UMG’s statements about active involvement in content selection?

Willen: there’s language in the cases that goes both ways, but no case of which he’s aware holds that by making decisions about what’s good/bad content you fall outside of 512(c).  That interpretation is inconsistent w/incentives Congress tried to provide in 230 and w/public policy generally. What kind of internet do we want?

Pariser: Objects to the notion that “a moderator curating content implies no safe harbor” would be bad for content—i.e., that all these sites that would otherwise have been filtering content would stop doing it. That’s not true! The reality is that nobody is curating for copyright at this moment. They’re picking and choosing content that they like & do not like for reasons of their own.  Porn/violence/low quality files, but infringing content can stay until there’s a takedown notice. The notion that a service provider would lose safe harbor seems entirely right: if it demonstrates that it is going into the content supplied by users & picking & choosing among those files, it should have the obligation to go after infringing files.

Smith: where would you draw the line? Pre-posting or post-posting?

Pariser: no distinction. If you choose to curate, that is the moment you need to filter for infringing content. Disagree that you end up w/ a lot of junky sites; you end up with a lot of sites w/filters that are inexpensive. 

There is a continuum.

Amer: did Motherless get it right?  It would create bad incentives to say that a site that decides to screen out the worst content loses the safe harbor. The court distinguished Mavrix, which is much more focused on choosing the content.

Pariser: makes perfect sense given 512. Part of our position is that the court started veering off the correct interpretation of 512(c) and it should always have been the case that if a site demonstrates that it can control its content, it should be filtering. Given that the law didn’t develop that way, Motherless makes sense to distinguish b/t truly curated and more pedestrian filtering for child porn.

Greenberg: say we have a new 512. Your position is filter for anything = must filter for © too?

Pariser: demonstrates the ability of the site to filter.

Lemon: content moderation is a very difficult subject. Vast majority of content moderation is fueled by users flagging objectionable content, which is largely the way the DMCA also works. A platform’s resources may be enough to give it proactive content moderation ability, but the idea that taking a child porn hash and using it would thereby trigger a © filtering requirement is really problematic.  If you can filter, then you must—then we have to fight over “can.”

Smith: Zazzle case: putting it on a physical product. Is that different?  Line at a physical product?

Lemon: there are different legal implications if you proactively take a © work and market it on a physical product.

Smith: is there a difference b/t that and marketing for eyeballs?

Lemon: complicated—depends on volition/human involvement in making decisions. Much of what platforms do is automated. Those processes don’t always make the right/best calls; they’re not human, and humans also err. That’s why we have a back and forth process with user, platform, and © claimant. We need to take into account the sheer number of things posted—reddit, between 2016 and 2018, had a 625% increase in takedown notices.  It’s a very quick ramp-up.

Greenberg: what about having thresholds for size/staff?

Lemon: first, companies ramp up very quickly.  GDPR example: they get big quickly. All the metrics are terrible.  Monthly users: varies month to month.  More than 2 moderators: you’ll never hire the 3rd. 

Winterton: Filters are not inexpensive. We were told that internet sales tax programs would be cheap. That’s not true when you rely on it for your business/have to integrate it into the rest of your systems. Also true of filters. Larger platforms can and do make different efforts. 

Hatfield: the issue is who is monetizing the content, not whether it goes up or not.  YouTube has different rules for different artists.  STMs: once available, a company can’t block them—if Google has developed the technology, it can’t be that expensive to implement it.

Carey: software costs are prohibitive to © owners, and we’re also deprived of the ability to send a representative list; red flag is read out of the statute; so we have to go URL by URL for each piece of content. 

Osterreicher: NMPA encourages musicians to embed codes; we encourage photographers to watermark; you should be able to recognize a watermark on an image.

Smith: would a platform have an obligation to screen for watermarks?

Osterreicher: at a minimum, yes.  Metadata is often stripped out, but watermarks are harder to remove/should be obvious to anyone that someone owns the image and who that someone is.
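
[A minimal sketch of the fragility point, assuming Python with Pillow and a hypothetical local photo.jpg: the EXIF copyright field is just optional bytes that an ordinary re-save silently drops, which is why embedded attribution vanishes while a watermark baked into the pixels survives.]

    # Sketch: embedded (c) metadata vs. a routine re-encode (Pillow assumed installed).
    # "photo.jpg" is a hypothetical credited image.
    from PIL import Image

    COPYRIGHT_TAG = 0x8298  # the standard EXIF "Copyright" tag

    original = Image.open("photo.jpg")
    print(original.getexif().get(COPYRIGHT_TAG))  # e.g., "(c) Jane Photographer"

    # A bare re-save re-encodes the pixels but writes no EXIF unless the
    # metadata bytes are explicitly passed through, so the credit is gone.
    original.save("reposted.jpg")
    print(Image.open("reposted.jpg").getexif().get(COPYRIGHT_TAG))  # None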

Smith: are you encouraged about standards for photography?

Osterreicher: the tech is getting there, and hopefully it will not be possible to separate the info from the image.

Greenberg: how would the ISP know whether the use was licensed? If it was my wedding, how would they know it was ok for me to upload my wedding photos?

Osterreicher: that’s a problem, but the service should be able to recognize there was a watermark.

RT: Specific child porn hash values from known images are different from finding a watermark. New Zealand shooting gives us a tragic example of how that generalized “ability” to filter has been vastly overstated.  If you want a law regulating Alphabet on antitrust grounds and governing how YouTube can treat musicians, the Department of Justice knows how to do that, but mandatory filtering is not the right legal tool.  Our site will terminate users for harassing other users and for engaging in commercial solicitation.  We get well under 10 DMCA notices per year for millions of works.  We are not curators; our users are curators.

Greenberg: a watermark as red flag?

RT: we don’t filter. So we wouldn’t see a watermark. And our users might well put their own watermarks on their photos so that when their cosplay pictures show up on Instagram they get the attribution—we shouldn’t have to go to war against our users. And Google won’t sell us a filter.

[A dialogue on red flag knowledge.  I resisted the idea that you could get much guidance from extreme examples, like harassthem.com/stolencelebritypics.com, because that’s not what most people are doing.  It may be a very small set and hard to generalize.  And it's natural that red flag knowledge is hard to generalize: for example, a full-length movie on Dropbox is perfectly likely to be a legit backup of a purchased movie, which is what I do for my iTunes purchases because of bad past experiences.  A video that gets 10,000 hits in an hour might be a video of a recent police shooting.  It really depends on all the other facts & circumstances.]

Smith: has red flag knowledge been read out of the statute?

Lemon: no opinion.

Smith: if you can filter, should you filter?

Lemon: there’s a lot of collaborative work. Some of our companies have won Oscars, Grammys, Golden Globes—our interests align in important ways to figure out best practices.  We don’t think our best practices should be the law for everyone b/c it doesn’t make sense for different platforms.

Osterreicher: wedding photos—if you put a watermark on your image of someone else, there should be a standard that would trigger further investigation.  [Can I make him answer the angry user emails?] [And by the way, “find a watermark and an image of a person” is a very different machine learning task than “match the hash value for this entire image.”]
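
[To make that bracketed contrast concrete, a minimal sketch, assuming Python and hypothetical local files known_image.jpg and upload.jpg: matching a known image is an exact byte-level lookup, while “find a watermark and a person in an arbitrary photo” would need a trained detector, not a table lookup.]

    # Sketch: exact hash matching against a known-image list (stdlib only).
    import hashlib

    def file_hash(path: str) -> str:
        """SHA-256 over the file's raw bytes."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Stand-in for an industry hash list built from known images.
    known_hashes = {file_hash("known_image.jpg")}

    # Matching is a cheap set lookup, but any re-encode, crop, or resize
    # changes every byte, so even exact-match filtering is brittle.
    print(file_hash("upload.jpg") in known_hashes)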

Turek: The technology is there: ContentID is not the state of the art.  [He sells the technology.]  There’s not much left—671 hours of content uploaded to YT every minute, growing 100 hours a year.  Eventually, you have to find a way to deal with it.  Once you engage in one kind of filtering, you should be forced to look at the others.  You can’t have innovation in isolation.  Rights holders approached measures from a nontechnical POV and picked the most obvious ways, but you can’t get the state of the art w/o backing from more than rightsholders.
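
[A quick scale check in Python, taking the 671 hours/minute figure as stated: one day of uploads at that rate would take roughly a century to watch end-to-end, which is the practical case for automated matching of some kind.]

    # Scale check using the figure quoted above (671 hours uploaded per minute).
    hours_per_minute = 671
    hours_per_day = hours_per_minute * 60 * 24     # 966,240 hours of video per day
    years_to_view = hours_per_day / (24 * 365.25)  # ~110 years of nonstop viewing
    print(f"{hours_per_day:,} hours/day, ~{years_to_view:.0f} years to watch one day's uploads")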

Willen: shade thrown at red flags is being thrown at Viacom.  Isn’t the specific/general distinction.  Subjective/objective is the line. Red flag: the facts and circumstances would lead a reasonable person to find infringement.  That’s not reading it out of the statute.

Smith: what’s red flag w/o a notice?

Willen: every court that has looked at it has come to the same conclusion: this is a narrow provision. It’s in the statute; it just doesn’t happen very much. Legislative history: congresspeople said it means something apparent from a brief and casual viewing. The subjective/objective knowledge standard is in fact narrow—the main vehicle for removing content was never meant to be unilateral ISP action but cooperation. And it reflects that these determinations are very difficult, not like figuring out whether something is child porn; they require knowledge that ISPs very rarely have. Having a watermark doesn’t distinguish a photo from others on the internet—[almost] every photo on the internet has a © that belongs to someone, but that just starts the inquiry.

Amer: what if a user has a username “PiratedSongs”: is that even red flag knowledge, or is it not specific enough?

Willen: YT case has a huge factual record showing that a number of clips like “leaked song” had actually been posted by © owners/their agents as part of stealth/viral marketing. Concrete examples to show that some video description is not a very good guide.

Smith: if the standard is objective, isn’t Stolen Sgt. Pepper enough to investigate?

Willen: sure, if you find a full-length movie. But this conversation isn’t about those examples, but rather about an attempt to say that courts are getting it wrong when they say that red flag is narrow.  That view fundamentally ignores the reality of what’s on these sites, almost all of which is copyrighted.

Isbell: are you presupposing that YT doesn’t use its own site?  If you type in Beyonce, you get a lot of lyric videos, many of which aren’t put up by the record company.  [Isn’t that … covered by YT’s licenses?]  Pinterest uses a lot of images from other sites.  Since I am likely to pin other people’s sites and not my own, why shouldn’t Pinterest know that?

Willen: YT is licensed at this point.  Some of these issues on bigger platforms have been dealt with.  Pinterest generally (he represents them): the other part of the equation is fair use.  Social bookmarking has a big fair use component.  Using thumbnails/versions can constitute fair use.

Pariser: Goal keeps moving from content owners’ perspective.  Porn: the P says you should have known that it was infringing b/c it was so well produced.  Court disagrees. Court holds up professionally produced studio movie as paradigmatic example of what would confer knowledge. But when a Marvel movie is the subject, there’s some other reason it wouldn’t be sufficient notice, such as lack of ID’ing a particular file.  YT involved unlicensed, full length music videos for which the site didn’t get specific URL notices. Have to understand that in the context of representative list.  Zazzle: sent a catalog of photos and the court said that wasn’t good notice.

Shemmeri: we don’t discourage use of © notices in works. ISPs w/human curation can retain red flag knowledge—celebrity/historical photos where it’s obvious the user doesn’t own © to an image from the 1970s or 80s.  That raises a red flag of very likely infringing or not owned by the user.  [Those are not the same things.]

RT: a brief note on repurposing sites: You don’t know what your users will do.  Pinterest & vaccine denial/political use of Instagram—be careful you don’t assume what sites are for. 

Smith: but what about Isbell’s point that eventually you know?

RT: There are a bunch of different YouTubes.  [A better answer would be Mao’s purported answer about the effects of the French Revolution: “too soon to tell.”  Instagram and Pinterest are still figuring out what kind of sites they are, and having struggles with, e.g., political content and vaccine denial content, and the fact that Pinterest is a particular kind of site for you doesn’t mean it’s the same site for anyone else.]

Representative list: we get a search string that’s dynamically generated and looks different when we look at it. We get a claim listing one photo that says the entire [X] fandom is infringing.  This problem is not one-sided.

Hatfield: ISRC codes are good.

Osterreicher: this is a tale of two takedowns. Plight of individual creators.  [Including ours, BTW.]
