Friday, March 15, 2013

DMCA conference: 512 operations

Panel 5: 512 Operations

Moderator: Jennifer Stanley, Fenwick & West LLP

Daniel Seng, National University of Singapore/Stanford Law School

Studied Chilling Effects data. Looked at every single notice to Google in the database: a population of more than half a million notices.

Substantial year-to-year increases.  BPI is #1, but the adult entertainment industry is also highly ranked.  (This research would be illegal in Singapore!)  The music industry accounts for the bulk of takedown notices, 58.5%; then adult entertainment, then movies.  If we rearrange by reporter, interesting patterns emerge: BPI stays #1, and it doesn’t use agents. Most other companies use reporting agents, and 2012 was the year agents started reporting huge numbers.  Also, a lot of reporters don’t have names (individual names are redacted by Chilling Effects), so it seems that a lot of individuals are filing reports. 
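(For the curious, here is a minimal sketch of the kind of aggregation Seng describes: counting notices by year, by reporter, and by industry. The file name and the column names are hypothetical placeholders, not the actual Chilling Effects schema.)

```python
# Sketch of the aggregation described above, assuming a CSV export of
# Chilling Effects notices. "date_sent", "sender", and "industry" are
# invented column names, not the real schema.
import pandas as pd

notices = pd.read_csv("chilling_effects_google_notices.csv",
                      parse_dates=["date_sent"])

# Year-over-year volume of notices.
per_year = notices.groupby(notices["date_sent"].dt.year).size()

# Top reporters and the share of notices per industry.
top_reporters = notices["sender"].value_counts().head(10)
industry_share = (notices["industry"]
                  .value_counts(normalize=True)
                  .mul(100)
                  .round(1))

print(per_year)
print(top_reporters)
print(industry_share)  # Seng reports music at roughly 58.5%
```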

How have the notices been sent?  Mostly web forms: now 98.6%.  Email is 0.8% today; mail, fax, and other methods are also small.  If you want your takedown of a URL to be accurate, don’t put it on paper.  Many notices contain multiple requests—thousands of URLs at once.  Something is right with the process: it makes takedown more accessible to individuals, not just large companies.

Bulk of the notices are directed at search; Blogger is still reasonably active and targeted by takedown notices.

Compliance with formalities: a substantial percentage (over 8%) are missing required information about the work or fail other requirements.  Sample of NBCUniversal’s erroneous takedown requests: almost 122,000.  Marketly.com: more than 250,000 erroneous requests on behalf of Microsoft.  Robots/software are probably behind simple mistakes like these.

What about hosting sites that closed after the Megaupload takedown?  Takedown requests sent to them are erroneous because the sites no longer exist or host files.  Over time, the numbers didn’t go to zero but stayed quite large.  Even assuming it takes 90 days for caches to be purged, these sites were still getting hundreds and thousands of requests.  There was a spike of takedown requests after the sites were closed, not before.  A number of agents made this mistake; BAF, a large filer, did a bunch.  A totally avoidable error, but it was avoided by only 5 of the top 24 reporters. 

Don’t act on false positives; focus on sites worth targeting.  The DMCA doesn’t have any notable penalties for false positives—if you can shoot a million arrows and have some land on target, you will. Penalties should ensure that agents take their responsibilities more seriously. Require the reporter to attest that it tested the URLs before requesting takedown.  Other proposals are in the paper.  (Where is the paper?)
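(A minimal sketch of the pre-filing check Seng proposes, assuming a reporter has a draft list of URLs: confirm that each target still resolves before attesting to it. The URL, the helper name, and the use of the requests library are illustrative assumptions; a real check would also have to confirm that the page hosts the claimed work, not merely that it exists.)

```python
# Sketch of a pre-filing liveness check: drop dead links (e.g., hosts that
# shut down after Megaupload) before they go into a takedown notice.
# The URL below and the helper name are hypothetical.
import requests

def url_is_live(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with a non-error HTTP status."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

draft_urls = [
    "http://example-cyberlocker.invalid/file/abc123",  # placeholder URL
]

to_send = [u for u in draft_urls if url_is_live(u)]
dropped = [u for u in draft_urls if u not in to_send]
print(f"{len(to_send)} URLs kept, {len(dropped)} dead links dropped")
```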

Daphne Keller, Google Inc. (Free expression at Google)

The overwhelming majority of requests we get are valid and appropriate, but abuse is also real.  There is also error: one rightsholder asked to take down another company’s SEC filing; another asked for a 19th-century translation of the Bible to be taken down. We’ve also had a religious organization claim copyright to remove a video critical of that organization.  And there’s the Retraction Watch case—someone created copies of the blog on another site, then sent DMCA notices to WordPress to get the real blog taken down.  Very malicious abuse.

The incentive for an intermediary is to cave. Fortunate to work for a company that dedicates itself to having fast processes for valid takedowns and to fighting abuse, but a rational economic actor wouldn’t necessarily do that—it would just take the content down without any legal inquiry.  One set of researchers posted the text of John Stuart Mill’s On Liberty on various sites, noting its copyright date.  The UK site took the material down without a question; a US site asked for a penalty-of-perjury statement, the researchers weren’t willing to provide one, and so it stayed up.  So DMCA yay.  The European rule lacks some checks and balances for legitimate speech.  The DMCA tells you what counts as notice; under the eCommerce Directive, there’s no such procedural clarity. Counternotice and 512(f) are also very wise.

Elizabeth Valentina, Fox Entertainment Group (manages content litigation here & in US)

Counternotice does exist, mitigating error.  In her personal view, she’s not as excited about the process as Keller—it doesn’t help minimize piracy. The amount and magnitude of infringing links and corresponding notices make error no surprise. Fox has received very few counternotices—1 for 6-7 million notices to Google, and it did link to infringing material. It’s not good for anyone to send links that don’t work. She suspects that Seng’s data covers links that did in fact link to infringing content when they were pulled, or that even if the site had disconnected access, the links survived and the content was out there—we’ve seen that with LimeWire. We have so many infringing links that we don’t want to waste resources on blank links.  As for NBCUniversal: Google would’ve called them if there was an anomaly.

What happens in an internet minute?  204 million emails, 639,800 GB of IP data, 30 hours of video uploaded to YouTube alone, over 2 million Google searches.  Across the global internet, taking porn out of the equation, 23.76% of traffic was estimated by Envisional (Jan. 2011) to be infringing. Cyberlockers: 40+ million links; P2P: 160 million downloads; 76,000 streaming sites. Infringement notices against cyberlockers: millions sent, only one counternotice. We keep sending notices and the level of piracy doesn’t go down; it goes up.  We can’t stay on top of the links, such as links to Life of Pi on Rapidgator and Uploaded, which increase over time despite our notifications.

The role that search plays: for Life of Pi, autosuggest offers “free online,” which leads to several pirate sites. For Lincoln, a search result for torrentz.eu comes up, and that site has received half a million notices per month per Google’s transparency report.  How many reports should be enough?  A search for Argo reveals the adjudicated infringers the Pirate Bay and isoHunt. We don’t see any appreciable difference in the placement of these sites even though Google says it’s taking the number of notices into account in ordering search results.

Stanley: the DMCA is a dream for some, not working so well for others.  Google and content owners both have sympathetic positions/challenges.  What to do about the whack-a-mole problem?

Keller: Often webmasters don’t know that there’s been a takedown of their search results; you see very different counternotification patterns on YouTube, where people know about the removals and have a less scary way to respond through our Content ID system. Counternotice puts the burden on the recipient to act.  Best case on this: CCBill, which talks about the difficulty of getting disjointed pieces of paper and trying to add them up into a DMCA notice while not being sure what the complainant is after. If the complainant asks the intermediary to do the work of figuring out what’s infringing, that wrongly shifts the burden to the intermediary. The reason this is a big deal is that it’s a lot of work for whoever has to do it, and it’s harder for us, who don’t know their movies, to identify their movies. We’ve put work into Content ID and Trusted Content Removal on search, which is the source of the increased takedowns over the past year—swift and efficient processing of lots of URLs all at once.  It took us from 450,000 total removals before the program to removing almost that many every single day. But the person capable of identifying the infringement remains the rightsholder.

The role of search in leading to copyright infringement: we are doing a lot through our DMCA process, but most people looking closely see that search isn’t really the problem these days. Look at the Pirate Bay: only 15% of its traffic comes from any search engine. People who want that content know how to get it.  Those sites will continue to be out there regardless of where search fits into the picture, so target them, stop the flow of money to them, and develop alternative services for legal content—Spotify decreased use of illegal sources by 25% when it arrived in Sweden.

Stanley: what about filtering?

Valentina: Absolutely it helps. We work with companies to improve identification and filtering. There are a number of different spheres in which we’re working, and we aren’t saying any one party is responsible for the entire problem, but we are trying to identify some responsibility. Even if only 20% of the Pirate Bay’s traffic comes from search, it’s going to a site you know is infringing, whose operators have been convicted by Sweden’s highest court, and it’s a top search result. There should be responsibility there. Europe: rights owners can obtain injunctive relief requiring intermediaries to mitigate infringement without a finding of liability. That process might encourage more cooperation, as it has between rights owners and ISPs in Europe. If Google continues to crawl the Pirate Bay every 3 hours, you keep getting those links. If you got 270,000 notices on torrentz.eu, should you go back and crawl it again?  Cooperate with us to minimize that role.  Filtering has been one place where that happens; it’s not perfect, and lots of claims require review while others are missed, but we’re working on it. Filtering is a good model.

Keller: has heard 3 different ideas of what filtering would mean. One is keyword filtering: searching on “I do not own” and blocking every single result. The second result is “I do not own a cellphone.”  That illustrates the overbreadth of keyword filtering. Then there’s the idea that you could block an individual site.  The issue is that if you look at the Transparency Report, even for sites with bad reputations, you see at most 5% of their pages targeted/accused. To say that we should assume the rest of the site is infringing and silence it raises complicated policy issues. If US law were to send the signal that it’s ok to suppress unknown content based on copyright, that sends a signal to countries around the world that would like to do the same on other grounds. The more interesting filtering idea comes from the European SABAM cases: efforts to make intermediaries use Audible Magic or other content detection.  (Valentina: that was an access provider. Keller: and a host in another case.)  In three different cases, involving SABAM and eBay, the ECJ said those filters weren’t permissible because of interference with people’s right to receive information and privacy concerns.
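(A toy illustration of the overbreadth Keller describes: a filter keyed on the phrase “I do not own” flags innocent text just as readily as the disclaimers attached to uploaded copies. The example strings are invented.)

```python
# Toy keyword filter keyed on "I do not own". Both strings below get
# flagged, which is exactly the overbreadth problem: the second one has
# nothing to do with infringement. Example strings are invented.
BLOCK_PHRASE = "i do not own"

candidates = [
    "I do not own this song; all rights belong to the label.",  # intended target
    "I do not own a cellphone.",                                 # innocent speech
]

for text in candidates:
    flagged = BLOCK_PHRASE in text.lower()
    print(("BLOCKED " if flagged else "allowed ") + text)
```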

Stanley: millions of arrows: a tough problem. 

Valentina: it is a lot of work; the DMCA imposes a duty to terminate repeat infringers; that works and could work here. In the UK we’ve been very successful with having ISPs block access to certain pirate sites, and nothing happened to the internet. Pirate content can go to another site. It’s a no-fault procedure for the ISP, and the question is whether the relief requested is proportionate.

Keller: we can all agree that there’s tremendous volume.  Rightsholders/agents resort to algorithms.  Is it, and should it be, legal for a DMCA notice, with its oath and signature, to issue when only a machine has identified the infringement?

Valentina: we put great stock in your algorithms; they’re automated, and we like technology; it would be ironic to reject the development of tech in this area and require a human being to review millions of pieces of content.  (She says she isn’t saying there isn’t human review now—or that there is.)  Google has automated filtering now, with a matching process. If there’s any dispute, concern, or non-Fox content, the parameters are determined and negotiated with whoever’s implementing the filter (not, that is, with the user posting her remix). Tech makes the first match, potential disputes go into a queue for review, and she expects most people review those manually.
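(A rough sketch of the two-stage flow Valentina describes: an automated match first, with ambiguous or disputed items routed to a manual-review queue rather than blocked outright. The hash-based “matching” is a crude stand-in; real content-identification systems use perceptual fingerprinting, and the reference set and title heuristic below are invented.)

```python
# Two-stage sketch: automated match first, humans review the ambiguous rest.
# REFERENCE_HASHES and the title heuristic are invented placeholders; a real
# system would use perceptual fingerprints, not exact file hashes.
import hashlib
from collections import deque

REFERENCE_HASHES = set()          # hashes of known protected files (placeholder)
review_queue: deque = deque()     # items a human should look at

def handle_upload(data: bytes, title: str) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest in REFERENCE_HASHES:
        return "blocked: exact match to a reference file"
    if "full movie" in title.lower():
        # Weak signal only: queue for manual review instead of auto-blocking.
        review_queue.append((title, digest))
        return "queued for manual review"
    return "allowed"

print(handle_upload(b"...", "My vacation video"))      # allowed
print(handle_upload(b"...", "Life of Pi FULL MOVIE"))  # queued for review
```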

Seng: the data show otherwise.  Can’t imagine an internet agent making 250,000 mistakes over 4 months while claiming that one of its employees owns Microsoft’s copyrights. Not suggesting a human agent should be present for every notification, but more care should be taken.  We are talking about probabilities—Bayesian searches.

Valentina: it depends on the kind of file. You can match hashes on BitTorrent, which accounts for half our takedowns.

Seng: yes, but they also use keywords, so review sites of BBC shows get taken down too.

Valentina: even Google says this is the exception. No question that verification varies, but they do more than keyword searches. We have no interest in taking down others’ content and will do lots of verification.

Seng: certainly, but his data have shown that there are agents and there are agents; quality varies.

Stanley: tech can mitigate (or amplify) erroneous notices. No easy answers.

Q: Isn’t the Pirate Bay a repeat infringer?  How many notices does it take, and why hasn’t it been completely removed?

Keller: the DMCA speaks of terminating accounts of repeat infringers; the DMCA doesn’t require us to silence people on the internet.

Q: so your policy is to let this pirate site continue?

Keller: has answered that.

Von Lohmann: We’ve worked with Fox on many issues, but some of these claims are misleading. We don’t autocomplete “Argo torrent,” and anyone can look up how the relative frequency of that search has varied; the results for Argo alone don’t appear to be infringing as far as he looked (4 pages).  The keyword point: when you type “Argo torrent,” you will only get results for pages that contain those terms; a page that doesn’t have “torrent” on it won’t show up.  So showing that result is misleading.

Injunctions against intermediaries without a showing of wrongdoing, as in Europe: that was introduced in the US; it was called SOPA, and it was roundly rejected. If Fox’s position is that SOPA is the right solution, don’t pretend that “Europe” is the answer; say outright that SOPA is right. Also, in Europe, no court has ever ordered a search engine to remove a site, and if one did so, that would raise serious concerns.

Valentina: this is her personal view. Google was sued in France and voluntarily removed results.  (Von Lohmann says Google is in litigation in France and didn’t remove results because of the litigation.) SOPA/PIPA were misunderstood—they weren’t meant to shut anything down, but to bar foreign pirate sites from receiving US services that facilitated piracy. Her reason for showing “Argo torrent” results is that Argo isn’t distributed via torrent.  Thus, Google knows that a copy of the movie available as a torrent file wouldn’t be legitimate.  She intended to make the precise point that you know Argo torrents are available and could do something about them.  (But what?  Remove all search results for “Argo torrent,” now including this webpage?)

Q: Audible Magic requires expensive licensing.  That could have an impact on the survival of a business as well as on a startup’s ability to innovate.  Not a great thing.  Google isn’t necessarily doing startups any favors by developing filters that could become standard technical measures that would have to be used by others or they’d lose 512 protection.  How should we think about the disadvantage to startups?

Keller: A tremendous disadvantage if required. This was part of the SABAM courts’ reasoning; a filtering requirement would be great for entrenched incumbents; YouTube spent $30 million on Content ID. We don’t think that Content ID is a standard technical measure; it’s very YouTube-specific and can’t be used automagically to filter “the internet”; it depends on the content being hosted on YouTube and on being able to notify various parties.

Valentina: Yes, filtering can be expensive. We’ve had cooperative negotiations with sites on usage rules. If you limit access to a single account holder/password, that might minimize the amount of content that has to go through the filter. Filtering is the cost of doing business if you don’t want to risk infringing/violating the law.

Q: comment on CCI (Center for Copyright Information, 6 strikes/alert system) procedures—will this result in a lot of arbitration?

Valentina: It’s been in place for a while as an educational program—account holders get notices of infringement happening with their account, sometimes informing parents. Mitigation measures are up to the ISPs.  A similar program has been implemented in France, and P2P piracy went down without corresponding increases in other forms of piracy, like streaming—we think it’s been a successful educational program, supporting people’s interest in paying for content.   We set up an independent review board for assessing a complaint, with due process. It hasn’t been troubling in France; she is optimistic here.

Q: is this King Canute trying to hold back the tide?

Stanley: Macbeth said, “I am in blood stepped in so far that, should I wade no more, returning were as tedious as go o'er.” There is no turning back. The ways we disseminate and consume content have changed; 10 years from now will be different, and she hopes that it will be a compromise.

Keller: thinks the economics will change so that the legal stuff will be easy to buy (but the questioner says he wanted to know whether the law will be enforceable).

Valentina: copyright law will continue to be enforceable. Yes, we are providing consumers different ways to access content in the cloud with UltraViolet.

Seng: looked at comments from people exchanging pirated files on Usenet.  Apparently one internet agent is really good at takedowns on Usenet—20 minutes.  We’ll see higher levels of enforcement.  Market forces will also give providers reason to see that demand is met, but whether this means that prices will go down is unclear.
