Saturday, May 07, 2011

NYU platforms conference part 3

Platforms as Regulators

Moderator: Florencia Marotta-Wurgler

Facilitators: Danielle Citron
Wants to develop an account of the harm of online hate speech, including its impact on participation by targeted groups. Paper: urges intermediaries to be more transparent about how they define and enforce hate speech policies, and urges them to be cautious because their spaces are spaces of civic engagement.

Increasingly seeing hate against individuals and groups on mainstream sites. FB: “Kill a Jew Day,” which told members “you know what to do” and displayed Nazi paraphernalia; it had thousands of members. Similarly, YT had videos on how to kill [slurs]. Hate speech undermines civic engagement and intimidates people into going offline. Children are particularly vulnerable to this. Law only covers true threats and intentional infliction of emotional distress. But intermediaries/platforms are not state actors and aren’t bound by the First Amendment. Some sites don’t police hate speech and even encourage it, but many mainstream intermediaries do prohibit it in their ToS, often in very vague terms.

Counterspeech is difficult at this scale. Whack-a-mole is also a problem. Platforms ought to target hate sites aimed at children, similar to Google’s explanation of the search results for “Jew.” Example of such a hate site: martinlutherking.org, a rabidly racist site that spreads falsehoods about MLK, pitched to children. It will come up on the first page of a search, but that’s a terrific opportunity for counterspeech through ads or other responses. How do we marshal communities to respond? (I recall Yochai Benkler talking a lot about kuro5hin; these days many sites let users rate comments up/down.)

Sean Flynn
Why was posting the counterspeech the right thing for Google to do? Is it because Google is a private entity with rights to editorialize, or because Google is in fact a quasi-public entity with a duty to combat hate speech? Citron is using more of the public mode. “Intermediary” has a more private connotation, but “platform,” “carrier,” “commons,” FB as a country, the electric grid, etc. connote public entities. What are the implications? Consumer protection model: enforce consumer expectations about the platform. Tort model: hate speech might fit into this. Public entity model: the problems are supposed to be solved with participation/voice—work on Wikipedia’s due process dispute resolution procedures fits into this.

Could give legal benefits for acting in the public model: noncommercial entities might get special copyright treatment. Transparency/disclosure—allowing consumers to exit (with their data, I assume)—might also be regulated to require a certain kind of due process. Public service obligations: floors for services, ceilings for cost, universal access norms. Those might fit into a category of platforms-as-public.

Private side: policy accepts the private editorial role. The more platforms filter content and editorialize, the more we might subject them to the rules for a TV broadcaster with an editorial role. Intermediary liability/tailored safe harbors would come in there.

Michael Geist
Wikileaks: note the use of financial platforms to try to starve Wikileaks. Credit card companies ceased accepting donations; PayPal cut off donations as well. Interesting question: whether this was done at the US government’s request. Companies drop lines of business under the threat of regulation.

If a commercial site offers potentially infringing services—Zediva, which streams DVDs—imagine that the copyright owners want financial intermediaries to stop taking payments for the site. That is quite powerful as a threat. So are attempts to remove the site/make it inaccessible—Amazon’s cloud dropped Wikileaks very quickly. That was easier to deal with because there are a lot of cloud providers out there, and others were able to resist DDoS attacks.

.xxx: at a platform level, countries are discussing blocking that top-level domain entirely. Another form of regulation: channeling (as if people won’t be able to get porn). Platforms are obvious points of leverage for government regulation, which raises important questions of transparency and due process.

Ian Kerr
Distinguishing platforms from people: the same idea that’s behind “code is law.” But the relationship between the so-called technological platform and the ethical/legal/social is complex and interwoven.

He participated in a discussion with lawyers about FB—law firms want to be able to know everything a candidate has ever done on FB, as if that were a matter of national security. His role: to say shame on you. Three different law students showed him waivers for summer jobs, each asking for massive disclosures, in some circumstances including usernames and passwords. At least you can change your password after that; one version of the waiver allowed a third-party provider to scrub FB data, including material from years past. In one instance it was a private sector firm, and the other two were government organizations. Response from people in the insurance industry: of course we’re not interested in a broad ability to cull information; that would be counter to the valuable uses we want to make. The next day, a story broke in Canada about a woman who’d posted pictures of herself on vacation in Mexico and was unilaterally denied short-term disability benefits because the insurer found the pictures on FB. She had carefully set her privacy settings to friends-only for everything; somebody working for the insurer found a way to become her friend. The pictures were in the paper because she sued.

So there are a lot of relationships between platform and regulator. In Canada and the US, we see this arising in private litigation on any subject: is there anything on FB people can pull up to aid them? The approach adopted in a series of Canadian cases has in essence been that the rules of civil procedure in most jurisdictions allow discovery of any document deemed relevant. Linchpin: courts have generally decided that FB should be understood as a document. As soon as you understand FB as a document, all the attention on the platform as surveillance looks very different.

Ira Rubinstein
Privacy issues: the emphasis on Do Not Track is a platform debate. But it’s also important not to conclude from this that privacy issues are limited to platforms. FTC approach: let industry try self-regulation first. This approach fails to seize a moment when Congress seems poised to enact baseline privacy regulation, which is necessary to create principles that allow us to regulate platforms for privacy.

Tension between regulation and self-regulation: the threat of regulation isn’t enough to get a baseline. Carrots and sticks: e.g., a private right of action for privacy violations unless a firm is covered by a safe harbor. That’s the kind of law that could keep platforms honest in developing self-regulatory codes, which they’re already experimenting with.

Glenn Brown (Twitter): Since these aren’t squarely legal questions, there’s no external guide for making a judgment. YT would have lawyers and nonlawyers engaging in a rulemaking process, sort of like a court of appeals, trying to set precedent; an interesting area for research. For Citron: how could you get platforms to have training in rulemaking/jurisprudence? The Saddam Hussein execution cellphone videos ended up on YT: lawyers and policy people had to decide what to do. There’s also the difficulty of dealing with a userbase that is so large and so sophisticated at finding the line on how much sexual content and how much copyrighted content is allowed.

Citron: there’s a yearning for ALI-type principles. That would be helpful, but if you have a strong sense of the harms you want to avoid, you can manage that in a more principled way.

Nissenbaum: to what extent can users negotiate with the platform? TrackMeNot—there was a lot of discussion about whether it was legal for users to change the terms of interaction this way. Platforms protect themselves through ToS; the question is whether any kind of term the platform wants to erect is okay.

Kerr: EULAs are the rule; the only time that’s tweaked is when there’s sufficient user backlash. When UOttawa students launched a privacy complaint against FB and FB agreed to change the settings, that was briefly a victory, but FB turned it to its advantage by setting the defaults in a way it knew 92% of users would never change. Computer programmers are the authors of their own universe. If there’s no legal power over that, then it’s tough to do anything other than hope for backlash.

Geist: Not optimistic about user backlash. FB says that if 30% of users objected it would revisit a ToS change; that’s ridiculous—roughly 200 million people would have to actively object. FB has been responsive to regulators in Canada, and to the prospect of US legislators looking under the hood. It increased the level of encryption when Tunisia was using an ISP to examine everything going back and forth.

Gillespie: Distinguish between platforms that want to regulate, platforms that feel pressure from governments to regulate, and platforms that are forced by law to regulate. This relates to whether/how we think of platforms as public actors. The public extreme: they shouldn’t do anything they’re not required by law to do. The private extreme: they can do anything they want. We need to think about parallels—public street, shopping mall—to figure out where we want them to stand, especially since the people actually making the decisions to pull content are not going to be trained in jurisprudence. Note that entities cutting ties to Wikileaks used the justification “we don’t support illegal content” well in advance of any determination that what Wikileaks did was illegal.

Frischmann: Worthwhile to distinguish between layers—very uncomfortable with asking network connection ISPs to make any judgments about what to carry, less uncomfortable with FB.

Citron: Agreed. Think of censorship by proxy or surveillance by proxy as an important and different question from providers making decisions on their own. Regulation should aim not at treating them as public actors but at ensuring transparency.

Flynn: consider the international dimension. In copyright: developed countries attempt to lower the costs of enforcement and raise the penalties for infringement until perfect enforcement is achieved. Effects on developing countries: big problems for access. Most educational texts cost the same in poor countries as in rich ones; students can’t afford them, and thus they go to online sites that share those texts. If you have perfect enforcement, with international agreement that every intermediary is liable for any infringing content, you could take down entire ISPs. This has huge consequences for poorer countries.

Gillespie: If we ask platforms to regulate on behalf of a law (which we do), they either succeed or fail. We make them regulate against child pornography. But then we get into discretion—should YT remove the Hussein videos, or should Apple remove alternative medicine/religion apps? There’s no choice but to make a decision: either the content is allowed or it isn’t. Once we start to shape whether you can access Wikileaks, every choice is consequential; every choice is a regulation by the platform.

Strandburg: Suppose FB decided to be totally transparent about which pictures were allowed—administrative rulemaking with comments allowed. Should I be happy with that? In the government context, that’s not sufficient for First Amendment-protected speech—it’s still protected even if there’s a procedurally fair vote to suppress it. When does an entity become large enough to say that it can’t pick its mechanism of content curation? It’s easy to agree with Citron that some content shouldn’t be on FB, but who decides?

Geist: we do have evidence of what transparency in following legal requirements could look like: the DMCA, where chillingeffects.org helps make takedowns transparent for Google and other contributors; chillingeffects.org has also recorded when some hate speech came down in Canada. Twitter disclosed requests for information about Wikileaks; how many other entities have received such requests and not said anything?

Citron: Notes that FB has had ways of notifying you that content has been taken down, at least for some time.

Gillespie: FB also has “Everybody Draw Mohammed Day,” invisible to people in Pakistan. Does it solve the problem to create an archive that is invisible to you? It’s there for many other people. Tweaking the archive for different users seems clean but may be very problematic: if you believe the content should/shouldn’t be there, then having it there and not there at the same time is not satisfying.

Flynn: Once you start editing, you look more like you should be liable in the way that other content deciders are for things like defamation/copyright infringement. It’s hard to filter the hate speech and not be required to filter the copyright-infringing material. (Absent some governing legislation, which we have in the US, though it cuts the other way.)

Rubinstein: if users can influence the platform, disclosure may be satisfactory. For something like privacy, disclosure is a failed model. Have to specify the regulatory goals.
