New York University School of Law, Platforms and Power Roundtable
Sponsored by the Information Law Institute, NYU School of Law & the Engelberg Center on Innovation Law & Policy, NYU School of Law
Co-Organizers: Barton Beebe, NYU School of Law; Helen Nissenbaum, Department of Media, Culture & Communication, and Department of Computer Science, NYU; Katherine Strandburg, NYU School of Law
Platforms as Fiduciaries
Moderator: Helen Nissenbaum
Values in design: platforms shape our interactions and transactions, so we need to engage with their nature.
Facilitators: Tarleton Gillespie
Actors and categories: interested in when a term or category begins to emerge/take hold. Will not offer a taxonomy of platforms, but notes that YouTube, Flickr, app stores, and so on now include the term in their own self-descriptions. It works in a business sense and seems to have taken hold in the popular vernacular—it began to make sense to people. What’s up with that? Language is tricky. A term like platform, cloud, or portal (back in the day) makes sense in a certain moment because it does a lot of work—it speaks to business objectives, legal/political discourse, and promises made by companies to users. It opens up a set of services that aren’t computational platforms but are doing something else: offering everything the service wants to do as a platform.
Broad swath of content generators—everyone (YT) or not everyone (app store), but still broader than the company’s own employees. The relation between producer and distributor is not managed primarily through salary or shared professional norms; it’s managed largely through the promises and expectations of a commercial info service and thin, often disregarded contractual obligations. What obligations are offered to each user? But also ask how these large-scale providers who want to offer a vision of the platform present it not just as a tool for individual speech but as a platform for public discourse itself. We hold TV networks to a certain level of public interest; here there’s no limited spectrum, but we could think about obligations to the public as well. As a service steps up the role it plays and the role it claims to play, we can ask what that position is and what obligations we’re ready to impose, and those can go beyond contractual promises to the design of public conversation.
Frank Pasquale
Compare Tim Wu using the electric grid as a model—anyone can plug in, and you don’t have to give the grid a cut. Battle of metaphors: Christopher Yoo comes back and says the internet is more like FedEx, where if you pay more you can expect to get more. What is the ideal theory of what these things should provide, and how should we regulate with that in mind?
Public options in health care: we could say that maybe we can’t expect for-profit companies to provide universal service, but we can intervene and provide it ourselves. Here, though, we can’t expect a universal public option. Another health care analogy: some said “let Wal-Mart sell any health plan it wants—if someone wants to pay $10/month and get $100 in benefits, ok.” But that doesn’t work, so instead we have benefit minimums. He’d advocate that kind of thing here too—minimum levels of service/cost caps. Disclosure is also important.
Brett Frischmann
Don’t jump from platform concept to obligations—need to make the connection between the capabilities of the platform and benefit/harm we want to encourage/regulate. Maybe obligations arise because of the use of the term and what it makes people think, but that will depend on the context.
Tal Zarsky
Concerned about ability of platform to manipulate the perspective of the public. (My note: consider YT’s ability to remove various disfavored types of content from its popular lists—they might be on the site, and they might get millions of hits, but someone looking at popular videos would never find them.)
Can transparency be enough? People tend to agree to whatever terms are provided—we need to know more about this. The assumption is that there are serious problems: difficulty of exit, lack of competition, weak response to signals. It’s extremely difficult to regulate these very sophisticated entities, always 1-2 steps ahead of their regulators. But platforms are fearful of public response, and fearful of competitors that don’t exist yet. Google is concerned about competitors and thus pays attention to voice, which puts pressure on platforms to do what the public wants. Maybe transparency is sufficient.
Patrick Ryan
Open spectrum as an underlying issue. Government has the ability to act but hasn’t done much with, e.g., technology enabling spectrum sharing. The public trust doctrine, as developed in environmental law, might offer a model for enabling access to spectrum, because regulators are doing nothing other than planning to hand exclusive use from one company to another.
Gillespie: compare spectrum as resource (with public trust) to information as resource where the platform has had a role in collecting or encouraging the production of that information—does the generation of a resource justify a claim of public interest?
Danielle Citron: consider the metaphor of a company town. This can tie into the public trust concept. Is that a useful instrument, or too blunt?
Michael Geist: Regulatory question is different in Canada than the US. “We” can’t always do things that “you” can do and vice versa. Live discussion in Canada: Netflix should pay into the cultural subsidies that broadcasters have long paid. How do we make Netflix do that? On the other hand, Canadian FB users have enhanced privacy protections. What happens when a regulator in another jurisdiction wants to make the platform do something we don’t like?
Cathy Strandburg: Does “platform” make a promise? Says something about one’s relationship to the platform—I’m going to use it as a tool and stand on it, and have choices about what to do. Transparency in privacy policy/ToS is just not there. Maybe we can’t regulate that because of free speech, but there are implicit promises in the discourse.
Pasquale: transparency can make matters so opaque that nobody but experts can understand them. Therapeutic disclosure: force the use of certain defined terms—to use, for example, “secure” to describe your services, you’d have to meet the definition. That can be a way around infinite complexity.
Frischmann: when discussing voice and exit, we have to recognize that there are multiple issues at once—voice on one issue may not be enough to produce change if the platform can be confident you won’t really go anywhere. You have to investigate whether there is an effective market for the particular type of concern—it’s not clear that consumer demand for privacy will be enough to lead to a socially desirable result, given the other things consumers are balancing.
Zarsky: Two interconnected conversations. First, the definition of platforms: arguments that email was like a company town were kicked out of court easily. There’s a continuum, but you don’t really live in or get your medication from the virtual world. Are the positive externalities of health the same as those online? Given the constraints on government action (budget), we have to give up on something else to regulate here. Second, the details of particular platforms: public discussion on Wikipedia goes on at the top level and then cascades down. (I think he means that a few engaged users can advocate enough to get changes even if the majority are indifferent—Nissenbaum says that the question then is how you get that dynamic activated, which it isn’t now with many issues.)
Greg Lastowka: A portal seems like a thing you move through; an intermediary is a person doing transactions between two parties. A person can’t be a platform, which is a space where you actually stay, a physical object. Is cyberspace a place? The platform concept seems to assume the answer, as does the concept of a company town, or the idea that FB is a country, or that environmental law provides a model. With increased geo-local info being integrated online, “platform” might reflect increasing awareness that online is a space.
Me on the possibilities of regulating implicit promises given the First Amendment: I’m going to say what I say everywhere: advertising law has a lesson for this! Under consumer protection law, you can’t take back promises made in the body of the ad in the fine print. Consumer expectations matter no matter what the contract says, though occasionally we do let providers modify those expectations with sufficiently noisy disclosure or when the expectations are sufficiently inchoate (e.g., with respect to the duration of a warranty). We often do this through legislation rather than case-by-case analysis of what the implicit promise is—e.g., the Magnuson-Moss Warranty Act. Consider the recent case of Webloyalty’s now-banned business model of getting credit card info from a second party instead of directly from the consumer. Congress found that this was unfair and misleading to the consumer and that consumers must be required to re-enter their card information to take advantage of the third-party offer. Under the “if you could theoretically understand it, it’s not misleading” theory, it might well be a violation of the First Amendment to prohibit the Webloyalty business model, because Webloyalty’s disclosures are strictly truthful if you read them all the way through. But if you agree with me that, at a minimum, a factfinder can find Webloyalty’s business model misleading despite the disclosures, then we have a fair amount of freedom to operate, at least until the Supreme Court abolishes the commercial speech doctrine.
Gillespie: “Platform” can easily become a red herring—it’s not necessarily the word itself but the relationship/dynamic offered. Platform is definitely a shape metaphor; it carries an idea of what that space is—the flatness is an important element, compared to the “through” of portal.
Kerr: what looks like the virtue of a flat space, though, has hidden characteristics that allow control by the “owners” of the platform, who aren’t the ones providing the content of/on the platform. And thus the idea of fiduciary is important—what are the obligations if any of the platform host?
Frischmann: economists use the term “platform” to jump immediately to two-sided networks, which assume that the platform can perfectly intermediate if given total freedom. Who gets to define what the term means? Let’s think of platforms contextually, which might lead us to concepts of fiduciaries or co-creators.
Pasquale: Nissenbaum asks, what’s the worst-case scenario for platforms? Zittrain on generativity: the worst case might be lockdown. Or bias v. objectivity—the problem there is that platforms claim First Amendment rights to effect their own preferences about who is allowed to speak on the platforms. This gets back to the company town issue—jurisprudentially, there are competing speech claims. His worst-case scenario: self-dealing/harm to competitors.
Ryan Calo: knowing the secret sauce of these platforms poses risks of government intervention: once the gov’t (or the public) knows how it works, political pressure can be exercised to stop unpopular things, which might be censorious.
Pasquale: Stealth marketing: this area of law needs more development.
Calo: but that’s an ex ante rule—you have to disclose connections. Not an ex post rule that you have to show how your whole system works.
Zarsky: but dictators are secretive—be aware of political economy. (I’m not sure this is entirely true. Many repressive regimes are public about their repression; China enlists neighbors to monitor each other.)
Oliar: Fiduciary/public trust means there’s someone on the other side who wants to (or should want to) do you good. Shows that the platform has its own agency/interests. With lawyer/client fiduciaries, the fiduciary should act in the client’s best interest toward the rest of the world. But platforms have multiple interests—their own, users’, advertisers’—so they can’t be fiduciaries of everyone. (But lawyers have similar issues; we don’t think they’re fiduciaries of their opponents, or of the owner of the building in which they’re tenants. We might have to think of conflict of interest differently, but I’m not sure this makes the idea of duties to users impossible.)
Salil Mehra: ideas of process—not just transparency, but can you review decisions that affect you? That might be important. Two-sided models of platforms are good for thinking about payment but not so much for thinking about other relationships. Consumer protection law: the models we think about come from telecom/payment systems, but where value is added by the user—FB, Wikipedia—the platforms have to respond to heterogeneous perspectives/values/expectations of users.
Gillespie: The process of removing content is the kind of thing that’s hard to understand, especially in code. May be able to discuss that while recognizing heterogeneous preferences about content.
Ryan: the threat of regulation drives change/improvement, maybe more than regulation itself does. Behavior within a company differs in compliance mode: compliance mode is more frozen.
Pasquale: we tend to trust technical solutions over political ones, but many of these questions, even what is spam, may be values questions which we’d really prefer to pass off to computers but probably shouldn’t. If you want to regulate, you have to think about explaining why the platform is a place for speakers and not (just) a speaker itself. Otherwise the First Amendment will cut off most regulation entirely.
Zarsky: challenge is bridging theoretical definitions and practical solutions that won’t allow players to design around the rules.