Saturday, April 30, 2016

Freedom of Expression Scholars Conference at Yale: Search Engines and Free Speech

Heather M. Whitney & Robert Mark Simpson: Search Engines and Free Speech Coverage
 
Discussant: Heidi Kitrosser: Are search engine results covered by the 1A? The larger question of how we decide what's salient is part of the paper. It surveys the relevant 1A case law about algorithms and their results. In the handful of cases decided so far, courts have basically accepted a rather simple analogy between the algorithms that pop out results and the type of editorial judgments at issue in cases like Tornillo and Hurley. The issue has mainly come up in unevenly matched cases: pro se or poorly resourced litigants against tech giants.
 
There’s a long temporal and intellectual chain between the initial creation of an algorithm and the spitting out of results, a disconnect that makes it not at all clear that the programmers are the “authors” of the output in the way that one is the author of a parade or an editorial page. Some algorithmically mediated content might be analogous: for example, Stuart Benjamin offers a giant digital billboard calculating the national debt, where the calculation is algorithmic but the overall message is expression. It’s just not always the case that algorithmic results are the product of editorial judgment. This demonstrates the weakness of analogical reasoning generally in determining 1A coverage. The point is to get at the core Q: what are the values, reasons, and normative concerns that lead us to accord special protection to speech (or other things)?
 
Speech/conduct distinction doesn’t resolve the problem; go back to free speech theory for coverage of new tech outputs.  Pluralistic/democratic participation values: Consumer protection laws and antitrust laws might appropriately cover algorithms; algorithms themselves may repress speech if they submerge certain perspectives.
 
Reflections: push them to consider more of a defense of democratic participation as a focus, and/or run through other theories and explain how those would work.  Are you applying theory to determine protection, or to determine coverage?  If you use your theory to determine protection, then aren’t you collapsing coverage and protection?
 
Simpson: It’s more difficult to make decisions here than the courts and some of the scholarship would have you believe. There should be no ready conclusions w/r/t coverage. Algorithms are neither inherently similar nor inherently dissimilar to editorial judgments.
 
Whitney: Analogies by themselves aren’t doing the work, because you need to figure out what makes the analogy relevant. Theory can help figure out whether we’re ever going to have limits on the deregulatory turn. That is a view one can take of the 1A, but then you have given everything 1A protection (or coverage).
 
Jim Weinstein: Courts have intuitive, unarticulated theories they use when they analogize; it’s good to bring them to the surface. You suggest a possible fairness doctrine, but even if something isn’t covered, the justifications themselves can trigger 1A concerns: chocolate isn’t speech, but if the gov’t wanted to regulate chocolate because consuming it made people more likely to oppose the gov’t, that would be a problem. Gov’t intent to avoid an echo chamber online: same Q. [Note that this position may imply that federal mortgage insurance is a violation of the First Amendment; the justifications at least included that being a homeowner increases people’s involvement in the community, self-regard, and motivation to work; cf. the more recent discussions of the “ownership society.”]
 
Deven Desai: If the NYT started to use algorithms to take on Google News, would it lose protection? Facebook, Yelp: is there a search engine difference or not? TripAdvisor and Yelp v. Google: you can’t get around Red Lion that easily. If there’s no scarcity, then people can choose something else.
 
Simpson: Algorithms on their own don’t determine anything about coverage. We’re making a claim about how courts shouldn’t be thinking about algorithms, not about how they should be.
 
Enrique Armijo: On accusing Google of inconsistency on net neutrality: the ISP argument is that they want to reserve the right to edit, while Google’s position is that they edit all the time, so he thinks there’s a fair difference. Regulating GM more easily than Google: but what about Target? When I go to Google, I’m looking for speech, but when I go to Target I’m looking for avocados. (Hmm. Many times when I go to Google I’m looking for, well, avocado-colored suits at least.)
 
Jim Tsetsis: What about the Press Clause?  If we treated it as having a separate meaning, as the SCt has not, then we wouldn’t have to sweat so hard about the difference b/t GM and the NYT, and could think better about Google. 
 
Whitney: The Hurley line is an issue; that’s not traditional press. It would require quite an intervention from the SCt; would Hurley come out differently even if we separated out the press? And the search engines would be fine saying that they were like the press, using the editorial judgment argument only as a backup. The issue is still analogical: what makes something a “press”?
 
Robert Corn-Revere: It’s the organization of information, so what about that isn’t protected by the First Amendment?
 
Whitney: The outcome might be coverage/protection; there are coherent accounts that would include search engines, but also coherent accounts that wouldn’t in certain circumstances. We need a course correction, or everything becomes speech and there’s nothing special about a bookstore v. Target. The result of a very expansive theory: the things that people sell, like search results, flow from expressive decisions; if those choices are expressive/organize information, then the product is speech, which goes off the rails.
 
Q: Millian harm principle: w/o the 1A, the constraint is rational basis, which doesn’t require the harm principle. One way of understanding the 1A is as demanding something more than the harm principle (a certain kind of harm); or, given the baseline of rational basis, you could just demand that the harm principle be satisfied. Even if credit ratings are speech, you might have a harm-based justification to override them, in which case the Q about what justifies regulation of search engines would be based on protection, not just on coverage. Compare the 4th Amendment context: analogies the SCt used fairly easily to justify things (video surveillance is like looking at people) are now at risk of abandonment (a cellphone isn’t like other stuff in your pocket). Disanalogies can also be recognized: a leap in scope.
 
Whitney: The 4th Amendment comparison is a good one—resort to principles rather than analogies.
 
Simpson: True; it might be covered but not protected. For our purposes here, we want to remain agnostic on the result of a protection decision within the scope of covered speech, because coverage decisions also have important implications for litigation.
 
Balkin: What search engines do more than anything else is serve democratic competence (Meiklejohn); a Meiklejohnian version of the 1A would clearly lead to coverage for Google, leaving protection as the only remaining decision. Many other algorithms would also pass Meiklejohn’s test for coverage. Only an autonomy theory would say that only humans are bearers of speech, and distinguish between humans and their tools.
 
Simpson: We don’t think it’s as clear as that. When I go to a search engine, what I think I’m seeing is a purely mechanically generated result. In cases against search engines, the claim is a consciously gerrymandered result, which is not that. If Meiklejohn’s theory is about members of the demos having access to the information they’d need in order to be participants, then the claim is that search engines, at least some of the time, distort exactly that information.
 
Balkin: Meiklejohn would never have said that because the info you get is distorted, the info is not protected. You need access so as to make your own judgments. Lots of information cooks the books. You’d have to argue that search engines have a relationship to the public different from everyone else’s: information fiduciaries, with special duties to the public. Grimmelmann: search engines as advisors. A special duty by the nature of the service = the ability to regulate in the public interest; otherwise they’re in the same boat as any other info providers who cook the books (to mix a metaphor). If a newspaper gives you a bunch of biased headlines, Meiklejohn has no problem with that. Only if an entity had a special duty to the public could it be regulated.
 
Corn-Revere: You’d have to reverse Tornillo for that.
 
Whitney: Another possible move is new conceptions of autonomy/libertarian paternalism. Things that distort autonomy should not be unproblematically approved.
 
Balkin: They don’t try to apply nudges to First Amendment values. Imagine a nudge to register all 18-year-olds as Democrats.
 
Whitney: Democratic competence can have multiple meanings: people cannot always detect falsity/misleadingness. 
 
Balkin: But then you’re taking out a huge swath of 1A doctrine.
 
Whitney: Accept that, though we are not arguing for that here.
 
Simpson: info fiduciary argument is worth pursuing: we’re trying to do more to theorize the special role that search engines have.
 
Sandy Baron: Q of responsibility for output in tort law. Google doesn’t want to be responsible in that sense; can you distinguish them in 1A protection/responsibility?
 
Whitney: There does seem to be a tension between claiming 1A protection against antitrust liability and claiming §230 protection on the ground that it’s not their speech, that they’re a neutral intermediary! 100% agree there’s an issue here. Identifying as a speaker is useful in some cases, harmful in others.
 
Q: Facebook isn’t the same as a search engine: a search engine tries to be objective/universal, presenting information as relevant, while FB is more of a community; very different waters for the tech community.
 
Andrea Matwyshyn: Not everyone would agree w/that. 
 
Q: but FB will remove hate speech/terrorist content.
