Tuesday, July 02, 2024

NetChoice and Calvinball: Initial thoughts

I understand if you don’t think that the First Amendment is an area where SCOTUS is really doing “law” as we were taught it, but as a distraction for myself I have been thinking about (1) the idea that facial challenges are strongly disfavored and (2) the idea that content-based speech restrictions are presumptively unconstitutional. (2) might well be on its way out anyway, and I think (1) will speed its demise, or at least make (2) (hereinafter Reed) something of a dead letter.

Reed is probably still good law as to sign restrictions like those in Reed, which were content-based on their face and applied only to content. I will note, though, that the rule in Reed prohibited the display of outdoor signs without a permit but exempted 23 categories of signs from that requirement. It seems like (with severability) the exemptions are content-based, but the permit requirement itself isn’t.

Beyond Reed, what remains? Consider a law that bars speech that creates a public disturbance. Presumably this is not facially content-based, because a public disturbance can come from volume alone, regardless of content. However, if applied to speech that creates a public disturbance because of its content, I would guess that the application has to survive strict scrutiny. And doctrines of vagueness and overbreadth, and their concern with chilling protected speech—if we still care about that and not just about chilling vigorous presidential action—might also bear on the validity of the law, which could therefore still face a facial challenge.

The narrow tailoring inquiry of strict scrutiny (or the reasonable tailoring inquiry of intermediate scrutiny) therefore might come at a different point: when we’re comparing the permissible applications of the law (overly loud noise) to the impermissible ones (speech that creates a public disturbance because of its content). According to NetChoice, we are now supposed to figure out if there are too many impermissible instances compared to permissible ones. How? Tailoring suggests itself as an answer.

But it’s not the only answer. Consider the following hypothetical: Free Speech Junction provides evidence that, under its public disturbance ordinance, it has issued 50 tickets for noise-based violations and 1 ticket for content-based violations. Does that mean that the impermissible applications are substantially outweighed by the permissible applications, such that this is a facially valid ordinance? Its neighboring polity, Nosy Neighborhood, issued 100 tickets for content-based violations and 20 tickets for noise-based violations during the same period. Is its identical ordinance facially invalid? Or should we look at the state-level or national data to figure out how to “count” permissible applications?
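To make the counting problem concrete, here is a minimal sketch using the hypothetical numbers above. The aggregation choices and the "substantially outweighed" cutoff are my own inventions (the Court supplies no number); the only point is that the facial-validity answer flips depending on the unit of counting.

```python
# Hypothetical ticket counts from the example above, per jurisdiction,
# split into noise-based (permissible) and content-based (suspect) applications.
enforcement = {
    "Free Speech Junction": {"noise": 50, "content": 1},
    "Nosy Neighborhood": {"noise": 20, "content": 100},
}

def facially_valid(noise: int, content: int, threshold: float = 0.5) -> bool:
    """Toy test: the law survives if suspect applications are not 'substantial'
    relative to permissible ones. The 0.5 threshold is an invented stand-in."""
    return noise > 0 and content / noise < threshold

# Counting jurisdiction by jurisdiction, the identical ordinance comes out both ways.
for town, t in enforcement.items():
    print(town, facially_valid(t["noise"], t["content"]))
# Free Speech Junction True
# Nosy Neighborhood False

# Aggregating to state- or national-level data condemns both identical ordinances.
total_noise = sum(t["noise"] for t in enforcement.values())
total_content = sum(t["content"] for t in enforcement.values())
print("Aggregate:", facially_valid(total_noise, total_content))
# Aggregate: False
```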

Some of the Court’s discussion in NetChoice seems to suggest that we should count services, or even subservices—if the law as applied to Uber or Gmail isn’t content-based, that is a strike against facial invalidity. One of the reasons this suggestion seems strange on its face is that Texas and Florida obviously had no interest in covering Uber; counting by service rewards overexpansive laws, since sweeping in unobjectionable applications pads the permissible side of the ledger. Maybe here we should give extra weight to what the legislature thought it was doing, since we know that its key aim was impermissible per the Court majority.

Instead of counting services, which seems a lot like counting the number of citations issued, perhaps we could count functions. The Court notes several differences among (probably) covered platforms. But even there we face some problems: do Uber and Etsy perform the same function (selling off-site goods or services) or different functions (selling rides and selling tangible and intangible goods)? Is Discord providing a chat service, a UGC feed, or something else? Do Discord and Reddit do the same things for First Amendment purposes? (Disclosure: I submitted an amicus brief for Discord, which was cited by one of the bad concurrences, yay.)

Maybe we could make the same conceptual cut I made for the public disturbance law: content moderation done for expressive purposes versus content moderation done for nonexpressive purposes. Stated that way—or, even worse, stated the way the legislatures did, “censorship” done for expressive purposes versus “censorship” done for nonexpressive purposes—it’s hard to imagine how the latter might dominate enough to save the law. Perhaps Uber and Etsy do remove a bunch of content for nonexpressive reasons, but even their removals are often going to be because of pure content (Uber drivers or riders who engage in racial slurs, for example, or Etsy merchandise that promotes Holocaust denial).

I’m not optimistic that courts will have a good grasp on this. As I said on Bluesky, people who can imagine that there exist “feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards” (n. 5) probably shouldn't be making internet policy. Even AO3 and Wikipedia remove stuff! And they do so in the service of ideologies that are far more centrally held than any commercial service’s. They just don’t then apply a weighting algorithm for displaying what they guess will keep the user happier/more engaged.
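As an illustration of the layers I have in mind (a hedged sketch with invented rules and function names, not a description of any actual platform's code), content standards and engagement weighting are separable steps: a service can apply the first without ever doing the second.

```python
# Illustrative only: invented content rules, not any real platform's pipeline.
BANNED_TERMS = {"racial slur", "holocaust denial"}  # stand-ins for a site's content standards

def moderate(items):
    """Step 1: remove items that violate the site's own content standards
    (something even AO3 and Wikipedia do)."""
    return [i for i in items if not any(t in i["text"].lower() for t in BANNED_TERMS)]

def rank_by_engagement(items, user_history):
    """Step 2 (optional): reorder what's left by predicted user engagement."""
    return sorted(items, key=lambda i: user_history.get(i["topic"], 0), reverse=True)

posts = [
    {"text": "A new knitting pattern", "topic": "crafts"},
    {"text": "A rant containing a racial slur", "topic": "politics"},
    {"text": "Local news roundup", "topic": "news"},
]
history = {"news": 9, "crafts": 2}

kept = moderate(posts)                                # content standards applied either way
chronological_feed = kept                             # no engagement weighting
engagement_feed = rank_by_engagement(kept, history)   # weighted toward predicted interest
```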

Anyway, given the conceptual counting difficulties, tailoring might seem like a good source for comparisons: the law is facially invalid if a substantial number of its applications are content-based and don’t survive strict scrutiny, and a more narrowly tailored law would get rid of most of those invalid applications. But the majority doesn’t mention tailoring, only comparison of valid to invalid applications, which gives courts maximum flexibility to do what they will. And that, of course, is the true lesson of this Term.

1 comment:

Mark Lemley said...

I also think the claim is hard to square practically with the rule against prior restraints. If the answer is "wait and see how it is applied to you," we aren't especially disfavoring prior restraints.