I received a complimentary copy of Dan Solove’s entertaining The Future of Reputation: Gossip, Rumor, and Privacy on the Internet in return for a promise to post a public review. This isn’t a summary, just thoughts inspired by the book, which contains a number of thought-provoking examples of reputational harms that are possible on the internet.
The book is about conflicts between privacy, free speech, openness, and control of one’s own information and reputation. Because the internet scales so easily, small pieces of information can get spread to millions of people, with consequences quite different from the ones that ordinarily follow “public” disclosure to, say, ten or twenty people. People pile on, turning even ordinary social sanctions for misbehavior into a virtual pillory, and mockery replaces empathy.
As many reviewers note, the strength of the book is that it acknowledges the paucity of easy answers, but that can also be frustrating.
A side note: Solove classifies LiveJournal as a profile-based social networking site rather than a blogging site, which I consider a mistake. LJ is more profile-driven than Blogger, certainly, and the friends list is a key element, but that's largely because of the content that shows up on one's friends list – a profile alone isn't all that interesting, and it's hard to keep friends without routinely providing entertaining content. LJ is actually a fabulous example of a hybrid form of social networking software, one doing interesting things with control over who can see what information, and possibly disturbing things, like restricting the ability to search interests on the basis of content: certain "bad" terms can't be searched on at all.
In any event, Solove identifies the core problem of reputation: scraps of information can be insufficient to judge a person fairly, but we judge anyway. This is not, as he suggests, a problem specific to people; it exists for products too. It's a general information-processing problem: we can't possibly use all the relevant information, we are easily distracted by irrelevant information, and we can't sort relevant from irrelevant very well. False advertising law has a set of tools for dealing with this, but they're crude, and they work only in limited domains where it's possible to control the key actors (advertisers). Still, even though it's hard to imagine an easy translation of advertising regulation into protecting the Star Wars Kid's ability to have a life defined by something other than Star Wars, it might be worth thinking about how we decide which information about products is so helpful that it must be provided, or so unhelpful that it must be suppressed.
Solove advocates, tentatively, replacing the essentially absolute immunity that Section 230 provides ISPs against non-IP tort claims with something more like notice-and-takedown under the DMCA. Given how easily notice and takedown can be abused, and how rarely posters challenge notices (which must seem very high-stakes indeed to nonlawyers), I am unenthusiastic about this idea unless the procedure were made very transparent and the penalties for ISPs were kept fairly limited.
Solove suggests penalties for abusers of a notice regime, but that only helps if you are willing to fight the abuser in court. Notice can be problematic even with copyrighted works, but the rationale for notice is less compelling still with allegedly defamatory or privacy-invasive statements. An ISP can be informed that something is mean, but that really isn't the same thing as notice that a statement is defamatory, and the valid insight behind Section 230 is that ISPs simply can't investigate truth or falsity claims even when they are on notice of a dispute. (Solove writes of his own experience with an apparently defamatory comment: he suspected it was untrue, but that suspicion apparently wasn't based on any investigation. What standard does he believe should apply?) Recently, on one of my discussion lists, various suggestions have been made about lifting anonymity as a remedy for abusive speech, with no further sanction beyond the social consequences; that's worth exploring.
Alternatively, perhaps a simplified procedure for going to court and getting a declaration of untruth could ramp up the formality on the complainant’s side enough to justify a notice-and-takedown regime. But the DMCA is a weak enough model for copyright; it should not be extended without significant revamping. (I complain about 230, but my main problem is that ISPs shouldn’t get to claim First Amendment speaker status along with immunity from responsibility for any speech they choose to carry. I’d resolve the incongruity by not according ISPs special First Amendment status, not moving significantly away from 230. Or perhaps ISPs should have to choose: either they are not speakers, and have no First Amendment rights of their own and no responsibility for the speech they carry, or they do adopt and endorse the speech they carry, and thus should be subject to secondary liability in appropriate cases.)