Legal Responses to "Cyber-Bullying":
The Yale Law Journal Pocket Part recently posted an interesting Call for Papers in response to the recent news stories about the XO board and anonymous Internet speech:
  The Yale Law Journal Pocket Part is soliciting essays and commentaries on the role of law, policy, and extralegal tactics in regulating instances of cyber bullying, including defamatory "Google bombing." How, if at all, should regulatory schemes address providers of information who make no endorsement of the information's content?
I'm no expert in either the relevant law or the relevant technology. But here's my amateurish idea: would it help to somehow link provider immunity to search robot exclusion? Under current law, site owners are immune from liability for the speech of others under 47 U.S.C. § 230. This means that a site owner can allow anonymous comments, announce that anything goes, and then sit back and watch as the trolls engage in all sorts of foul play. Search engine robots then pick up the foul play, and the harm comes weeks or months later when a third party googles that person or event. A lot of people may be harmed, but the law can't stop it: the provider is immune and the commenters are anonymous.

If I'm not mistaken, though, the same provider who is immune under Section 230 also controls the scope of the resulting harm. Why? Because, at least as I understand it, the same provider controls whether search engine robots are permitted to come to the site and collect the information in the first place. I believe that blocking search engine robots is pretty easy, or at least could be configured to be easy; it just requires a line in a robots.txt file or a meta tag in a page's HTML.
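
To make that concrete, here is a minimal sketch of the site-wide version, assuming the standard Robots Exclusion Protocol mechanics: a plain-text file named robots.txt, served from the root of the provider's domain, that well-behaved crawlers check before fetching anything.

    # robots.txt -- served from the site root; the filename and location
    # are fixed by convention under the Robots Exclusion Protocol.
    # "User-agent: *" addresses every crawler; "Disallow: /" covers the whole site.
    User-agent: *
    Disallow: /

One caveat worth flagging: compliance with robots.txt is voluntary. The major search engines honor it, but nothing technically prevents a rogue crawler from ignoring it, so a rule keyed to robot exclusion really regulates findability through mainstream search engines rather than access itself.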

Where does that take us? Well, it suggests to me that we might consider conditioning legal immunity on disabling search robots. Providers would be immune from liability relating to particular content only if they had taken technical measures to block search engine robots from collecting that content. So if you wanted to host a free-for-all for others and be immune from liability, you could do that: you would just have to keep the resulting content from being fed into Google. On the other hand, if you wanted Google to pick up the content, for whatever reason, you would need to assume the risk of liability for the content you're letting Google collect.
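
As for what a "technical measure" aimed at particular content might look like: rather than blocking crawlers from the whole site, a provider can mark individual pages as off-limits to indexing with a per-page meta tag. A sketch, assuming the provider controls the template that generates its comment pages:

    <!-- Placed in the <head> of each page hosting third-party comments. -->
    <!-- "noindex" asks engines not to list the page in search results;  -->
    <!-- "nofollow" asks them not to follow the links it contains.       -->
    <meta name="robots" content="noindex, nofollow">

Under the proposed rule, immunity for the comments on such a page might then turn on whether the tag was in place when the crawler came through.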

What kind of impact would such a rule have? I imagine it would lead a lot of providers to block Google and other search engines from collecting materials from message boards, blog comment threads, and the like. The unmoderated and anonymous comments would still be out there; they just wouldn't be found using search engines.
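
For a typical message board or blog, the natural middle course is path-based exclusion: block only the directories that hold unmoderated user content and leave the rest of the site crawlable. A sketch, using hypothetical /forum/ and /comments/ paths (the real paths would vary from site to site):

    # robots.txt -- exclude only the user-generated sections;
    # everything else on the site stays open to search engine crawlers.
    User-agent: *
    Disallow: /forum/
    Disallow: /comments/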

Anyway, that's my idea. I may be way, way off, either as a matter of law or technology; I'm not sure that it's so easy to disable the robots, and I'm not sure it would be easy to amend Section 230 to condition immunity on doing so. But I figured I would throw out the idea and get your thoughts.