Freedom of Speech and Information Produced Using Computer Algorithms

Prof. Tim Wu, writing in the New York Times yesterday, argues that the government should generally have a free hand in regulating speech produced by Google, Facebook, Amazon, and others when that speech is the product of “computerized decisions.” This is partly a response to an argument in a Google-commissioned white paper that I cowrote and that Prof. Wu cites, “First Amendment Protection for Search Engine Search Results.” I want to briefly respond in turn (though noting again that I’m writing here as an advocate and not as an impartial academic).

Prof. Wu begins his op-ed with two questions, which turn out to be the wrong questions:

Do machines speak? If so, do they have a constitutional right to free speech?

Of course, a machine doesn’t have constitutional rights, any more than a movie projector or CD player has constitutional rights. But a machine may communicate the speech of others, and restricting the output of machines may violate those people’s constitutional rights.

The computer algorithms that produce search engine output are written by humans. Humans are the ones who decide how the algorithm should predict the likely usefulness of a Web page to the user. These human editorial judgments are responsible for producing the speech displayed by a search engine. For instance, Google’s use of the volume of links from other sites as a criterion for ranking search results was itself the result of Google engineers’ editorial judgment that inbound links provided a sound and quantifiable measure of a site’s value. Search engine results are thus the speech of the corporation, much as the speech created or selected by corporate newspaper employees is the speech of the newspaper corporation.
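To make that concrete, here is a deliberately oversimplified sketch (in Python, with invented data; it is of course not Google’s actual code) of how a human-chosen criterion such as inbound-link count gets baked into an automated ranking:

```python
# Toy ranking function, for illustration only. The single criterion used here
# (inbound-link count) and the sample data are invented; the point is that the
# criterion itself is a human editorial judgment, fixed by whoever writes the code.

def rank_results(pages):
    """Order candidate pages by inbound-link count, a criterion the programmer chose."""
    return sorted(pages, key=lambda page: page["inbound_links"], reverse=True)

candidates = [
    {"url": "example.org/a", "inbound_links": 12},
    {"url": "example.org/b", "inbound_links": 340},
    {"url": "example.org/c", "inbound_links": 57},
]

for page in rank_results(candidates):
    print(page["url"], page["inbound_links"])
```

The ordering the reader ultimately sees is produced automatically, but the decision to treat inbound links as a proxy for usefulness was made by people.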

Moreover, the objections to Google’s placement of its thematic search results arise precisely because Google employees are said to have made a conscious choice to include those results in a particular place — a choice that critics claim is unfair, much as critics may claim that various editorial judgments by newspapers are unfair. Whatever might or might not be wrong with that decision, it is a decision made by a human. (Prof. Wu’s op-ed acknowledges in a parenthetical that “[w]here a human does make a specific choice about specific content, the question is different”; but Google’s algorithms in fact reflect specific choices made by engineers about what type of content users will find most useful.)

But beyond this, the First Amendment value of speech also stems from the value of the speech to listeners or readers. See, e.g., First Nat’l Bank of Boston v. Bellotti, 435 U.S. 765 (1978); Virginia State Bd. of Pharmacy v. Virginia Citizens Consumer Council, 425 U.S. 748 (1976); Lamont v. Postmaster General, 381 U.S. 301, 307-08 (1965) (Brennan, J., concurring). Indeed, the process of automating output increases the value of the speech to readers beyond what purely manual decisionmaking can provide. When the government restricts output that is generated through a mix of automation and engineering judgment, it is affecting the marketplace of ideas and information as much as when it restricts output that is generated without any automated help.

Let’s think through this using a few simple examples. Say that I put up a Web site that automatically finds and links to various recent news stories about some scandal involving Senator Joe Schmoe or involving Acme Corp. products. And say the government tries to regulate the content of such Web sites, to avoid supposed unfairness to the Schmoes and Acmes of the world — or for that matter to avoid supposed unfairness to news sites that I chose not to include in my automated search.

Any such speech restriction would be unconstitutional, but not because “machines” “have a constitutional right to free speech.” It would be unconstitutional because I have the right to convey information, including through an automated algorithm, and readers have the right to read it, free of governmental interference.

Now say that I want my Web site to be more useful, so I write a program that analyzes the stories and selects the ones that are most relevant — perhaps by removing duplicates, by focusing on the stories that seem to spend more words discussing the subject, by highlighting stories that are linked to by lots of other people, and so on. I’m using some extra automation (created by me) to produce information that’s more useful to users; but that can’t enable the government to restrict the content of my Web site.
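If it helps to see what such a program might look like, here is a toy version (a sketch only; the field names, scoring formula, and weights are invented for illustration):

```python
# Rough sketch of the hypothetical story-selection program described above.
# Everything here is made up; it simply encodes the choices listed in the text:
# remove duplicates, favor stories that discuss the subject more, and favor
# stories that many other people link to.

def score(story, subject):
    """Higher score for more mentions of the subject and more inbound links."""
    mentions = story["text"].lower().count(subject.lower())
    return mentions + 0.1 * story["inbound_links"]

def select_stories(stories, subject, limit=5):
    seen_titles = set()
    unique = []
    for story in stories:
        if story["title"] not in seen_titles:  # crude duplicate removal
            seen_titles.add(story["title"])
            unique.append(story)
    return sorted(unique, key=lambda s: score(s, subject), reverse=True)[:limit]
```

Every line of that program reflects a judgment of mine about what readers will find useful; the computer just applies those judgments faster and more consistently than I could by hand.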

Now say that, instead of just criticizing Schmoe or Acme, I set up my program to convey information about whatever person or company the reader wants — information that the program produces based on an algorithm I wrote, though coupled with some extra information that my colleagues at my company choose to provide. That too is just as constitutionally protected as the simple “stories about the Schmoe scandal” Web site. (To be sure, this version of the site is more focused on giving readers what they want and less on conveying my opinion, but while the First Amendment fully protects opinionated commentary, it protects less opinionated commentary as well.)

What are Prof. Wu’s objections to this? One is a metaphor that is colorful but unhelpful: “[T]he fact that a programmer has the First Amendment right to program pretty much anything he likes doesn’t mean his creation is thereby endowed with his constitutional rights. Doctor Frankenstein’s monster could walk and talk, but that didn’t qualify him to vote in the doctor’s place.” Nice turn of phrase, but we don’t need to get to hypothetical artificial intelligences to resolve our problem. (Would Frankenstein’s monster have his own First Amendment rights? Substantive due process rights to marry his bride? A right to keep and bear arms against the farmers’ pitchforks? Unsurprisingly, those questions have not been answered.)

Nor do we need to bring up voting rights, which are much more limited than free speech rights: The New York Times, the ACLU, and the Catholic Church can speak but they can’t vote, and the same is true of noncitizens, minors, and felons. Likewise, the government may bar even ordinary humans from voting in each other’s place, though the First Amendment protects our right to hire people to speak on our behalf.

Instead of focusing on monsters, let’s imagine a more mundane scenario: an artist creates an animatronic sculpture (not from body parts, I hope) that recites some speech. The government can’t restrict what the sculpture is programmed to say — not because the “creation is … endowed with [the artist’s] constitutional rights,” but because the artist is endowed with constitutional rights and the restriction would restrict the artist’s right to communicate (and the listeners’ right to hear). And the same would be true if the artist programmed the sculpture to interactively respond to user questions with answers generated by an algorithm that the artist created.

Prof. Wu’s other main objection is that protecting people’s right to speak using partly computerized algorithms “is a bad idea that threatens the government’s ability to oversee companies and protect consumers.” But the First Amendment itself embodies an idea that often threatens the government’s ability “to oversee” what information is communicated, even when the government is purporting to prevent supposed unfairness. That’s not a bug, as computer programmers say — it’s a feature.

To be sure, some speech is constitutionally unprotected, whether it is communicated directly or using a computerized algorithm that the speaker creates. Conversely, when speech is constitutionally protected, it is protected whether or not it is communicated through a computerized algorithm.

Thus, for instance, Prof. Wu says that “recommendations made by online markets like Amazon could one day serve as a means for disadvantaging competing publishers.” If this “disadvantag[e]” stems from libel — e.g., a marketer falsely claims that some book is plagiarized — then that libel is constitutionally unprotected whether it comes directly from a human or indirectly through a human-written but computer-applied algorithm. (A federal statute, 47 U.S.C. § 230, provides some extra protection for online publishers that reproduce others’ content, but that too applies whether the reproduction happened purely through a computer algorithm or involved a human choice not to delete a particular submission.)

But humans are often free to directly “disadvantag[e] competing publishers” in other ways. A newspaper, for instance, has a First Amendment right to choose not to give front-page treatment to a competing newspaper’s new project. Likewise, when newspapers chose to publish their own weekly TV listing supplements (as many did back when newspaper TV listings were important to readers), that was constitutionally protected even though it disadvantaged TV Guide by offering subscribers a competing source they didn’t have to pay extra for. People and organizations were free to do this in the 1900s, when they manually edited traditional information sources. They are equally free to do this in the 2000s, when they create the computerized algorithms that are used to produce modern information sources.

Likewise, Prof. Wu notes that “the ‘decisions’ made by Facebook’s computers may involve widely sharing your private information.” But we already have lots of privacy rules that apply to human beings and organizations — lawyers and doctors, for instance, can’t publicize private information they learn about you. We also have zones in which people are free to reveal much private information about you, for instance if a newspaper investigative reporter uncovers your past misdeeds.

If the law decides that Facebook may not reveal certain private information about you, that decision should apply to direct leaks by individual Facebook employees as well as to computer algorithms that Facebook employees generate. There is no need for some special First Amendment rule for speech that is produced partly using computer algorithms.

So, to return to the beginning of this post and of Prof. Wu’s op-ed: Machines, whether computers, typewriters, or movie projectors, lack constitutional rights. But people have constitutional rights, including the right to communicate using machines.

That’s true if the machine intermediation involves simple transmission, as when a machine delivers this post for you to read. But it’s also true if it involves more sophisticated algorithms that people (such as Google’s engineers) have produced. The government generally lacks the power to control the information published by such human-designed computerized algorithms, just as it lacks the power to control publishing more broadly.

* * *

For more on this, see this post from Julian Sanchez (Cato@Liberty) and this post from Paul Alan Levy (Public Citizen’s Consumer Law & Policy Blog). (I disagree with some of what is said in the Public Citizen post, for reasons discussed in the white paper, but I think the post is exactly right in rejecting Prof. Wu’s “Do machines … have a constitutional right to free speech?” approach.)

Note that I’m leaving tomorrow for a family trip to Europe, and likely won’t be able to respond to comments until I return; sorry about that.
