Privacy: Failed Model No. 3

This post excerpts my book’s discussion of the third failed privacy model. (If you just can’t wait to see all three, or my idea for a better solution, I’ve posted a free copy of the chapter here.)

If requiring a predicate is the lawyer’s solution, the third failed approach is the bureaucrat’s solution. It is at heart the approach adopted by the European Union: instead of putting limits on when information may be collected, it sets limits on how the information is used.

The European Union’s data protection principles cover a lot of ground, but their unifying theme is imposing limits on how private data is used. Under those principles, personal data may only be used in ways that are consistent with the purposes for which the data were gathered. Any data that is retained must be relevant to the original purposes and must be stored securely to prevent misuse.

The EU’s negotiating position in the passenger name records conflict was largely derived from this set of principles. The principles also explain Europe’s enthusiasm for a wall between law enforcement and intelligence. If DHS gathered reservation data for the purpose of screening travelers as they crossed the border, why should any other agency be given access to the data? The same logic explains the EU’s insistence on short deadlines for the destruction of PNR data. Once the data had been used to screen passengers, it had served the purpose for which it was gathered and should be promptly discarded.

There is a core of sense in this solution. It focuses mainly on the consequences of collecting information, and not on the act of collection. It doesn’t try to insist that information is property. It recognizes that when we give information to others, we usually have an expectation about how it will be used, and as long as the use fits our expectations, we aren’t too fussy about who exactly gets to see it. By concentrating on how personal information is used, this solution may get closer to the core of privacy than one that focuses on how personal information is collected.

It has another advantage, too. In the case of government databases, focusing on use also allows us to acknowledge the overriding importance of some government data systems while still protecting against petty uses of highly personal information.

Call it the deadbeat-dad problem, or call it mission creep, but there’s an uncomfortable pattern to the use of data by governments. Often, personal data must be gathered for a pressing reason—the prevention of crime or terrorism, perhaps, or the administration of a social security system. Then, as time goes on, it becomes attractive to use the data for other, less pressing purposes—collecting child support, perhaps, or enforcing parking tickets. No one would support building a large database of personal information simply to collect unpaid parking fines; but “mission creep” can easily carry the database well beyond its original purpose. A limitation on use prevents mission creep, or at least forces a debate about each step in the expansion.

That’s all fine. But in the end, this solution is also flawed.

It, too, is fighting technology, though less obviously than the predicate and property approaches. Data that has already been gathered is far easier to put to new uses than data that must be collected afresh. It’s foolish to pretend otherwise. Indeed, developments in information technology in recent years have produced real strides in searching unstructured data and in finding relationships within data, even without knowing in advance whether the search will produce anything useful. In short, there are now good reasons to collate data gathered for widely differing purposes, just to see the patterns that emerge.

This new technical capability is hard to square with use limitations or with early destruction of data. For if collating data already in the government’s hands could have prevented a terrorist attack, no one will congratulate the agency that refused to allow the collation on the ground that the data was collected for tax or regulatory purposes, say, and not to catch terrorists.

What’s more, use limitations have caused great harm when applied too aggressively. The conflict with the EU is a reminder that the “wall” between law enforcement and intelligence was at heart a use limitation. It assumed that law enforcement agencies would gather information using their authority, and then would use the information only for law enforcement purposes. Intelligence agencies would do the same. Or so the theory went. But strict enforcement of this use limitation ended up preventing cooperation that might have thwarted the 9/11 attacks.

Like all use limitations, the “wall” between law enforcement and intelligence sounded reasonable enough in the abstract. While no one could point to a real privacy abuse arising from cooperation between the intelligence and law enforcement agencies in the United States, it was easy to point to the Gestapo and other totalitarian organizations where there had been too much cooperation among agencies.

What was the harm, the argument ran, in a little organizational insurance against misuse of personal data? The rules allowed cooperation where that was strictly necessary, and we could count on the agencies to crowd right up to the line in doing their jobs. Or so we thought. In fact, we couldn’t. As the pressure and the risk ratcheted up, agents were discouraged from pushing for greater communication and cooperation across the wall. All the Washington-wise knew that the way to bureaucratic glory and a good press lay in defending privacy. More to the point, they knew that bad press and bureaucratic disgrace were the likely result if your actions could be characterized as hurting privacy. Congress would hold hearings; appropriators would zero out your office; the second-guessing arms of the Justice Department, from the inspectors general to the Office of Professional Responsibility, would feast on every detail of your misstep. So what might have been a sensible, modest use restriction preventing the dissemination of information without a good reason became an impermeable barrier.

That’s why the bureaucratic system for protecting privacy so often fails. The use restrictions and related limits are abstract. They make a kind of modest sense, but if they are enforced too strictly, they prevent new uses of information that may be critically important.

And often they are enforced too strictly. You don’t have to tell a bureaucrat twice to withhold information from a rival agency. Lawsuits, bad press, and Congressional investigations all push against a flexible reading of the rules. If a use for information is not identified at the outset, it can be nearly impossible to add it later, no matter how sensible the change may seem. That leads agencies to draft the broadest possible statements of purpose for the data they collect, which defeats the point of setting use restrictions in the first place.

It’s like wearing someone else’s dress. Over time, use restrictions end up tight where they should be roomy—and loose where they should be tight. No one is left satisfied.

