I’ve just looked at the new proposal for revising the Computer Fraud and Abuse Act (CFAA) offered by Orin Kerr, Jennifer Granick, and the EFF. Essentially, they would set a higher threshold for deciding when a hacker has accessed a computer “without authorization” by requiring that the defendant circumvent a technological barrier that “effectively controls” access.
On first impression, it looks to me like a pretty bad idea, for any number of reasons.
1. If this is meant to be “Aaron’s Law,” a cure for the overreach by federal prosecutors in the Swartz case, it misses the mark. By the time he was through playing cat and mouse with MIT security officers, Swartz was clearly circumventing an effective technological control – unless you define “effective” very strictly, a bad idea I’ll get to in a minute.
2. The EFF proposal doesn’t come from thin air. It’s directly borrowed from the Digital Millennium Copyright Act, which lets copyright holders sue anyone who circumvents technical copy-protection measures, as long as those measures “effectively control access to a work” protected by copyright.
Let’s pause for a moment to consider why the EFF equates the DMCA and the CFAA. On one level it seems obvious. Aaron Swartz was trying to “liberate” IP-protected data in much the same way that, say, the authors of DeCSS sought to “liberate” movies sold on DVD from the restrictions set by the studios. The EFF wants to borrow the concepts it relied upon in the DMCA cases and use them to defend CFAA cases.
But the CFAA is not primarily about protecting intellectual property. It’s about protecting the security of all kinds of data – national security secrets, privileged client confidences, and the most private personal data stored online by governments, companies, and individuals themselves. Many CFAA defendants, in other words, are egregious privacy violators. So, faced with a choice between protecting privacy and protecting “open data” campaigners, the EFF has decided to favor open data over privacy. Put another way, the EFF isn’t comfortable admitting that there is any threat to privacy other than from business and (mainly the U.S.) government. Acknowledging that ordinary people have a stake – indeed, a privacy stake – in prosecuting hackers puts too much strain on that worldview. To avoid that strain, though, they’re advocating measures that will actually harm our privacy.
3. That choice might make sense if the only kind of hacking that we faced was a handful of Aaron Swartzes. But in fact, we are in the middle of the worst cybersecurity crisis – and the most massive loss of data to hackers – since the dawn of the computer age. It’s an odd time for sophisticated technology observers to propose weakening our computer crime laws. I recognize that the CFAA has problems, and that it’s been rewritten too often by prosecutors eager to ensure that they can win any case they choose to bring, but in light of the massive increase in cyberespionage, I think that Orin, Jennifer, and the EFF should at least show that their proposal will not interfere with prosecution of the hackers who are currently stealing us blind.
4. They don’t make that showing, and I question whether they can. Making criminal liability turn on breach of an “effective technical control” will hamper a lot of legitimate prosecutions, or so it seems to me.
First, I assume that under their proposal the effectiveness of the technical control will be an element of the crime, one that the government must prove beyond a reasonable doubt. That’s a heavy burden, and it makes any ambiguities in the new test very risky for prosecutors.
Second, there’s a problem with the concept of “effective technical control” that resembles the problem of “obviousness” in patent law. Anything that’s been invented starts to look more and more obvious as time goes on. And any technical control that a hacker has successfully bypassed starts to look, well, ineffective.
As I remember, groups like the EFF argued in the context of the DMCA that encrypting copyrighted material with a 40-bit key was “ineffective” because the encryption could be broken by brute-force cryptanalysis. Do we really want hackers to be able to argue to juries that they can’t be prosecuted for stealing personal data because their victims used too small a key or too obvious a password, or that their security wasn’t proof against Metasploit or some other widely available penetration tool?
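For a sense of the scale involved, here’s a back-of-the-envelope sketch. The guessing rate is an assumption I’ve invented for illustration, not a figure from the DMCA litigation or a benchmark of any real hardware:

```python
# Rough arithmetic on exhaustive key search. The rate below is an
# illustrative assumption (about one billion keys tried per second),
# not a measurement of any particular attacker or machine.

def brute_force_years(key_bits: int, keys_per_second: float) -> float:
    """Worst-case time to try every possible key, in years."""
    seconds = (2 ** key_bits) / keys_per_second
    return seconds / (60 * 60 * 24 * 365)

ASSUMED_RATE = 1e9  # hypothetical keys tested per second

for bits in (40, 56, 128):
    print(f"{bits}-bit key: ~{brute_force_years(bits, ASSUMED_RATE):.1e} years")

# Approximate output:
#   40-bit key:  ~3.5e-05 years (about 18 minutes)
#   56-bit key:  ~2.3e+00 years
#   128-bit key: ~1.1e+22 years
```

On those assumptions, a 40-bit key falls in minutes while a 128-bit key is hopeless to search exhaustively, which is exactly why “effectiveness” ends up turning on hindsight about the attacker’s tools rather than on anything the victim could have known in advance.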
5. Do we even want to protect data only if access is effectively controlled by technology?
Let me offer the example of a government database containing passport or driver’s license data. Because the access that government officials need to this database is not predictable, the system depends principally on a strong audit system to prevent abuse. Everyone who accesses a file is tracked, audited, and monitored for misuse after the fact.
Now suppose that a government worker uses these files to settle some personal score and evades the audit system by stealing the login credentials of another worker. He’s clearly subject to prosecution under today’s CFAA. But under the Kerr-Granick-EFF proposal, the government has to show that its audit system “effectively controls access” to the data. Even putting aside the hindsight problem I described earlier, an audit system doesn’t “control” access in the sense of preventing access; instead, it is designed to identify and punish improper access. It seems to me that even the most abusive access to that data would be hard to prosecute under the revised law.
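To make the distinction concrete, here’s a minimal sketch of the two kinds of “control.” Everything in it – the record names, the functions, the data – is hypothetical, invented purely to illustrate the difference between preventing access and merely logging it:

```python
# Hypothetical contrast between a preventive access control and an
# audit-only control. All names and data are invented for illustration.
from datetime import datetime, timezone

RECORDS = {"passport/12345": "sample passport record"}
ACL = {"passport/12345": {"alice"}}  # who may read which record
AUDIT_LOG: list[tuple[datetime, str, str]] = []

def read_preventive(user: str, record: str) -> str:
    # A technological barrier in the usual sense: the read is refused
    # up front unless the user is on the record's access-control list.
    if user not in ACL.get(record, set()):
        raise PermissionError(f"{user} may not read {record}")
    return RECORDS[record]

def read_audit_only(user: str, record: str) -> str:
    # No up-front barrier: any authenticated user gets the record.
    # The "control" is the log entry, reviewed for misuse after the fact.
    AUDIT_LOG.append((datetime.now(timezone.utc), user, record))
    return RECORDS[record]
```

An insider using a coworker’s stolen credentials sails straight through the second function, and the log even attributes the access to the wrong employee. Whether that log is a barrier the defendant “circumvented” is precisely what prosecutors would have to litigate under the revised statute.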
UPDATE: This post has been updated to reflect Orin Kerr’s reply, in which he makes clear that he does not support the EFF’s “effective control” test.