Over the past two days of discussion about my article, I have essentially been saying that we (policymakers, lawyers, law professors, computer security experts) do a lousy job calculating the risks posed by Superusers. This sounds a lot like complaints made about other risks, such as global warming, the safety of nuclear power plants, or the dangers of genetically modified foods. But there is one significant difference: researchers who study those other risks rigorously analyze data. In fact, the contrast between their focus on numbers and probabilities and the average person's seeming disregard for statistics is a central mystery pursued by many legal scholars who study risk, such as Cass Sunstein in his book, Laws of Fear.
In stark contrast, experts in the field of computer crime and computer security seem uninterested in probabilities. Computer experts rarely assess a risk of online harm as anything but "significant," and they almost never compare different categories of harm for relative risk. Why do these experts seem so willing to abdicate the important risk-calculating role played by their counterparts in other fields? Consider four explanations:
1. Pervasive Secrecy. Online risks are shrouded in secrecy. Software developers use trade secrecy laws and compiled code to keep details from the public. Computer hackers dwell in a shadowy underground. Security consultants are bound contractually not to reveal the identities of those who hire them. Law enforcement agencies refuse to divulge statistics about the number, type, and extent of their investigations and resist Congressional attempts to increase public reporting.
Which brings us to California SB 1386, the state's data breach notification law. Inspired by experiences with that law, Adam Shostack argued at this year's Shmoocon that "Security Breaches are Good for You," by which he really meant, "breach disclosure is good for you," setting off a mini-debate in a couple of blogs. (See this post and work backwards from there.) On his blog, Adam said:
The reason that breaches are so important is that they provide us with an objective and hard-to-manipulate data set which we can use to look at the world. It's a basis for evidence in computer security. Breaches offer a unique and new opportunity to study what really goes wrong. They allow us to move beyond purely qualitative arguments about how bad things are, or why they are bad, and add quantification.
I think Adam is on to something, and this quote echoes some of my conclusions in the article. But I'm not hitching my argument directly to his, because even if you conclude that Adam is wrong, and that the need for secrecy and non-disclosure trumps his desire for a more scientific approach to computer security, secrecy still shouldn't trump accurate, informed policymaking (and lawmaking and judging). What does this mean? If someone wants to keep the details behind a particular risk secret, for whatever reason, perhaps that's his prerogative. But if he then complains to policymakers about vague, anecdotal, shrouded risks, he should be ignored, or at least his opinion should be heavily discounted.
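To make Adam's point concrete, here is a minimal sketch of the kind of tally that disclosed-breach records make possible. The records, field names, and categories below are invented purely for illustration; they are not drawn from any real dataset.

```python
# Minimal sketch: tallying hypothetical breach-disclosure records by cause,
# to compare the relative frequency of different categories of harm.
# All records, names, and categories here are invented for illustration.
from collections import Counter

# Each record: (organization, reported cause of breach, records exposed)
disclosures = [
    ("Acme Corp",   "lost laptop",    12_000),
    ("Beta LLC",    "outside hacker",  3_500),
    ("Gamma Inc",   "insider misuse", 48_000),
    ("Delta Co",    "lost laptop",     7_200),
    ("Epsilon Ltd", "lost laptop",    15_000),
]

# Count how often each cause appears among the disclosures.
counts = Counter(cause for _, cause, _ in disclosures)
total = sum(counts.values())

for cause, n in counts.most_common():
    print(f"{cause:15s} {n} breaches ({n / total:.0%} of disclosures)")
```

Even a tally this crude supports the relative-risk comparisons that, as I argued above, computer experts almost never make: it lets us ask whether the mundane failure or the sophisticated attacker actually dominates the record.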
2. Everyone is an Expert. "Computer expert" is a title too easily obtained. Unlike in modern medical science, where the signal advances require money and years of formal education, many computer breakthroughs come from self-taught tinkerers. In many ways, the democratizing nature of online expertise is cause for celebration; it is part of what makes Internet innovation and entrepreneurship so exciting.
The problem is that so-called computer experts tend to have neither the training nor the inclination to approach problems statistically and empirically. People can be called before Congress to testify about identity theft or network security even if they neither know nor care how often these risks actually materialize. Their presence on a speakers' list crowds out the few who are thinking about these problems empirically and rigorously.
3. Self-Interest. Many experts have a self-interest in portraying online actors as sophisticated hackers capable of awesome power. Law enforcement officials spin yarns about legions of expert hackers to gain new criminal laws, surveillance powers, and resources. The media enjoy high ratings and ad revenue from reporting on online risks. Security vendors sell more products in a world seemingly full of attackers with unbridled power.
4. The Need for Interdisciplinary Work. Finally, too many experts consider online risk assessment to be somebody else's concern. Computer security experts often conclude simply that all computer software is flawed, and that malicious attackers can and will exploit those flaws if they are sufficiently motivated. The question, they contend, isn't a technology question at all; it is about means, motive, and opportunity, which are matters for criminologists, not engineers.
Criminologists, for their part, spend little time studying computer crime, perhaps assuming that vulnerability-exploit models can be analyzed only by computer scientists. The answer, of course, is that they're both wrong -- and both right. Assessing an online risk requires an interdisciplinary blend of computer science, psychology, and sociology; short-sighted analyses that draw on only some of these disciplines often miss the mark.
One Prescription: Better Data. I won't spend too much time summarizing my prescriptions. The gist is that we need to start policing our rhetoric, and we need to do a better job collecting and using data. Two sources of data seem especially promising: the studies coming out of the burgeoning Economics of Information Security discipline, and the ongoing National Computer Security Survey, co-sponsored by DOJ's Bureau of Justice Statistics and DHS's National Cyber Security Division and administered by RAND.
There is much more to my arguments and prescriptions, but I hope this is a good sample. Tomorrow, I will transition to something very different: a two-day look at a paper I have co-authored describing some empirical results about the Analog Hole and about consumer willingness-to-pay for digital music.