RATs and Poison II — The Legal Case for Counterhacking

In an earlier post, I made the policy case for counterhacking, and specifically for exploiting security weaknesses in the Remote Access Tools, or RATs, that hackers use to compromise computer networks.

Poisoning an attacker’s RAT is a good idea for at least three reasons.  First, we can make sure that the RAT doesn’t work, or that it actually tells us what the attackers are doing on our networks.  Second, compromising the RAT can give us access to the command and control machines that serve as waystations where attackers download stolen data or upload new malware.  Finally, if we’re very lucky and very good, we can use the poisoned RAT to compromise the attacker’s home machine, directly identifying him and his organization.

The legal case for counterhacking is more problematic, thanks to long-standing opposition from the Justice Department’s Computer Crime and Intellectual Property Section, or CCIPS.  Here’s what CCIPS says in its Justice Department manual on computer crime:

Although it may be tempting to do so (especially if the attack is ongoing), the company should not take any offensive measures on its own, such as “hacking back” into the attacker’s computer—even if such measures could in theory be characterized as “defensive.” Doing so may be illegal, regardless of the motive. Further, as most attacks are launched from compromised systems of unwitting third parties, “hacking back” can damage the system of another innocent party.

This passage is a mix of law and policy. I’ve already explained why the Justice Department’s policy objections — the risk of damage to innocent parties’ systems — are out of date.

That leaves the law. Does the Computer Fraud and Abuse Act, or CFAA, foreclose counterhacking?  In fact, the weasel words of the manual — hacking back “may be illegal,” it says, so victims “should not” do it — are a clue that the law is at best ambiguous.

Really, ambiguity is the heart of the CFAA. To oversimplify a bit, violations of the CFAA depend on “authorization.”  If you have authorization, it’s nearly impossible to violate the CFAA, no matter what you do to a computer. If you don’t, it’s nearly impossible to avoid violating the CFAA.

But the CFAA doesn’t define “authorization.”  It’s clear enough that things I do on my own computer or network are authorized. That means that the first step in poisoning a RAT is lawful.  You are “authorized” under the CFAA to modify any code on your network, even if it was installed by a hacker.  (Let’s put aside copyright issues; they generally don’t enter into CFAA “authorization” analysis, and it’s unlikely in any event that a hacker could enforce intellectual property rights against his victim.)

A harder question is whether you’re “authorized” to hack into the attacker’s machine to extract information about him and to trace your files.  As far as I know, that question has never been litigated, and Congress’s silence on the meaning of “authorization” allows both sides to make very different arguments.  The attacker might say, “I have title to this computer; no one else has a right to look at its contents. Therefore you accessed it without authorization.” And the victim could say, “Are you kidding?  It may be your computer but it’s my data, and I have a right to follow and retrieve stolen data wherever the thief takes it.  Your computer is both a criminal tool and evidence of your crime, so any authorization conveyed by your title must take a back seat to mine.”

In a civil suit, Congress’s decision to leave undefined the central concept of the statute would make both of those arguments plausible.  Maybe “authorization” under the CFAA is determined solely by title; and maybe it incorporates all the constraints that law and policy put on property rights in other contexts. Personally, I dislike statutory interpretations that fly in the face of good policy, so I think the counterhacker wins that argument.

But that hardly matters; computer hackers won’t be bringing many lawsuits against their victims.  The real question is whether victims can be criminally prosecuted for breaking into their attacker’s machine.

And here the answer is surely not.

Even if you could find a federal prosecutor wacky enough to bring such a case (and after the Lori Drew case, I’m afraid that’s not unthinkable), the ambiguity of the statute makes a successful prosecution nearly impossible; deeply ambiguous criminal laws like this are construed in favor of the defendant. See, e.g., McBoyle v. United States, 283 U.S. 25, 27 (1931) (“[I]t is reasonable that a fair warning should be given to the world, in language that the common world will understand, of what the law intends to do if a certain line is passed. To make the warning fair, so far as possible, the line should be clear.”) (Holmes, J.).

Much the same analysis applies even to the hardest case, where victims use a compromised RAT to access command and control machines that turn out to be owned by an innocent third party.  An innocent third party is a more appealing witness, but his machine was already compromised by hackers before the counterhacking victim came along, and it was being used as an instrumentality of crime, sufficient in some states to justify its forfeiture. It remains true that the counterhacker is pursuing his own property.

Finally, when he begins his counterhack, the victim does not know whether the intermediate machine is controlled by an attacker or by an innocent third party. Why should the law presume that it is owned by an innocent party — or force the victim to make that presumption, on pain of criminal liability? (There’s room for empirical research here; while a few years ago hackers seemed to favor compromising third-party machines for command and control, the Luckycat study suggests that some attackers now prefer to use machines and domains that they control.  As the latter approach grows more common, a presumption that intermediate machines are owned by innocent third parties will grow even more artificial.)

All told, it seems reasonable to let victims counterhack a command and control machine that is exfiltrating information from the victim’s network, at least enough to determine who is in control of the machine, to identify other victims being harmed by the machine, and to follow the attacker back to his origin (or at least his next hop) if the intermediate machine is simply another victim. Requiring the victim not to counterhack if there’s uncertainty about the innocence of the machine’s owner simply gives an immunity to attackers.

The balance of equities thus seems to me to favor recognizing the victim’s authorization to conduct at least limited surveillance of a machine that is, after all, directly involved in a violation of the victim’s rights.  If “authorization” under the CFAA really boils down to a balancing of moral and social rights, and nothing in the law refutes that view, then the counterhacker has at least enough moral and social right on his side to make a criminal prosecution problematic — unless he damages the third party’s machine, in which case all bets are off.

 
