I’m still working my way through all the FISA court material that was declassified today, and acquiring a new appreciation for how hard a journalist’s job can be. But I’ve gotten far enough to start worrying, seriously, about the role we’ve given to the FISA court and what it does to the court and NSA.
There’s an old saying that megalomania is an occupational hazard for district court judges. While Chief Judge Walton’s opinion doesn’t quite succumb to megalomania, there is a distinct lack of perspective in his approach that makes me wonder whether the FISA job slowly distorts a judge’s perspective in unhealthy ways.
That was certainly true of Judge Lamberth, who spent most of 2001 persecuting a well-regarded FBI agent for not observing the “wall” between law enforcement and intelligence. That’s the wall that the court of appeals found to be utterly without a basis in law but that Chief Judge Lamberth nonetheless enforced with an iron hand. Judge Lamberth forced FISA applicants to swear an oath that they were observing the wall, a tactic that allowed him to sanction the applicants for misrepresentation if they didn’t live up to his expectations. He was so aggressive in this pursuit that he had sidelined the most effective FBI counterterrorism teams by August of 2001. The bureau knew by then that al Qaeda had terrorists in the United States, but it couldn’t use its best assets to find them because Judge Lamberth had made it clear that he was willing to wreck their careers if they breached the wall.
I fear that Chief Judge Walton is going down the same road — that the FISA court is the only agency of government not humbled by its failures on the road to 9/11 and is therefore the only agency that will repeat those failures. My concerns are best illustrated by the court’s opinion of March 2, 2009, about which I offer three thoughts:
1. In much-covered language, the judge claims that the government engaged in “misrepresentations” to the court. This is one of the three alleged misrepresentations mentioned by Chief Judge Bates in an opinion released last month. Since that opinion was released, commentators have widely assumed that NSA has been lying to the court. Because, frankly, that’s what “misrepresentation” usually means. But the other filings declassified today show pretty persuasively that there was no intentional misrepresentation. Here’s what seems to have happened, in brief. Back in 2006, scrambling to write procedures for the metadata program, a lawyer in NSA’s Office of General Counsel wrote in a draft filing that a certain dataset of phone numbers always met the “reasonable articulable suspicion” standard. Turns out that wasn’t true; only some of the numbers did. The lawyer circulated his draft for comment, suggesting that he wasn’t absolutely sure of his facts, but no one flagged the error, which turned out to be surprisingly difficult to verify. From then on, NSA and Justice simply copied the original error, over and over, in all of their submissions. A mistake for sure. But a “material misrepresentation”? Only to a judge with a very warped view of the world, and of the NSA.
2. How about the other headline-grabbing statement in the opinion, that the government’s position “strained credulity”? Here, I think the court is on even shakier ground. The debate is about the court’s minimization order, which declared that “any search or analysis of the [phone metadata] archive” must adhere to certain procedures. NSA dutifully imposed those procedures on analysts’ ability to search or analyze the archive. The problem arose not from giving analysts access to the archive but from some pre-processing NSA performed as the data was flowing into the archive.
If I’m reading the filings properly (and I confess to some uncertainty on this point), NSA keeps an “alert” list of terror-related phone numbers of interest to individual analysts. Since new data shows up at NSA every day, the agency has automated the job of scanning to find those numbers as they show up in the agency’s daily take. The numbers on the alert list are compared to the day’s incoming intercept data, and each analyst gets a report telling him how many times “his” numbers appear in which databases.
This alert list was run against data bound for the telephone metadata archive along with all the other incoming data. The difference was that an analyst who got a “hit” on that database couldn’t access it without jumping through the hoops already set up by the FISA court — reasonable articulable suspicion, special procedures, etc. This must have seemed quite reasonable to the techies at NSA. They knew what it meant for an analyst to “access” the database, and an automated scanning system that yielded only pointers was not the same as giving an analyst access. In the end NSA’s office of general counsel came to the same conclusion: the court’s orders regulated actual archive access, not scanning against a list for statistics and pointers.
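The distinction NSA’s engineers apparently drew, between automated scanning that yields only statistics and pointers and an analyst actually querying the archive, can be sketched in a few lines. This is a purely illustrative mock; every name and structure below is my own assumption, not a description of NSA’s actual systems.

```python
# Illustrative sketch only -- hypothetical names and data structures,
# not a description of any real system.
from collections import Counter


def scan_incoming(alert_list, incoming_records):
    """Compare the alert list to the day's incoming records and return
    only per-number hit counts: statistics and pointers, not records."""
    hits = Counter()
    for record in incoming_records:
        for number in alert_list:
            if number in (record["caller"], record["callee"]):
                hits[number] += 1
    return dict(hits)  # the analyst sees counts, never the underlying data


def access_archive(archive, number, ras_approved):
    """An actual archive query is gated by the court-ordered
    'reasonable articulable suspicion' (RAS) determination."""
    if not ras_approved:
        raise PermissionError("RAS determination required before archive access")
    return [r for r in archive if number in (r["caller"], r["callee"])]
```

The point of the sketch is that the scan never hands back the underlying records; only the gated archive query does, which is why the engineers could conclude in good faith that scanning was not “access.”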
But that’s not how Chief Judge Walton saw it. He held that it “strained credulity” to say that alert list scanning was different from “accessing” the archive. Maybe he just didn’t understand the technology (the opinion offers some reason to think that). Or maybe he just thought about the question like a judge, always alert to slippery slopes and unintended consequences: “If you can lawfully search this data without limit before the data gets into the archive, you will make meaningless all the limits I’ve set. Why would you think I’d let you undermine my order in so transparent a way?”
Unfortunately, Judge Walton wasn’t thinking like a techie. The techies who implemented the court’s order thought they’d been told to restrict access to the database, and they did. They weren’t told to restrict the use of statistical tools that scanned incoming data automatically, so they didn’t. They certainly didn’t believe they were undermining the court’s order. Quite the contrary, they had designed the system to make sure that the alert list was just a starting point. Analysts who learned they had a hit in the database couldn’t get any further information without meeting the FISA court’s “reasonable articulable suspicion” requirement.
It’s hard not to see this as a misunderstanding, perhaps exacerbated by the difference between legal and technical cultures. But that’s not how Judge Walton sees it. His opinion dismisses the possibility that this was a good-faith misunderstanding. It’s an outrage, he fumes, and efforts to explain it “strain credulity.” Frankly, if anything strains credulity in this case, it’s that line in the opinion.
3. The chief judge is so sure there’s evil afoot that he calls for briefing on “whether the Court should take action regarding persons responsible for any misrepresentations to the Court or violations of its Orders, either through its contempt powers or by referral to appropriate investigative agencies.” For anyone steeped in the disaster caused by Chief Judge Lamberth’s witch-hunt for violators of the wall, this is tragically familiar ground. It’s almost exactly how the FISA court drove the wall deep into the FBI.
I’m sure we’ll be told by the press that this opinion brings to light another scandal and an agency out of control. But that’s not how I see it. It looks to me as though NSA was doing its best to implement a set of legal concepts in a remarkably complex network. All complex systems have bugs, and sometimes you only find them when they fail. NSA found a bug and reported it, thinking that it was one more thing to fix. Then the roof fell in.
The interesting question is why it fell in. I think a fair-minded judge encountering the issue for the first time in the courtroom would not likely say that NSA’s interpretations were disingenuous or the result of bad faith or misrepresentation. Yet Judge Walton went there from the start.
I suspect that it’s because we’ve unfairly given FISA judges a role akin to a school desegregation master — more administrator than judge. Instead of resolving a setpiece dispute and moving on, FISA judges are dragged into a long series of linked encounters with the agency. In ordinary litigation, judges misunderstand things all the time and reach decisions anyway, and they rarely discover all that they’ve misunderstood. The repetitive nature of the FISA court’s contacts with the agency means that the judges are always discovering that they only half understood things the last time around. It’s only human to put the blame for that on somebody else. And so the judges’ tempers get shorter and shorter, and the presumption of agency good faith gets more and more frayed. Meanwhile, judges who are used to adulation, or at least respect, from the outside world, keep reading in the press that they are mere “rubber stamps” who should show some spine already. Sooner or later, it all comes together in a classic district judge meltdown, with sanctions, harsh words, and bad law all around.
If I’m right about the all-too-human frailties that beset the FISA court, building yet more quasi-judicial, quasi-managerial oversight structures is precisely the wrong prescription. We’ll be forcing judges to expand into a role they are utterly unsuited for, and we’ll put at risk our ability to actually collect intelligence. In fact, the more adversarial and court-like we make the system, the more weird and disorienting it will become for the judges, who will surely understand that at bottom they are being asked to be managers, not judges.
The further we go down the road, the more likely we are to turn FISA into the Uncanny Valley of Article III.
UPDATE: Typo correction: not instead of now. Thanks Raffaela!