I recently read Popular Mechanics’ riveting article reconstructing the last minutes of Air France 447, which in 2009 disappeared without explanation over the Atlantic between Rio and Paris. Using the cockpit transcript, the article reveals that the pilots essentially flew a fully functioning passenger jet into the sea. Why? It appears that a temporary loss of airspeed data and the resulting disconnection of the autopilot panicked a copilot into lifting the nose of the plane. He then more or less kept the stick pulled all the way back as the plane lost forward speed and plunged into the ocean, paying no attention to dozens of blaring stall warnings. Here’s a bit of the transcript and Popular Mechanics’ commentary:
02:10:55 (Robert) Putain!
Damn it!
Another of the pitot tubes begins to function once more. The cockpit’s avionics are now all functioning normally. The flight crew has all the information that they need to fly safely, and all the systems are fully functional. The problems that occur from this point forward are entirely due to human error.
02:11:03 (Bonin) Je suis en TOGA, hein?
I’m in TOGA, huh?
Bonin’s statement here offers a crucial window onto his reasoning. TOGA is an acronym for Take Off, Go Around. When a plane is taking off or aborting a landing—“going around”—it must gain both speed and altitude as efficiently as possible. At this critical phase of flight, pilots are trained to increase engine speed to the TOGA level and raise the nose to a certain pitch angle.
Clearly, here Bonin is trying to achieve the same effect: He wants to increase speed and to climb away from danger. But he is not at sea level; he is in the far thinner air of 37,500 feet. The engines generate less thrust here, and the wings generate less lift. Raising the nose to a certain angle of pitch does not result in the same angle of climb, but far less. Indeed, it can—and will—result in a descent.
While Bonin’s behavior is irrational, it is not inexplicable. Intense psychological stress tends to shut down the part of the brain responsible for innovative, creative thought. Instead, we tend to revert to the familiar and the well-rehearsed. Though pilots are required to practice hand-flying their aircraft during all phases of flight as part of recurrent training, in their daily routine they do most of their hand-flying at low altitude—while taking off, landing, and maneuvering. It’s not surprising, then, that amid the frightening disorientation of the thunderstorm, Bonin reverted to flying the plane as if it had been close to the ground, even though this response was totally ill-suited to the situation.
The article offers a final observation on what things were like in that cockpit, minutes from the crash:
Over the decades, airliners have been built with increasingly automated flight-control functions. These have the potential to remove a great deal of uncertainty and danger from aviation. But they also remove important information from the attention of the flight crew. While the airplane’s avionics track crucial parameters such as location, speed, and heading, the human beings can pay attention to something else. But when trouble suddenly springs up and the computer decides that it can no longer cope—on a dark night, perhaps, in turbulence, far from land—the humans might find themselves with a very incomplete notion of what’s going on. They’ll wonder: What instruments are reliable, and which can’t be trusted? What’s the most pressing threat? What’s going on? Unfortunately, the vast majority of pilots will have little experience in finding the answers.
That all sounds right. But like everything else these days, it made me think about cyberwar. Some of the most effective tactics used by our adversaries have a social engineering component. That is, they know how humans react to certain situations and take advantage of that reaction to gain control of our computers. They know we’re likely to open messages and click on links sent by superiors in our organization. They know we will accept friend requests from people who are already connected to a lot of our friends. Stuxnet took advantage of social engineering of a sort by making sure that the systems reported normal activity to the humans in the control center while sending abnormal requests to the machines. The humans believed what their controls told them.
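To make that Stuxnet-style deception concrete, here is a minimal, purely illustrative Python sketch (not Stuxnet’s actual code; the class, names, and numbers are invented for illustration) of a compromised control layer that records a stretch of normal sensor readings and replays them to the operator display while quietly issuing different commands to the machinery:

```python
# Illustrative sketch only: a hypothetical "man in the middle" control layer
# that records normal sensor readings, then replays them to the operators
# while forwarding abnormal commands to the equipment.
from collections import deque
import random


class SpoofingControlLayer:
    def __init__(self, window=10):
        self.recorded = deque(maxlen=window)  # recent "normal" readings
        self.attacking = False

    def read_sensor(self):
        # Stand-in for a real sensor; normal operation hovers around 1000 rpm.
        return 1000 + random.uniform(-5, 5)

    def value_shown_to_operator(self):
        actual = self.read_sensor()
        if not self.attacking:
            self.recorded.append(actual)  # quietly build a library of normal data
            return actual                 # operators still see the truth
        # During the attack, replay recorded normal values; the display looks fine.
        return random.choice(list(self.recorded))

    def command_sent_to_equipment(self):
        # What the machinery is actually told to do.
        return "hold 1000 rpm" if not self.attacking else "spin up to 1400 rpm"


if __name__ == "__main__":
    layer = SpoofingControlLayer()
    for _ in range(10):                   # record a stretch of normal behavior
        layer.value_shown_to_operator()
    layer.attacking = True
    for _ in range(3):
        shown = layer.value_shown_to_operator()
        print(f"operator sees ~{shown:.1f} rpm; equipment told: "
              f"{layer.command_sent_to_equipment()}")
```

The point of the sketch is the split: the humans are fed the recorded "normal" picture while the equipment receives the attacker's instructions, so the operators keep trusting controls that no longer reflect reality.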
What does this have to do with the crash of AF447? The reaction of the AF447 pilots was tragically human. Once we lose faith in computer systems, especially in an emergency, all of us are likely to ask, “What instruments are reliable, and which can’t be trusted? What’s the most pressing threat? What’s going on?” And if we have only minutes to make a decision, we’re likely to lock onto a fragment of our training and keep trying it. The evidence that we’re failing disastrously just makes us pull harder on the stick.
So: Why can’t that reaction be engineered? Put another way, could a hacker have caused the AF447 crash, not by directly overriding the pilots but by manipulating their very human reactions? I should stress that I don’t believe a hacker did that. Quite the reverse. I’m asking whether future cyberattacks will try to manipulate the human beings behind the computers.
On reflection, the answer is obvious. All of war is an effort to manipulate the opponent into a different, defeated frame of mind. But the logical conclusions are pretty troubling. Even as we begin to deploy automated defenses against remote sabotage, attackers will turn to social engineering to defeat them. Once again, this gives the offense far more options than the defense.
Thus, imagine that we decide to improve our cyberdefenses by redesigning critical military or civilian systems so that computers alone cannot cause catastrophic missteps. That’s good, but it simply challenges the attacker to find a way to influence not just the computers but also the humans: to panic them into a catastrophic misstep. Even if the attacker can’t fly our planes into the sea, maybe he can get our pilots to do it for him. Even if he can’t cross the air gap to bring down our nuclear plants, he might be able to fake an emergency in the operations center that leads to the same outcome.
As AF447 shows, the key to such an attack is to create doubt about what is true in a situation where decisions must be made in minutes. Humans then revert to muscle memory and training, which in some cases can lead rather predictably to disaster.
We’re already seeing rudimentary social engineering in cyberattacks. We need to get ready for something a lot more sophisticated.