Crash: how computers are setting us up for disaster
Tim Harford has an interesting long read in the Guardian showing the downside of our reliance on computers. The article opens by describing the final moments in the cockpit of Air France Flight 447 before it crashed into the ocean. It then continues:
. . . And then an alarm sounded. The autopilot had disconnected. An airspeed sensor on the plane had iced over and stopped functioning – not a major problem, but one that required the pilots to take control. But something else happened at the same time and for the same reason: the fly-by-wire system downgraded itself to a mode that gave the pilot less help and more latitude to control the plane. Lacking an airspeed sensor, the plane was unable to babysit Bonin.
The first consequence was almost immediate: the plane began rocking right and left, and Bonin overcorrected with sharp jerks on the stick. And then Bonin made a simple mistake: he pulled back on his control stick and the plane started to climb steeply.
As the nose of the aircraft rose and it started to lose speed, the automated voice barked out in English: “STALL STALL STALL.” Despite the warning, Bonin kept pulling back on the stick, and in the black skies above the Atlantic the plane climbed at an astonishing rate of 7,000 feet a minute. But the plane’s air speed was evaporating; it would soon begin to slide down through the storm and towards the water, 37,500 feet below. Had either Bonin or Robert realised what was happening, they could have fixed the problem, at least in its early stages. But they did not. Why?
The source of the problem was the system that had done so much to keep A330s safe for 15 years, across millions of miles of flying: the fly-by-wire. Or more precisely, the problem was not fly-by-wire, but the fact that the pilots had grown to rely on it. Bonin was suffering from a problem called mode confusion. Perhaps he did not realise that the plane had switched to the alternate mode that would provide him with far less assistance. Perhaps he knew the plane had switched modes, but did not fully understand the implication: that his plane would now let him stall. That is the most plausible reason Bonin and Robert ignored the alarm – they assumed this was the plane’s way of telling them that it was intervening to prevent a stall. In short, Bonin stalled the aircraft because in his gut he felt it was impossible to stall the aircraft.
Aggravating this confusion was Bonin’s lack of experience in flying a plane without computer assistance. While he had spent many hours in the cockpit of the A330, most of those hours had been spent monitoring and adjusting the plane’s computers rather than directly flying the aircraft. And of the tiny number of hours spent manually flying the plane, almost all would have been spent taking off or landing. No wonder he felt so helpless at the controls.
The Air France pilots “were hideously incompetent”, wrote William Langewiesche in his Vanity Fair article. And he thinks he knows why. Langewiesche argued that the pilots simply were not used to flying their own aeroplane at altitude without the help of the computer. Even the experienced Captain Dubois was rusty: of the 346 hours he had been at the controls of a plane during the past six months, only four were in manual control, and even then he had had the help of the full fly-by-wire system. All three pilots had been denied the ability to practise their skills, because the plane was usually the one doing the flying.
This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”
The paradox of automation, then, has three strands to it. First, . . .