As I See It: Built-In Disasters
April 6, 2009 | Victor Rozek
A $200 million Airbus A340-600 commercial jet sits on the tarmac at the airport in Toulouse, France. The jet is new and has yet to ferry a single passenger. It has been towed to what is called the “run-up area,” where its Abu Dhabi Aircraft Technologies crew is performing an engine test. Having set the air brakes, the pilot instructs the flight computer to slowly increase power to all four engines. With the engines fully engaged and screaming under full lift-off power, a cockpit horn begins blaring, warning of impending takeoff. Annoyed, one of the crew pulls the circuit breaker on the Ground Proximity Sensor to silence the alarm.

The reason for what happened next is not completely clear, but here’s an educated guess, courtesy of a former Air Force colonel and commercial airline pilot I happen to know. Sensing that the engines were at full power and that the Ground Proximity Sensor was off, the flight control computer either believed the plane had already left the ground or was preparing for imminent takeoff. In either case, the computer reasoned that the air brakes were improperly set and so graciously released them. Freed of earthly restraint, the giant airliner rocketed forward like a winged drag racer.

What the cockpit crew did in response is also unclear, but whatever it was proved to be too little and far too late. Before it occurred to anyone to throttle back the engines, the plane plowed nose-first into a blast barrier. It was a classic contest between the irresistible force and the immovable object, and the blast barrier won. The runaway Airbus climbed the concave barrier and smashed through the top of the wall, where the front 60 feet of the fuselage broke off. The plane came to rest straddling the barrier, engines still whining like headless banshees.

To be sure, there was plenty of blame to go around. Several accounts of the incident indicated that blocks were not used to secure the wheels, and that no one had even bothered to read the run-up manuals. Still, if the computer made the fateful decision, you have to ask yourself: Isn’t the function of computer control to prevent human error rather than enable it?

The problem, according to my personal aviation consultant, is that the Airbus was designed for third world markets, where pilots who have no Air Force experience are likely to receive less training than their first world counterparts. Another inducement, I suspect, is that if the plane pretty much “flies itself,” companies can eject high-priced veteran pilots and replace them with low-priced newbies.

The objective of fly-by-wire aviation (having the computer fly the plane) is to compensate for any shortcomings that pilots may harbor. And in the abstract, it makes sense, as long as nothing goes wrong. And why should it? After all, the supremacy of computers is all but unquestioned. But like many broadly held beliefs, this one is seasoned with a generous dose of blind faith. We hold fast to the illusion that when computers are put in charge, the machine’s superior judgment and instant reactions are being substituted for the ponderous human equivalents. But where calculation speed is not of the essence, the computer itself plays a diminished role. Computers have no heart or courage, no capacity for inspiration. As yet, machines are not imbued with consciousness, and may never be. Their judgment is formulaic, not organic.
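That formulaic quality is easy to picture. Here is a deliberately simplified sketch, in Python, of the kind of rule my consultant conjectured the Toulouse flight computer was following. Every name in it is hypothetical, and it stands in for no actual Airbus code; the point is only what a rule table looks like when it cannot tell a disabled sensor from an airborne plane.

```python
# A caricature of rule-based "judgment," per the conjecture above.
# All names (brake_logic, ground_proximity_active, etc.) are hypothetical.

def brake_logic(engines_at_full_power: bool,
                ground_proximity_active: bool,
                brakes_set: bool) -> str:
    """Decide what to do with the brakes, the way a fixed rule would."""
    if engines_at_full_power and not ground_proximity_active:
        # The rule cannot distinguish "alarm circuit deliberately pulled
        # by the run-up crew" from "aircraft leaving the ground."
        if brakes_set:
            return "release_brakes"   # graciously undoing the pilot's setting
    return "hold_brakes"

# The Toulouse run-up, as conjectured: full power, proximity alarm silenced.
print(brake_logic(engines_at_full_power=True,
                  ground_proximity_active=False,
                  brakes_set=True))    # -> release_brakes
```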
In the case of the Airbus, an argument can be made that the use of computers actually elevates the judgment of the software designers and programmers over that of pilots. Since system software cannot possibly predict every contingency in every context, it may, in unusual circumstances, contribute to the very disaster it was designed to avert.

The US Airways flight that recently made that spectacularly unexpected landing in the Hudson River was also a fly-by-wire Airbus. To briefly recount: Shortly after takeoff, the plane encountered a flock of geese that were sucked into the engines, causing them to fail. With no time to undertake an engine restart procedure, the imperturbable Captain Sullenberger calmly told controllers the plane had no thrust and was descending rapidly. Sullenberger quickly discerned that a return to LaGuardia Airport or an emergency landing at Teterboro Airport in New Jersey was impossible, and selected the only flat surface available: the Hudson.

But would that have been necessary if the pilot had direct control of the throttles? My pilot friend says, perhaps not. In a fly-by-wire aircraft, the captain does not have direct control of the throttles. When he wants more thrust, his action is first communicated to the computer, which relays his command to sensors in the engine. But if those sensors were damaged, as they undoubtedly were, and the engines were losing power, the computer might interpret that condition as the will of the pilot. Or, unable to communicate with the engine, the computer might simply do nothing. In American-made planes (especially older ones), the pilot has direct control of the throttles. In response to the accident, he would, according to my friend, push them to the max and perhaps be able to maintain enough thrust to reach Teterboro. Of course, that is purely conjecture, but the overriding issue is whether it is wise to take decision-making away from the human and give it to the machine.

That fateful day, Sullenberger, a former fighter pilot and veteran commercial aviator, was flying with a rookie. Had he not taken immediate control of the aircraft, one wonders if the less experienced pilot would have pulled off the miracle on the Hudson. The computer surely would not have.

Computers are making more and more decisions for us, and about us. They decide if we qualify for house insurance, healthcare, and credit. They decide how much we must pay for these services. They are used to track our preferences, and to spy on our private communications. And they certainly play an enabling role in the way we will live in the future. From stem cell research and cloning, to nanotechnology, alternative energy development, and the expanded uses of cyberspace, computers are the common denominator. And although the application of computer technology ranges from annoying and illegal to beneficial and miraculous, that is still an order of magnitude away from allowing silicon and software to make life-and-death decisions.

Nowhere are the issues of surrendering to computer control more problematic than in the military, which is investing heavily in remote warfare. Deadly drones are routinely maneuvered by pilots who sit in air-conditioned comfort seven thousand miles away from the combat zone. Computer control has made killing into shift work; after a hard day of blowing things and people up, the drone pilot can return home to the loving embrace of his family.
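Before moving on, it is worth making my friend’s point about mediated throttles concrete. The sketch below is a minimal caricature of the command chain he described, not any real engine-control system (which is vastly more complex); every name is hypothetical. It illustrates only how a computer sitting between the lever and the engine can misread a failing sensor, while a direct linkage passes the pilot’s command straight through.

```python
# Hypothetical sketch of mediated vs. direct throttle control.
# Real fly-by-wire engine control is far more sophisticated; this only
# shows where interpretation enters the loop.

from typing import Optional

def mediated_thrust(lever_position: float,
                    sensed_thrust: Optional[float]) -> float:
    """Thrust the computer actually commands (values from 0.0 to 1.0).

    lever_position: what the pilot is asking for.
    sensed_thrust:  what the engine sensors report, or None if the
                    sensors are damaged and unreadable.
    """
    if sensed_thrust is None:
        # Unable to communicate with the engine, the computer
        # might simply do nothing.
        return 0.0
    # Otherwise it reconciles the lever with the sensors; a decaying
    # sensed value can be read as the will of the pilot.
    return min(lever_position, sensed_thrust)

def direct_thrust(lever_position: float) -> float:
    """A direct linkage: push the throttles to the max, get the max."""
    return lever_position

print(mediated_thrust(1.0, None))   # -> 0.0 : the computer does nothing
print(direct_thrust(1.0))           # -> 1.0 : whatever thrust remains
```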
Remote-controlled war is bizarrely sanitary, but at least a human being is making life-and-death choices on our behalf, and that human can be held accountable. Soon, however, research will produce robots and drones that can operate independently: autonomous systems that will decide when, where, and whom to kill.

How these robots will differentiate combatants from non-combatants is uncertain. What is predictable is that robots will make it easier to wage war, and that their successful use will create an international robot arms race. And because well-intended technologies often enable ill-intended people, all too soon robots will become the preferred weapon of terrorists, supplanting the self-canceling suicide bomber. And when these systems fail, as they surely will, and “accidentally” kill hordes of innocents whom we euphemistically dismiss as “collateral damage,” there will be no one to blame. It will be a recall problem, guaranteed to be fixed in the next software release.

Contrast that moral ambiguity with Captain Sullenberger, who, having made the landing of the century without the loss of a single life, still reported agonizing for days about whether he had done everything that could have been done. Call me old-fashioned, but I’d rather put my faith in Sullenberger.