As I See It: Old Hephaestus Had A Bot, A.I.A.I.O.
April 28, 2014 Victor Rozek
In 1956, Nathaniel Rochester approached the Rockefeller Foundation to apply for a princely grant of $7,000. He said he wanted to throw a little shindig at Dartmouth College, where the minds of mathematicians and computer scientists could run free exploring what must have seemed like a fanciful and distant notion at the time: the creation of intelligent machines. He probably would have been dismissed outright, but Rochester was no garden-variety, star-struck futurist. He also happened to be the chief engineer of the IBM 701, the company’s first mass-produced, general-purpose computer, and therefore had the requisite gravitas to pacify the normally conservative moneymen.

By all accounts the conference was a stirring success, albeit with one huge unintended consequence. Rochester returned to work bursting with exciting news. Unfortunately, it was full of implications that frightened IBM’s customers right out of their wingtips. It seems the conferees had announced, with stunning optimism, that within 20 years “machines will be capable of doing any work a man can do.” It was the first time the term “Artificial Intelligence” (coined at the conference by computer scientist John McCarthy) entered the public consciousness, and it arrived with all the welcome of a foreclosure.

Suddenly owning a computer didn’t seem like such a swell idea after all. Orders for the 701 dried up as the threat of displacement became personal. No one wanted to hasten his own demise and end up being supplanted by a bank of blinking lights. The financial impact was grave enough that IBM announced it would suspend further research into Artificial Intelligence, and it sent forth its sales team with a carefully crafted message designed to assuage the fears of jittery clients: Not to worry; “computers can only do what we program them to do.”

For the next 50 years that bromide became an article of faith among both users and developers. Machines were incapable of independent thought, and bad robots were the stuff of science fiction. Of course that didn’t prevent millions of people from being displaced, but at least unemployment was a byproduct of our design, not the will of the machines.

But all of that has changed, according to Jerry Kaplan of Stanford: scientist, futurist, and entrepreneur for all seasons. Kaplan cites three recent developments that are transforming AI. First, there has been a dramatic increase in computing power. Kaplan notes that when IBM’s Watson spanked its human opponents on Jeopardy!, it did so armed with 4 terabytes of storage. That much storage can now be purchased for $150.

Next, computers have been outfitted with an assortment of sophisticated sensors that allow them to collect information on, and interact with, the larger world. Collecting data supports decision-making, and the results of those decisions shape the experience from which machines can learn, freeing computers from the limitations of direct software instruction. If the object is to create a computer that can play chess or drive a car, it has no choice but to learn from experience. You’re only allowed to back through the garage door once.

Finally, the Internet gives computers access to the accumulated knowledge of humankind; in other words, a limitless supply of learning materials. That combination of factors, warns Kaplan, portends unprecedented displacement for the workforce. He cites a recent study predicting that 47 percent of today’s jobs will be wholly automated within the next 10 years. And that includes white-collar jobs.
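That learning-from-experience loop is less mysterious than it sounds. As a toy illustration (my own sketch in Python, not anything from Kaplan or IBM), here is a program that is never told which of three actions pays best; it discovers the answer by acting, observing the results, and updating its estimates:

    import random

    TRUE_PAYOFFS = {"a": 0.2, "b": 0.8, "c": 0.5}  # hidden from the learner
    estimates = {action: 0.0 for action in TRUE_PAYOFFS}
    counts = {action: 0 for action in TRUE_PAYOFFS}

    def choose(epsilon=0.1):
        # Mostly exploit the best-known action; occasionally explore.
        if random.random() < epsilon:
            return random.choice(list(estimates))
        return max(estimates, key=estimates.get)

    for _ in range(10000):
        action = choose()
        reward = 1.0 if random.random() < TRUE_PAYOFFS[action] else 0.0
        counts[action] += 1
        # Incremental average: the estimate drifts toward observed experience.
        estimates[action] += (reward - estimates[action]) / counts[action]

    print(estimates)  # converges near the hidden payoffs; "b" comes out on top

No line in that program says “prefer b.” The preference emerges from accumulated experience, which is precisely what separates learning from direct instruction.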
The bold predictions of the Dartmouth Conference may, at long last, be coming true. But the ability to learn is a far cry from consciousness, says Kaplan. He makes a distinction between Strong AI and Weak AI. He characterizes Strong AI as the stuff of pixie dust and science fiction, whose worst scenarios depict malevolent machines turning on their makers. Kaplan sees “absolutely no indication” that computers will ever possess consciousness. He is a proponent of so-called Weak AI, which he describes as an engineering approach to solving specific problems like navigation or nuclear fuel rod handling. “The proof,” says Kaplan, “is in the processing.”

Nonetheless, Kaplan believes that computers will develop the skill to manipulate us, even without conscious intention. They will study our habits and preferences, and learn to react to our micro-expressions, providing insight into our experience without the blame or judgment common to human interactions. Computers might also, for example, discover that nagging will get us to exercise, or that compliments spur us to work harder. And while computer behaviors will not be driven by conscious deliberation, it may be difficult to tell the difference between learning and cognizance. Edsger Dijkstra, the Dutch computer scientist and Turing Award winner, offered this insightful analogy: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” No matter what we call it or how it is achieved, the function will essentially be the same. The threat, argues Kaplan, will not come directly from the machines, but from our tendency to include them in the circle of humanity.

Although learning machines are thought of as contemporary achievements, their education in fact began a year before the Dartmouth Conference. In 1955, another IBMer, Arthur Samuel, wrote what is arguably the first learning program, a remarkable piece of software that played checkers and learned enough to challenge skilled amateurs.

But Western fascination with “living” machines dates back to Greek mythology. Hephaestus, son of Zeus and Hera, was the weapons-maker to the gods. He had his own palace on Mt. Olympus, where 20 bellows worked at his bidding, tended by automatons he had forged from metal. Bridge across the centuries to al-Jazari, the inventor and scholar who, early in the 13th century, created a programmable orchestra of mechanical musicians. On to the 17th century, when Pascal invented the first mechanical calculating machine. Then to Mary Shelley, who in Frankenstein eerily foresaw the ethical perils of creating a sentient being. By the 19th century Charles Babbage and Ada Lovelace had combined their genius to design a programmable calculating machine; and a century after that Konrad Zuse climbed on their shoulders to produce the first programmable computers. The dream of conscious machines was alive and well and hurtling headlong into the limitless possibilities of the computer age.

Which is how we got from Hephaestus, weapon maker, to the U.S. Army, weapon user. Meet Sgt. Star, the chatbot developed by the Army to recruit kids who think war is just another interactive game. It has 835 responses (which are constantly updated) to frequently asked questions, and it answers about 1,550 inquiries a day. According to government documents, this chatbot technology was originally used by authorities to “engage pedophiles and terrorists online.” Charming. But what Sgt. Star lacks in charm he makes up for with guile.
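Guile, in this case, is mostly pattern matching. A canned-answer bot in the Sgt. Star mold can be sketched in a few lines of Python; the questions, answers, and word-overlap matching below are my own hypothetical illustration, not the Army’s actual design:

    # A fixed bank of responses, matched to incoming questions by word overlap.
    FAQ = {
        "what jobs does the army offer": "There are more than 150 Army careers to choose from.",
        "how long is basic training": "Basic Combat Training lasts about ten weeks.",
        "can i choose where i am stationed": "You can list preferences, but the needs of the Army come first.",
    }

    def tokenize(text):
        # Normalize a question to a bag of lowercase words.
        return set(text.lower().replace("?", "").split())

    def answer(question):
        # Return the canned response whose stored question overlaps most.
        words = tokenize(question)
        best = max(FAQ, key=lambda stored: len(tokenize(stored) & words))
        if not tokenize(best) & words:
            return "I'm not sure I understand. Could you rephrase that?"
        return FAQ[best]

    print(answer("What kind of jobs can I get in the Army?"))

Real systems add synonym lists, spell-correction, and hand-tuned rules, but the principle is the same: retrieval, not reasoning.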
Predictably, Sgt. Star is a little vague about the realities of permanent disability and death. An argument can be made that war is the ultimate expression of artificial intelligence, and you have to question the desirability of a recruit who was convinced to join up by an avatar. But who knows, maybe a new generation of robots will allow the kids to sit out the next conflict. Now, wouldn’t that be intelligent?