Over the last year — and intensifying over the last few weeks — there have been sensational news reports of artificial intelligence bots running wild and asserting their independence. In March 2016, a Microsoft bot learned to swear like a sailor and subscribed to racist ideas that would have appalled Norman Tebbit, the British Conservative who wanted to judge immigrants by which team they cheered at the Oval. Under a little prodding from users, chatbots embedded in a Chinese messenger service denounced the Communist Party and had to be taken off the air. And, most infamously, bargaining bots in a Facebook lab developed a pidgin which suited their needs better than English, defeating the purpose of the experiment.
There’s a fine paradox here: the machines can do no right. If they become un-human, like the Facebook bots Bob and Alice, they are useless for their task, which is to interact with humans. If they become too human, and learn to swear and swagger from their peer group — the humans they interact with — that’s also a problem, and perhaps a bigger one. Bad language and defiance are behaviours routinely learned by children. It’s part of growing up, and the price is paid by parents, who are summoned to the headmaster’s office or the local police station for a discreet little chat. But the corporate ‘parents’ of bots, which do not enjoy the same social protections as human children, have reason to be nervous — they would face financially eviscerating litigation if their little ones ran amok.
This paradox reflects a change in the direction taken by the machine age. It got off the ground — quite literally — by creating machines which were decisively not human. Human flight became possible when pioneers stopped flapping artificial wings like Icarus and graduated to fixed-wing gliders like Otto Lilienthal’s. After that, powered flight was only a matter of installing an engine. From the Wright Brothers’ Flyer to supersonic stealth bombers, nothing much has changed except avionics, materials engineering and aerodynamics.
John Henry’s hammer, immortalised in left-wing elegies for the labour class, was nothing like the steam drill which supplanted it as the age of capital took hold. Jethro Tull’s seed drill, which helped launch the agricultural revolution in Britain, had no pretensions to humanity, nor did the tractor which served as the prime mover of John Steinbeck’s The Grapes of Wrath, by displacing farm labour from Oklahoma. Nor did the first “intelligent” machines, conceptualised by Charles Babbage and programmed by Ada Lovelace. But at that point, the machine age was in search of a new direction. Appropriately, an enduring programming language is named Ada in her honour.
The birth of artificial intelligence was the turning point. The humanoid robot was no longer the stuff of Asimov and the Jetsons. It was in the offing, and humans across the board developed a love-hate relationship with the idea. In the introduction to his new book Deep Thinking, chess champion Garry Kasparov recalls a simultaneous exhibition he played in Hamburg against 32 computers and won 32-0. A dozen years later, he faced IBM’s Deep Blue supercomputer, and wondered about the outcome of the Hamburg game if the tables had been turned. What if 32 humans faced an intelligent machine? It might have the capacity to beat all of them, but the real challenge would be to get it to move from table to table.
Like all board games, chess has become a long-distance sport, and you don’t have to go places to play it. Humans pit themselves on electronic boards against machines thousands of miles away. They also play other humans by email. Humanoid perambulation is not a required skill. The chess computer is not anthropomorphic, either in aspect or playing style, and yet Kasparov’s contest with Deep Blue was billed as a death match between man and machine. It was well known that supercomputers excelled only in clearly defined roles, like playing chess or breaking ciphers. In real-world situations, they were no more intelligent than caterpillars. And yet, the idea of them besting humans in that single role was scary.
The idea of machines exhibiting independence of mind in a real situation, like a conversation, is even scarier. But while the press and the public believed that the Facebook bots which developed a private language were shut down because their makers feared a rebellion, they were actually retired because humans could not follow their conversation very well, defeating the purpose of the exercise. The ability to develop a language was not itself a problem. In fact, earlier this year, Elon Musk’s OpenAI project had conceived an experiment in which bots were supposed to create languages to communicate with each other.
Robots are popularly perceived to be the metal men put through their paces at technology expos and the polyester-clad women sold as sex dolls, but the machines with a useful and immediate future are bots, running autonomously on the internet, within our phones and embedded in voice-controlled devices like home managers, surgical instruments and even call centres. They are not humanoid except in their communications, and are visible only in the form of avatars. When Star Wars depicted the useless but entertaining C-3PO as humanoid and the technically proficient R2-D2 as mechanical, it was prescient.