On Sunday, after a hat-trick of wins, Google’s AlphaGo programme surrendered a game of Go to a human, South Korea’s Lee Se-dol. The news made waves in Korea, where it was read as a reaffirmation of the anthropocentric universe. That’s because Korea is mad about Go, with 24-hour TV channels covering tournaments non-stop. At first, the significance of the series of human-computer Go games in Seoul was not appreciated in the rest of the world. Computers have repeatedly defeated chess grandmasters, right, so what’s the big deal?
The deal is actually remarkably big. Chess is a relatively constrained game: with roughly 35 legal moves per position on average, computers win because they can search far deeper into the decision tree than humans, who are more likely to lose the plot. Go is the world’s most combinatorially complex popular game, with more legal board positions than there are atoms in the observable universe. There, it pays to think fuzzily about 20 moves ahead rather than calculate to the end of the game, and that kind of intuition gave humans an edge over machines, which are traditionally exact. In short, a programme was thought unlikely to defeat a top human. But AlphaGo, a project of Google’s artificial intelligence company DeepMind, won three games straight. Was it, Asimov forbid, thinking like a human?
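To see why brute force fails for Go, a back-of-the-envelope sketch helps. The branching factors used here (about 35 moves per position in chess, about 250 in Go) are commonly cited rough averages, not exact figures:

```python
# Rough, assumed average branching factors (moves available per turn).
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions_at_depth(branching: int, depth: int) -> int:
    """Leaf positions a naive brute-force search would have to examine."""
    return branching ** depth

# Looking just 10 moves ahead:
chess_positions = positions_at_depth(CHESS_BRANCHING, 10)  # roughly 2.8e15
go_positions = positions_at_depth(GO_BRANCHING, 10)        # roughly 9.5e23
```

Ten moves deep, the Go tree is already hundreds of millions of times larger than the chess tree, which is why exhaustive lookahead never worked for Go and AlphaGo had to learn to evaluate positions instead.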
But it’s just a game, right? It is, and weirdly enough, many aspects of human behaviour can be modelled as maximisation games. The possibilities for deep, human-like intelligence in autonomous connected devices are both amazing and fearsome. Autonomous devices are already deployed in a wide range of formats, from the home thermostat to complex weapons systems. The possibility that machines can think like humans, and better than us, raises humankind’s second-oldest anxiety after the fear of death: the rise of the machines. For now, it’s only a game, and the machine has won 3:1. Humans can be sporting about it.