Opinion: Car trouble
But the Tesla crash does not mean curtains for driverless passenger transport
The spectacular crash of a Tesla car on autopilot does not sound the death knell for the driverless car, the Holy Grail that several information technology and automobile companies are racing for. Neither the driver nor the machine could see the white truck they rammed, set against a brightly lit sky. Both were blind to it. The debate on the ethics of the human-machine interface is ongoing, and until the rules of the game are settled, it would arguably be disconcerting to find oneself in a car cleverer than oneself. A car, say, that tells you: “No, we are not going to the movies tonight. Instead, we are going to Popocatepetl. Belt up, zip your lip and enjoy the ride.”
Last week, a creepy machine which appeared to have agency gave Americans the jitters: the police robot, loaded with C4, that rolled up to the Dallas shooter and shut him down. While this was no different from a drone strike, the general sentiment was that a tin can has no business attacking a human, even if another human is at the can’s controls.
That sentiment clearly comes from Asimov’s entirely fictional laws of robotics. Real rules of engagement, specifying where the machine gets off and under what circumstances that specification can be overridden, will evolve along with artificial intelligence. These can get pretty complex: what is a driverless car to do if it sees a truck driven by a lunatic mowing down people? Should it intervene? Driverless cars now do pretty basic stuff, like keeping to their lane and keeping their distance. As they encounter more complex situations, they should be tested in laboratory simulations, not in real-world situations where they can do real damage.