Scientists have developed a new brain-computer interface that can read a person’s thoughts in real time to correct a robot’s errors, an advance that may lead to safer self-driving cars. Most existing brain-computer interfaces (BCIs) require people to train with them and even learn to modulate their thoughts to help the machine understand, researchers said.
By relying on brain signals called “error-related potentials” (ErrPs) that occur automatically when humans make a mistake or spot someone else making one, the new approach allows even complete novices to control a robot with their minds.
This technology, developed by researchers at Boston University and the Massachusetts Institute of Technology (MIT), may offer an intuitive and instantaneous way of communicating with machines, for applications ranging from supervising factory robots to controlling robotic prostheses.
“When humans and robots work together, you basically have to learn the language of the robot, learn a new way to communicate with it, adapt to its interface,” said Joseph DelPreto, a PhD candidate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
“In this work, we were interested in seeing how you can have the robot adapt to us rather than the other way around,” he told ‘Live Science’. Researchers collected electroencephalography (EEG) data from volunteers as they watched a humanoid robot decide which of two objects to pick up.
This data was analysed using machine-learning algorithms that can detect ErrPs in just 10 to 30 milliseconds. This means results could be fed back to the robot in real time, allowing it to correct its course midway, researchers said.
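The pipeline described above can be sketched in a few lines: a short EEG window is scored by a classifier, and a detected ErrP makes the robot switch to the other of the two objects. This is a minimal illustrative sketch, not the researchers’ actual system; the logistic model, the `weights` and `bias` parameters, and the window size are all hypothetical stand-ins for the trained machine-learning algorithm the article mentions.

```python
import numpy as np

WINDOW_SAMPLES = 30  # hypothetical window length, echoing the 10-30 ms detection figure

def detect_errp(window, weights, bias, threshold=0.5):
    """Classify one EEG window as ErrP / no ErrP with a simple linear model.

    `weights` and `bias` stand in for a trained classifier; the real
    system's features and model are not described in the article.
    """
    score = 1.0 / (1.0 + np.exp(-(window @ weights + bias)))  # logistic score
    return score > threshold

def supervise(robot_choice, eeg_window, weights, bias):
    """If an ErrP is detected, tell the robot to pick the other of two objects."""
    if detect_errp(eeg_window, weights, bias):
        return 1 - robot_choice  # correct course in real time
    return robot_choice
```

In this toy loop, `supervise` would run on each new EEG window, so a mistake signal arriving mid-task flips the robot’s choice before it finishes the motion.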