COMPUTERS have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.
The first commercial version of the new computer chip is scheduled to be released this year. Not only can it automate tasks that now require painstaking programming (for example, moving a robot's arm smoothly and efficiently), but it can also sidestep and even tolerate errors, potentially making the term "computer crash" obsolete.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and to adjust what they do based on the changing signals.
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
Designers say the computing style can clear the way for robots that can walk and drive in the physical world, although a thinking or conscious computer, a staple of science fiction, is still far off.
"We're moving from engineering computing systems to something that has many of the characteristics of biological computing," said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology.
Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, recognise only objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
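To make the recipe analogy concrete, here is a minimal sketch (an illustration for this article, not anyone's production code) of a conventional algorithm: a fixed list of steps that the computer follows exactly, and nothing more.

    # A conventional algorithm is a fixed recipe of explicit steps.
    # It can only ever do what its instructions describe.
    def mean_brightness(pixels):
        total = 0
        for value in pixels:           # step 1: add up every pixel value
            total += value
        return total / len(pixels)     # step 2: divide by the count

    print(mean_brightness([10, 200, 45, 90]))  # prints 86.25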
But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. It scanned a database of 10 million images, and in doing so, trained itself to recognise cats.
In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
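Google's production systems are vastly larger deep networks, but the core idea of learning from unlabeled data can be sketched far more simply. The toy below (not Google's method; the data and names are invented for illustration) discovers two groups in unlabeled numbers without ever being told what the groups are.

    import random

    # Toy sketch of unsupervised learning: the program is never told
    # what the groups are; it finds them by repeatedly assigning points
    # to the nearest centre and moving each centre to its points' mean.
    def kmeans_1d(points, k=2, iterations=20):
        centres = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: abs(p - centres[i]))
                clusters[nearest].append(p)
            centres = [sum(c) / len(c) if c else centres[i]
                       for i, c in enumerate(clusters)]
        return centres

    data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]   # two hidden groups
    print(sorted(kmeans_1d(data)))            # roughly [1.0, 10.1]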
The new approach, used in hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford's Brains in Silicon research programme, said that was also its limitation, as scientists are far from fully understanding how brains function.
Until now, the design of computers was dictated by ideas originated by the physicist John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.
The data (for instance, temperatures for a climate model or letters for word processing) are shuttled in and out of the processor's short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
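In code, that pattern looks like a loop that fetches data from memory into the processor, transforms it, and writes the result back, a schematic sketch of the idea rather than how any real chip is wired:

    # Schematic of the von Neumann pattern: data lives in memory, is
    # shuttled into the processor to be transformed by programmed
    # instructions, and the result is moved back to main memory.
    memory = {"temps": [21.5, 22.1, 19.8], "result": None}

    def run_program(memory):
        working = memory["temps"]                 # fetch: memory -> processor
        average = sum(working) / len(working)     # execute: the programmed step
        memory["result"] = average                # store: processor -> memory

    run_program(memory)
    print(memory["result"])   # 21.133...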
The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.
They are not programmed. Rather, the connections are weighted according to correlations in data that the processor has already learned. Those weights are then altered as data flows into the chip, causing the neuron-like elements to spike. That generates a signal that travels to other components and changes the neural network, in essence programming the next actions.
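The details of these chips are proprietary, but the spike-then-rewire behaviour described above can be sketched with a toy leaky integrate-and-fire neuron whose input weights are strengthened whenever it fires, a simplified, Hebbian-style illustration, not any vendor's actual design.

    # Toy spiking neuron: input builds up a leaky potential; crossing a
    # threshold fires a spike, and the weights on the active inputs are
    # strengthened, so incoming data reshapes the network's future behaviour.
    class Neuron:
        def __init__(self, n_inputs, threshold=1.0, leak=0.9, rate=0.05):
            self.weights = [0.5] * n_inputs
            self.potential = 0.0
            self.threshold, self.leak, self.rate = threshold, leak, rate

        def step(self, inputs):
            self.potential = self.potential * self.leak + sum(
                w * x for w, x in zip(self.weights, inputs))
            if self.potential < self.threshold:
                return False
            # spike: reset the potential, then reinforce the active inputs
            self.potential = 0.0
            self.weights = [w + self.rate * x
                            for w, x in zip(self.weights, inputs)]
            return True

    neuron = Neuron(n_inputs=2)
    for t in range(6):
        fired = neuron.step([1.0, 0.0])   # repeated signal on the first input
        print(t, fired, [round(w, 2) for w in neuron.weights])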
"Instead of bringing data to computation, we can now bring computation to data," said Dr Dharmendra Modha, an IBM computer scientist who leads the company's cognitive computing research. The new computers, which are still based on silicon chips, will not replace today's computers but will augment them, at least for now.
IBM and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said it is coming out this year with a commercial version.
"That reflects the zeitgeist," said Terry Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered early biologically inspired algorithms. "Everyone knows there is something big happening, and they're trying to find out what it is."