What is artificial intelligence (AI)? Why is there so much discussion in science and tech circles about it?
Artificial intelligence refers to computer systems that have been programmed, or have learnt, to perform tasks that would otherwise require human intelligence. Many apps and software tools already make mundane work easier by doing a part of it for us, based on acquired intelligence. Companies like Uber are working on cars that travel from point to point, negotiating obstacles on the way and taking decisions on their own to ensure the journey is uneventful.
While the full evolution of AI can open up a world of incredible possibilities, a fear many scientists share is that computers may gradually begin doing things differently from the way a human programmer would do them. The worst nightmare in this scenario is the creation of a new-age Frankenstein’s monster, a super-intelligent entity that is beyond human control.
And what was the exchange between Elon Musk and Mark Zuckerberg about?
It started when the Facebook CEO was asked his views on AI during a Facebook Live Q&A on July 23. A viewer referred to fears expressed by Musk about AI getting out of hand (details in next question). Zuckerberg rejected Musk’s fears. “I have pretty strong opinions on this. I am optimistic,” he said. “And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”
Two days later, Musk hit back. “I’ve talked to Mark [Zuckerberg] about this. His understanding of the subject is limited,” he tweeted on July 25. Zuckerberg didn’t respond to this tweet, but wrote a Facebook post congratulating the AI team at his company, and reiterating his faith in the ability of AI to do good.
“One reason I’m so optimistic about AI is that improvements in basic research improve systems across so many different fields — from diagnosing diseases to keep us healthy, to improving self-driving cars to keep us safe, and from showing you better content in News Feed to delivering you more relevant search results,” he wrote. “Every time we improve our AI methods, all of these systems get better. I’m excited about all the progress here and its potential to make the world better.”
So why is Musk apprehensive about AI?
The comment that Zuckerberg was asked about was made on July 15, when Musk told the National Governors Association summer meeting at Providence, Rhode Island: “I have exposure to the very cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
Musk has held such views for several years now. In 2014, addressing students at MIT, he had said, “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that… Increasingly, scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” In the same speech, Musk compared AI to “summoning the demon”, and warned we might not be able to control it.
Also in 2014, Musk had tweeted: “We need to be super careful with AI. Potentially more dangerous than nukes.”
In June 2016, he told The Code Conference (where industry influencers have in-depth conversations about the current and future impact of digital technology on our lives), “I don’t love the idea of being a house cat,” adding that the way to avoid that fate was a “neural lace”, a sort of injectable mesh that fits on the human brain and gives it digital computing capabilities. “Creating a neural lace is the thing that really matters for humanity to achieve symbiosis with machines”, he tweeted on June 4, 2016.
In February 2017, Musk reiterated the need for humans to become cyborgs — through a “merger of biological intelligence and machine intelligence” — to keep up with the robots that would soon take away a huge number of jobs. During his July 15 speech, he stressed the need for immediate regulation of AI: “We need to be proactive about regulation instead of reactive,” he said. “Because I think by the time we are reactive in AI regulation, it’s too late.” Governments couldn’t afford to wait until “a whole bunch of bad things happen”, because AI represents “a fundamental risk to the existence of civilisation”, Musk said.
Is Musk alone in taking a grim view of AI?
No, Musk isn’t the only one who believes unregulated AI could be a disaster for humanity. In an AMA (Ask Me Anything) session on Reddit earlier this year, Microsoft co-founder Bill Gates said that in a few years, AI would be “strong enough to warrant concern”. Theoretical physicist Stephen Hawking had told the BBC in 2014 that “the development of full artificial intelligence could spell the end of the human race”.
Okay, but didn’t Facebook panic and shut down its AI programme this week?
There have, indeed, been reports about Facebook shutting down one of its AI programmes, apparently because things “went out of control” with two chatbots that had started to talk to each other in a language humans could not understand. The reason Facebook shut the programme, however, was not runaway AI — rather, the programme simply did not produce anything of use to the company.
The programme in question allowed bots (autonomous programmes on a network) to communicate and negotiate with each other, in ways similar to humans. Unveiling the programme in June 2017, Facebook had said that these AI chatbots could create their own sentences, and did not have to stick to a script.
The problem was that the language they created to negotiate made sense to them, but could not be understood by humans — and was, therefore, useless. This is how the conversation between the chat agents went:
Bob: I can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me
Finally, then: does AI really pose an existential threat to humanity?
It is important to understand that most research is still at the machine learning stage — which entails training deep neural networks (loosely modelled on the human brain) on a task over and over again. For example, to ensure a programme or machine can recognise images correctly, the training exercise would involve showing the network millions and millions of labelled images, until it can identify new ones correctly.
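The principle of learning by repetition can be illustrated with a toy example — a single artificial neuron, the simplest building block of the deep networks described above, shown the same labelled examples again and again until it classifies them correctly. This is only a sketch of the idea (real image classifiers use millions of images and far larger networks), and all names in it are illustrative, not from any actual research system:

```python
# A single artificial neuron trained by repeated exposure to labelled
# examples. Each pass over the data (an "epoch") nudges the weights a
# little whenever the neuron gets an example wrong.

def train(examples, epochs=50, lr=0.1):
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias term
    for _ in range(epochs):               # repeated exposure to the data
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # zero when the neuron is right
            w[0] += lr * err * x1         # adjust only on mistakes
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn the logical AND function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

After enough repetitions the neuron stops making mistakes on the training examples — the same basic loop, scaled up enormously, is what “training” means in the image-recognition systems discussed here.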
AI is a complex subject; it would be simplistic to look at it as all bad or all good. But robots and AI taking away middle-class manufacturing jobs in the not-so-distant future is a very real prospect that will have to be addressed by governments sooner than they probably think. Then there’s the question of removing racial/class/gender bias from AI-driven programmes — Microsoft was forced to shut down its chatbot Tay within 16 hours of launching it in March 2016, after Tay quickly learnt and tweeted offensive material.
On the other hand, research shows AI can help identify diseases better and faster, and be a boon to medical research. Self-driving cars are already here — in fact, Elon Musk’s Tesla cars come with an “Autopilot” mode. Apple, Google, Microsoft, Facebook and Samsung are all relying on AI in some form in their products. There is little doubt that sooner or later, discussions about its impact on humanity will move to public, legislative and policy-making fora.