“AI is not going to dominate us or take over humanity, it’s going to amplify human intelligence,” said Yann LeCun, vice president and chief AI scientist, at Meta’s first Build with AI Summit held in Bengaluru on Wednesday.
The AI scientist believes using AI assistants is like having a team of intelligent people at one's disposal, and said these assistants may eventually be smarter than their users. "AI assistants might eventually be smarter than us, but we shouldn't feel threatened by that and should feel empowered," LeCun said.
Talking about India’s role in shaping a future with AI, LeCun said, “India has an important role to play, not just in AI technology development for local products but also for the international market.”
Meta AI, powered by the open-source Llama model, has over 500 million monthly active users worldwide and is projected to become the most widely used AI assistant by the end of 2024, with India as its largest market. The assistant is available in English and Hindi on WhatsApp, Instagram, and Facebook.
“Open source is not just important today, it is going to become even more important in the future, where AI is going to become a common infrastructure that all of us across the world will share,” LeCun said.
Sandhya Devanathan, vice president, Meta India, announced the launch of AI Studio in India, making it the first country outside the United States to get access. AI Studio lets users create Llama-powered AI avatars, often seen as digital representations of themselves, based on their interests.
Despite these advancements, Meta’s chief AI scientist said that even the most capable large language models (LLMs) aren’t yet as intelligent as a four-year-old, suggesting that human-level AI remains distant.
However, LeCun did say that generative AI is one of Meta’s five core pillars, and the company is working on next-generation AI systems with the goal of achieving human-level intelligence. One such project, internally called Advanced Machine Intelligence (AMI), aims to address the limitations currently seen in LLMs.