The UK hosted a major artificial intelligence (AI) summit this week, bringing together political leaders and tech experts to discuss both the promise and the potential perils of this rapidly advancing technology. As AI capabilities advance, countries around the world are racing to get ahead of the curve and establish ground rules. The summit produced several notable declarations and initiatives that offer a glimpse into the future of AI governance. Let’s break down the key highlights.
A big focus of the gathering was on establishing global coordination and standards around AI safety. This led to the signing of the new Bletchley Declaration, which was agreed to by 28 countries, including heavyweights like the US, UK, and China.
The declaration lays out plans for greater transparency from AI developers regarding safety practices and for more scientific collaboration on understanding AI’s risks. It is being hailed as a landmark achievement in getting the world’s AI leaders aligned on managing the dangers AI poses to everyday life, whether from “misuse or unintended issues of control relating to alignment with human intent.” While light on detail, it is seen as an important first step towards creating international norms and mitigation strategies.
US Vice President Kamala Harris gave a speech highlighting current harms from AI, such as discrimination, misinformation, and threats to democratic processes, saying that they are already affecting vulnerable populations. She announced that the Biden administration will take steps to manage AI’s societal risks and the regulatory challenges it raises.
Harris stressed that, in addition to existential threats, we need to address AI dangers already affecting marginalized groups and democratic institutions. Her remarks signalled a US government focus on AI ethics and consumer protections.
As the CEO of Tesla and SpaceX, Elon Musk has been vocal about his fears of AI getting out of human control. He reiterated those concerns at the summit, describing advanced AI as “one of the biggest threats to humanity” given its potential to become far more intelligent than people.
“So, you know, we’re not stronger or faster than other creatures, but we are more intelligent. And here we are, for the first time really in human history, with something that’s going to be far more intelligent than us,” he said at the summit.
While he hopes AI’s development can be guided responsibly, he admitted we may not be able to control such an entity, although “we can aspire to guide it in a direction that’s beneficial to humanity”.
At the summit, Mustafa Suleyman, co-founder of DeepMind, the UK-based AI lab that Google bought and later made its core AI division, suggested that a temporary halt in AI development might become necessary in the near future. He told journalists that the question would have to be taken very seriously within the next five years or so.
However, he also stressed that current state-of-the-art AI models, such as ChatGPT, do not pose a major risk. He said: “There is no proof today that cutting-edge models like GPT-4 … cause any significant or disastrous harms.”
The UK government announced a major £225 million investment in a powerful new supercomputer called Isambard-AI. It will be built at the University of Bristol and is intended to drive breakthroughs in healthcare, energy, climate modelling and other fields. Along with another planned supercomputer called Dawn, the system is part of the UK’s aim to lead in AI while partnering with allies like the US. Both machines are expected to come online next summer.
With major players like the US, the EU and China also vying for AI leadership, it’s clear that a high-stakes technological arms race is under way. While the UK summit focused on cooperation and safety, each power wants to dictate the rules and standards for AI in alignment with its own economic and political goals.
President Joe Biden said, “America will lead the way during this period of technological change” after signing an AI executive order on October 30, even as the EU aggressively drafts its own AI regulations and China has unveiled policies to shape AI’s trajectory. But with emerging frameworks like the Bletchley Declaration, perhaps these rival powers can work together to prevent unchecked AI from spiralling out of control.