OpenAI has said that while superintelligent systems will bring many benefits, they will also carry risks that could be “potentially catastrophic”.
To mitigate these harms, the ChatGPT-maker suggested conducting empirical research on AI safety and alignment, including on the question of whether the entire AI industry “should slow development to more carefully study these systems.” The company also warned that the industry is moving closer to developing “systems capable of recursive self-improvement.”
“Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work,” OpenAI said in a blog post on November 6.
With these remarks, OpenAI appears to be hinting that continual learning in AI systems might be on the horizon. The absence of continual learning, a capability closely tied to recursive self-improvement, has repeatedly been identified as a major roadblock on the path to artificial general intelligence (AGI), a hypothetical level of intelligence at which AI systems can perform most tasks as well as or better than humans.
Just last month, Prince Harry and his wife Meghan Markle joined prominent computer scientists, economists, artists, evangelical Christian leaders, and American conservative commentators such as Steve Bannon and Glenn Beck to call for a ban on AI “superintelligence” that threatens humanity.
However, AI research scientist Andrej Karpathy has said that AGI might still be a decade away, as several issues are yet to be worked out. “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues,” Karpathy said in a recent appearance on a podcast hosted by Dwarkesh Patel.
Meanwhile, OpenAI has said that it does not expect typical AI regulation to be able to address potential harms arising from superintelligent AI systems.
“In this case, we will probably need to work closely with the executive branch and related agencies of multiple countries (such as the various safety institutes) to coordinate well, particularly around areas such as mitigating AI applications to bioterrorism (and using AI to detect and prevent bioterrorism) and the implications of self-improving AI,” the blog post read. OpenAI also provided the following recommendations to achieve “a positive future with AI”:
Information-sharing: Research labs working on frontier AI models should agree on shared safety principles and share safety research, learnings about new risks, mechanisms to reduce race dynamics, and more, OpenAI said.
Unified AI regulation: The AI company advocated for minimal additional regulatory burdens on developers and open-source models, “and almost all deployments of today’s technology,” while cautioning against a patchwork of legislation across the 50 US states.
Cybersecurity, privacy risks: Partnering with the federal government can help promote innovation, protect the privacy of conversations with AI, and defend against the misuse of powerful systems by bad actors, OpenAI said.
AI resilience ecosystem: It recommended building an AI resilience framework similar to the cybersecurity ecosystem, which comprises the software, encryption protocols, standards, monitoring systems, and emergency response teams designed to protect internet users. “We will need something analogous for AI, and there is a powerful role for national governments to play in promoting industrial policy to encourage this,” OpenAI said.
Striking a slightly more optimistic note, OpenAI said that it expects AI systems to be capable of making “very small” scientific discoveries by 2026. “In 2028 and beyond, we are pretty confident we will have systems that can make more significant discoveries,” it added.
Regarding the impact of AI on jobs, OpenAI acknowledged that “the economic transition may be very difficult in some ways, and it is even possible that the fundamental socioeconomic contract will have to change,” but added that “in a world of widely-distributed abundance, people’s lives can be much better than they are today.”