
Microsoft’s new AI team is looking to pursue artificial superintelligence by building world-class, frontier-grade research capability in-house.
“Microsoft needs to be self-sufficient in AI. And to do that, we have to train frontier models of all scales with our own data and compute at the state-of-the-art level,” Mustafa Suleyman, the tech giant’s AI CEO, was quoted as saying by Business Insider.
“We’ve got a huge mission ahead of us. We have $300 billion of revenues, a huge responsibility to make sure that all of our products are AI-first, that we deploy agents everywhere, and we really make all the workflows that customers use today much more intelligent,” the AI chief said.
For years, Microsoft’s approach to AI has focused on building smaller models, post-training existing models for new purposes, and channeling resources toward OpenAI. However, the deal between Microsoft and OpenAI was revised earlier this month to include new terms on artificial general intelligence (AGI), intellectual property (IP) rights, API access, cloud services, and more.
Notably, the revised deal allows Microsoft to independently pursue AGI, alone or in partnership with third parties. “If Microsoft uses OpenAI’s IP to develop AGI, prior to AGI being declared, the models will be subject to compute thresholds; those thresholds are significantly larger than the size of systems used to train leading models today,” the revised agreement states.
While it appears that Microsoft’s hands are no longer tied, its newly formed superintelligence team is likely to face stiff competition from similar units in companies such as Meta, Google, Anthropic, and even its partner OpenAI.
To be sure, Microsoft will remain OpenAI’s frontier model partner for the next decade or so. This means the Windows-maker will continue to hold IP rights to AI models and products developed by OpenAI until 2032, which now includes models rolled out post-AGI as well.
When asked whether Microsoft would consider using AI models developed by other AI startups such as Anthropic, Suleyman said, “There’s no reason for us to be religious about that. Obviously, we’re very focused on getting our products working.”
He also emphasised AI safety as a key priority for Microsoft’s superintelligence team. “That should be something we all take for granted, but it actually needs to be stated and repeated, and it needs to be the No. 1 most important thing that humanity focuses on,” Suleyman said.
“There’s a risk with these systems that they get extremely smart and run away from us, and we have to design them so that they don’t do that. That requires a humanist intent, which keeps humans at the top of the food chain,” he added.