This is an archive article published on July 28, 2023

Google, Microsoft, Anthropic & OpenAI collaborate for responsible use of AI: Here’s what it means

The tech giants formed the Frontier Model Forum to ensure that AI developments are responsibly dealt with and deployed.

Google in its blog said that the body will draw on the technical and operational expertise of its member companies to benefit the AI ecosystem. (Image: Pixabay)

Alongside the proliferation of Artificial Intelligence, one concern has been raised consistently: the responsible use of a technology that is making strides worldwide. While several nations are in the process of laying down frameworks to govern the responsible use of AI, big tech companies are working towards a collective effort to foster its responsible implementation.

Even as the clamour for AI regulation grows, Google, Anthropic, Microsoft, and OpenAI recently formed an AI safety body. Known as the Frontier Model Forum, the body promotes the safe and responsible development of AI models. With this safety body, the tech giants have made it apparent that, for them, it is not just about creating path-breaking technologies but also about ensuring those technologies align with the interests of the rest of the world.

Google said in its blog that the body will draw on the technical and operational expertise of its member companies to benefit the AI ecosystem, including by advancing technical evaluations and benchmarks, and by developing a public library of solutions to support industry best practices and standards.


The body also has four core objectives – advancing AI safety research; identifying best practices; collaborating with policymakers, academics, civil society, and companies; and supporting efforts to develop applications that can help meet society’s greatest challenges.

The body also has criteria for membership. It is open to organisations that develop and deploy frontier models (as defined by the forum); demonstrate a strong commitment to frontier model safety, including through technical and institutional approaches; and are willing to contribute to advancing the forum’s efforts including by participating in joint initiatives and supporting the development and functioning of the initiative.

According to Google, the body aims to promote the responsible development and deployment of frontier AI models by encouraging discussions and actions on AI safety and responsibility. The forum will focus on key areas such as: (1) identifying best practices to mitigate potential risks by encouraging knowledge-sharing among governments, industry, civil society, and academia; (2) advancing AI safety research by coordinating efforts in domains such as interpretability, adversarial robustness, and oversight; and (3) facilitating secure information sharing among governments, companies, and other stakeholders to address AI safety and risks. The body will also collaborate with the ongoing efforts of various governments, organisations, and other international bodies.
