
The companies developing the biggest artificial intelligence foundation models, like OpenAI and Google, are becoming less and less transparent, according to a study by Stanford HAI (Human-Centered Artificial Intelligence).
The Stanford HAI on Wednesday released its Foundation Model Transparency Index (FMTI). The index evaluates transparency based on 100 different indicators that include how companies build their foundation models, how the models work, and how they are used by others. Teams from Stanford, MIT, and Princeton assessed 10 major model companies using this 100-point index and found that there is a lot of room for improvement.
A lack of transparency is not a new phenomenon in the digital technology industry. From deceptive ads to unclear wage practices in aggregator apps to opaque content moderation systems on social media platforms, transparency issues have remained a mainstay of Silicon Valley.
“As AI technologies rapidly evolve and are rapidly adopted across industries, it is particularly important for journalists and scientists to understand their designs, and in particular the raw ingredients, or data, that powers them,” said Shayne Longpre, a PhD candidate at MIT, in a press statement.
Transparency is also important for policymakers to make better decisions. Foundation models can raise significant questions surrounding energy use, labour practices, intellectual property, bias and more. “If you don’t have transparency, regulators can’t even pose the right questions, let alone take action in these areas,” added Bommasani.
Among the 10 companies building foundation models that were assessed, Meta scored the highest, with 54 for its Llama 2 model. BigScience, the group behind BLOOMZ, came second with 53, with OpenAI's GPT-4 in third with 48. But none of these are good marks, according to the researchers.
Bommasani asserts that this does not mean Meta should be treated as the goalpost for other companies to move towards. Rather, everyone should be trying to get to 80, 90, or possibly 100.