This is an archive article published on March 28, 2024

New AI benchmark tests speed of responses to user queries

MLCommons released new benchmarks focusing on the speed of AI applications and their efficiency. These benchmarks measure how quickly AI applications such as ChatGPT can respond to user queries and generate images from text prompts.

Image caption: Google, OpenAI, and Microsoft are throwing their best punches with Gemini Advanced, ChatGPT Plus, and Copilot Pro. (Express image)

Artificial intelligence benchmarking group MLCommons on Wednesday released a fresh set of tests and results that rate the speed at which top-of-the-line hardware can run AI applications and respond to users.

The two new benchmarks added by MLCommons measure the speed at which the AI chips and systems can generate responses from the powerful AI models packed with data. The results roughly demonstrate how quickly an AI application such as ChatGPT can deliver a response to a user query.

One of the new benchmarks measures the speed of a question-and-answer scenario for large language models, using Meta Platforms' Llama 2 model, which includes 70 billion parameters. MLCommons officials also added a second text-to-image generator, based on Stability AI's Stable Diffusion XL model, to the suite of benchmarking tools, called MLPerf.
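To give a sense of what such a benchmark captures, here is a minimal sketch in Python of timing per-query response latency. The `generate()` function is a made-up placeholder standing in for a real model such as Llama 2 70B; MLPerf's actual harness is far more elaborate and is not reproduced here.

```python
import time

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a large language model call.
    time.sleep(0.05)  # simulate inference work
    return "A placeholder answer to: " + prompt

def mean_latency(prompts) -> float:
    # Record per-query response time, the quantity a
    # question-and-answer speed benchmark is designed to capture.
    timings = []
    for prompt in prompts:
        start = time.perf_counter()
        generate(prompt)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    queries = ["What is MLPerf?", "Summarise today's news."]
    print(f"mean latency: {mean_latency(queries):.3f} s")
```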

Servers powered by Nvidia’s H100 chips, built by the likes of Alphabet’s Google, Supermicro and Nvidia itself, handily won both new benchmarks on raw performance. Several server builders submitted designs based on Nvidia’s less powerful L40S chip.

Server builder Krai submitted a design for the image generation benchmark with a Qualcomm AI chip that draws significantly less power than Nvidia’s cutting-edge processors.

Intel also submitted a design based on its Gaudi2 accelerator chips. The company described the results as “solid.”

Raw performance is not the only measure that is critical when deploying AI applications. Advanced AI chips suck up enormous amounts of energy, and one of the most significant challenges for AI companies is deploying chips that deliver an optimal amount of performance for a minimal amount of energy.


MLCommons has a separate benchmark category for measuring power consumption.
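As an illustration of that trade-off, a minimal sketch of a performance-per-watt ratio is shown below. The figures are made up for illustration and are not MLPerf results; MLPerf's power measurement methodology is more detailed than a single ratio.

```python
def performance_per_watt(queries_per_second: float, avg_power_watts: float) -> float:
    # Efficiency figure of merit: queries answered per watt drawn.
    return queries_per_second / avg_power_watts

# Hypothetical numbers for two systems with the same throughput:
print(performance_per_watt(1000.0, 700.0))  # higher-power accelerator
print(performance_per_watt(1000.0, 150.0))  # lower-power accelerator
```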
