This is an archive article published on August 23, 2023

OpenAI announces fine-tuning for GPT-3.5 Turbo, allowing more customisations for developers

OpenAI has also stated that it will roll out fine-tuning for GPT-4 this fall.

This feature will allow developers to create unique and differentiated experiences for their users. (Image: OpenAI)

Ever since the wild success of ChatGPT, San Francisco-based AI powerhouse OpenAI seems to be on a roll. The company has been announcing updates in rapid succession in the past few months. Now, OpenAI has launched fine-tuning for GPT-3.5 Turbo.

The Sam Altman-led company claims that with a little bit of training, GPT-3.5 Turbo can match or even outperform GPT-4, its most advanced large language model, currently available to ChatGPT Plus subscribers, on certain narrow tasks. A month after it unveiled the custom instructions feature to personalise ChatGPT's responses, the company now allows developers to fine-tune GPT-3.5 Turbo on their company's data and run it at scale.

“This update gives developers the ability to customise models that perform better for their use cases and run these custom models at scale. Early tests have shown a fine-tuned version of GPT-3.5 Turbo can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks,” read the post on the company’s official website. OpenAI also stated that just like with its APIs, data sent in and out of the fine-tuning API is owned by the customer and is not used by OpenAI or any other organisation to train other models. The company also announced that fine-tuning for GPT-4 will be coming this fall. 

What are the use cases of fine-tuning?

This feature will allow developers to create unique and differentiated experiences for their users. According to OpenAI, developers will now be able to run supervised fine-tuning to make the model perform better for their specific use cases.
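In practice, the workflow amounts to preparing chat-formatted training examples, uploading them, and starting a fine-tuning job. The snippet below is a rough sketch using the openai Python package of that period (pre-1.0); the file name and API key are placeholders, not values from OpenAI's announcement.

```python
# Rough sketch of the supervised fine-tuning flow with the pre-1.0 openai
# Python package. File name and API key below are illustrative placeholders.
import openai

openai.api_key = "sk-..."  # replace with your own key

# 1. Upload a JSONL file of chat-formatted training examples, e.g. lines like:
# {"messages": [{"role": "system", "content": "You are a support agent."},
#               {"role": "user", "content": "Where is my order?"},
#               {"role": "assistant", "content": "Let me check that for you."}]}
training_file = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on GPT-3.5 Turbo with the uploaded file.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # the job runs asynchronously; poll it until it completes
```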

The company added that in its private beta, fine-tuning customers were able to improve model performance on common needs such as steerability, reliable output formatting, and custom tone.

Other than enhanced performance, fine-tuning also allows businesses to shorten their prompts while maintaining similar performance. The new update can handle 4k tokens, double that of previous fine-tuned models. OpenAI said that fine-tuning is most powerful when combined with other techniques like prompt engineering, information retrieval, and function calling.
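Once a job finishes, the fine-tuned model gets its own identifier and is called like any other chat model. The sketch below assumes the same pre-1.0 openai package; the model ID shown is illustrative, and the shorter prompt reflects instructions that have already been baked in through fine-tuning.

```python
# Calling a fine-tuned model after the job completes. The model ID below is
# illustrative; the real one is returned by the finished fine-tuning job.
import openai

response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",
    messages=[
        # Behaviour taught during fine-tuning (tone, format) no longer needs
        # to be restated in the prompt, which is how prompts get shorter.
        {"role": "user", "content": "Summarise today's ticket backlog."},
    ],
)
print(response.choices[0].message["content"])
```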

 
