Sam Altman announcing GPT-4 Turbo (Image credit: OpenAI)

Sam Altman introduced GPT-4 Turbo, OpenAI's most capable large language model yet, on Monday at the company's first developer conference, OpenAI DevDay. The latest edition of GPT also adds vision support, so it can now accept images as input alongside text.
GPT-4 Turbo accepts a context of up to 128,000 tokens, or roughly 100,000 words, and it is also the most up-to-date model, trained on data through April 2023. By comparison, GPT-4's context window is limited to 32,768 tokens. In practice, 128,000 tokens is about 300 pages of text, so you could paste an entire novel into a single prompt and ask GPT-4 Turbo to rewrite it in one go.
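As a rough illustration, here is a minimal sketch of sending a long document to GPT-4 Turbo through the Chat Completions API using the openai Python SDK (v1+). The model identifier gpt-4-1106-preview was the preview name used at launch, and novel.txt is a placeholder file, not something from the announcement:

```python
# Minimal sketch: send a long document to GPT-4 Turbo in a single prompt.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# "novel.txt" is a placeholder; the 128K-token window fits roughly 300 pages of text.
with open("novel.txt", encoding="utf-8") as f:
    novel = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview identifier at launch
    messages=[
        {"role": "system", "content": "You are a skilled literary editor."},
        {"role": "user", "content": f"Rewrite the following novel in a lighter tone:\n\n{novel}"},
    ],
)
print(response.choices[0].message.content)
```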
The new generative AI model is not just more capable; it is also cheaper to run. GPT-4 Turbo costs $0.01 per 1,000 input tokens, compared with $0.03 per 1,000 input tokens for GPT-4, so developers can deploy a more capable model at a third of the input price.
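As a quick back-of-the-envelope check on what that difference means, the sketch below prices the same 32,768-token prompt (the largest GPT-4 itself can accept) under both input rates; output-token pricing is not covered here:

```python
# Back-of-the-envelope input cost for a 32,768-token prompt,
# the largest prompt that fits GPT-4's own context window.
PROMPT_TOKENS = 32_768

GPT4_RATE = 0.03 / 1000        # dollars per input token (GPT-4)
GPT4_TURBO_RATE = 0.01 / 1000  # dollars per input token (GPT-4 Turbo)

print(f"GPT-4:       ${PROMPT_TOKENS * GPT4_RATE:.2f}")        # ~$0.98
print(f"GPT-4 Turbo: ${PROMPT_TOKENS * GPT4_TURBO_RATE:.2f}")  # ~$0.33
```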
Note that there are two variants of GPT-4 Turbo: one accepts text input only, while GPT-4 Turbo with Vision understands both text and images.
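For the Vision variant, a hedged sketch of what an image-plus-text request looks like. The gpt-4-vision-preview model name reflects the preview naming at launch, and the image URL is a placeholder:

```python
# Minimal sketch: ask GPT-4 Turbo with Vision about an image.
# Assumes the openai Python SDK (v1+); the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-capable GPT-4 Turbo variant
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```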
GPT-4 Turbo and a bunch of other stuff: https://t.co/0gYM9k3mlf
— Sam Altman (@sama) November 6, 2023
OpenAI also unveiled an improved text-to-speech model that generates natural-sounding audio from text via an API, with six preset voices to choose from. To make GPT-4 Turbo more useful for developers, OpenAI has added a "JSON mode" that constrains the model's output to valid JSON, making responses easier to parse programmatically.
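A hedged sketch of how both features are exposed through the openai Python SDK (v1+). The tts-1 model name and the "alloy" voice were among those announced at DevDay, and speech.mp3 is simply a placeholder output path:

```python
# Minimal sketch: JSON mode with GPT-4 Turbo, then text-to-speech with a preset voice.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# JSON mode: response_format forces the model to return a valid JSON object.
chat = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "Summarize the DevDay GPT-4 Turbo announcement as JSON."},
    ],
)
print(chat.choices[0].message.content)  # guaranteed to parse as JSON

# Text-to-speech: "alloy" is one of the six preset voices.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo is OpenAI's most capable model yet.",
)
speech.stream_to_file("speech.mp3")  # placeholder output path
```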
GPT-4 Turbo is available to developers via the API starting today and will roll out to the general public through the $20-a-month ChatGPT Plus subscription over the next few weeks.