The OpenAI platform is getting some major changes as the company announced four updates at its ‘DevDay’ in San Francisco on Tuesday (October 1).
These include the ‘Realtime API,’ ‘Prompt Caching,’ ‘Model Distillation,’ and ‘vision fine-tuning.’
All four features were made available from the announcement date, with some expected to be refined further as developer feedback is collected.
Today at DevDay SF, we’re launching a bunch of new capabilities to the OpenAI platform: pic.twitter.com/y4cqDGugju
— OpenAI Developers (@OpenAIDevs) October 1, 2024
OpenAI’s four updates announced
‘Realtime API’
One of the most significant updates is the ‘Realtime API,’ which lets developers “build fast speech-to-speech experiences into their applications.”
The API has launched in public beta and has been likened to ChatGPT’s Advanced Voice Mode. It enables “all paid developers to build low-latency, multimodal experiences in their apps.”
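Under the hood, the Realtime API is a persistent WebSocket connection that exchanges JSON events. Here is a minimal sketch in Python, assuming the beta endpoint, the gpt-4o-realtime-preview model name, and the response.create/response.done event types from the launch materials; treat all of these as illustrative rather than definitive:

```python
import asyncio
import json
import os

import websockets  # third-party: pip install websockets


async def main() -> None:
    # Endpoint and model name are assumptions based on the beta announcement.
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",  # beta opt-in header
    }
    # NOTE: older websockets releases take `extra_headers`; newer ones
    # renamed it `additional_headers`. Adjust for your installed version.
    async with websockets.connect(url, extra_headers=headers) as ws:
        # Ask the model to generate a response over the open socket.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["text"], "instructions": "Say hello."},
        }))
        # Stream server events until the response is finished.
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("type"))
            if event.get("type") == "response.done":
                break


asyncio.run(main())
```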
Audio input and output have also been introduced in the Chat Completions API to support use cases that don’t require the Realtime API’s low-latency benefits. Developers can now pass text or audio inputs into GPT-4o, and the model will respond with their choice of text, audio, or both.
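For that non-realtime path, the request can be made with the official openai Python SDK. A minimal sketch, assuming an audio-capable GPT-4o snapshot named gpt-4o-audio-preview and the modalities/audio request parameters described in the announcement:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request both a text and an audio rendering of the reply.
completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",               # assumed audio-capable snapshot
    modalities=["text", "audio"],               # ask for text plus audio output
    audio={"voice": "alloy", "format": "wav"},  # voice and container format
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

# The reply includes base64 audio alongside a text transcript.
print(completion.choices[0].message.audio.transcript)
```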
Previously, creating a similar voice assistant experience required chaining several steps together, such as transcribing speech with a separate recognition model, running the text through a language model, and converting the reply back to audio.
‘Prompt Caching’
To further help those building AI applications, OpenAI announced ‘Prompt Caching,’ which reduces costs and latency. “By using recently seen input tokens, developers can get a 50% discount and faster prompt processing times,” OpenAI writes in its news release.
Prompt Caching is automatically applied on the latest versions of GPT-4o, GPT-4o mini, o1-preview, and o1-mini, as well as fine-tuned versions of those models.
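Because caching keys on a shared prompt prefix, the practical takeaway for developers is to put the static part of a prompt (system instructions, examples) first and the variable part last. A minimal sketch of that structure, assuming the cached_tokens usage field reported by the API:

```python
from openai import OpenAI

client = OpenAI()

# Static instructions go first so repeated requests share a cacheable prefix;
# caching applies automatically once the prompt is long enough.
SYSTEM_PROMPT = "You are a support agent for ExampleCo. ..."  # long, unchanging text


def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # static prefix
            {"role": "user", "content": question},          # variable suffix
        ],
    )
    # Reports how many input tokens hit the cache (assumed field name).
    print(response.usage.prompt_tokens_details.cached_tokens)
    return response.choices[0].message.content
```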
‘Model Distillation’
The new ‘Model Distillation’ offering provides an integrated workflow for managing the entire distillation pipeline directly within the OpenAI platform.
“This lets developers easily use the outputs of frontier models like o1-preview and GPT-4o to fine-tune and improve the performance of more cost-efficient models like GPT-4o mini,” the company explains.
Before this, distillation required multiple manual steps spread across disconnected tools; the integrated workflow should make the process much easier and quicker.
The full suite includes Stored Completions, Evals, and Fine-tuning, all of which were made available on Tuesday.
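In practice, the first step of the pipeline is capturing a frontier model’s outputs as Stored Completions. A minimal sketch, assuming the store and metadata request parameters described in the launch materials:

```python
from openai import OpenAI

client = OpenAI()

# store=True saves this request/response pair on the platform, where it can
# feed Evals and later serve as fine-tuning data for a smaller model.
response = client.chat.completions.create(
    model="gpt-4o",
    store=True,                              # save as a Stored Completion
    metadata={"task": "distillation-demo"},  # tags for filtering later
    messages=[{"role": "user", "content": "Explain prompt caching in one sentence."}],
)
print(response.choices[0].message.content)
```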
‘Vision fine-tuning’ is the fourth OpenAI update
OpenAI previously introduced fine-tuning on GPT-4o, which has been used by “hundreds of thousands of developers,” and the team says its new ‘vision fine-tuning’ update makes it possible to fine-tune with images as well as text.
Fine-tuning with images works much like it does with text: developers prepare their image datasets in the proper format and then upload them to the platform.
The feature is available only to developers on paid usage tiers and is supported on the latest GPT-4o model snapshot.
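The expected dataset shape follows the chat format, with image references embedded in the user content. A minimal sketch of preparing one training example and starting a job with the openai Python SDK; the JSONL layout, the example URL, and the gpt-4o-2024-08-06 snapshot name are assumptions for illustration:

```python
import json

from openai import OpenAI

client = OpenAI()

# One chat-formatted training example mixing text with an image reference.
example = {
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "What road sign is shown here?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sign.jpg"}},  # hypothetical URL
        ]},
        {"role": "assistant", "content": "A yield sign."},
    ]
}

# Fine-tuning datasets are uploaded as JSONL, one example per line.
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# Upload the dataset and start a fine-tuning job on a GPT-4o snapshot.
upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-4o-2024-08-06")
```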
Featured Image: Via OpenAIDevs X post