At its DevDay, OpenAI announced a wide range of updates and new developer products, headlined by GPT-4 Turbo: a more capable model that supports a 128K context window and ships at reduced pricing. The new model has knowledge of world events up to April 2023 and is available to developers in preview. The company also improved function calling, which can now handle multiple function calls in a single message and follows instructions more accurately, and introduced a JSON mode that constrains the model to respond with valid JSON.
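As a sketch of how these options fit together, the payload below combines JSON mode with a function tool in a single chat completions request. The model id and field names follow the API as announced at DevDay; the `get_weather` function and its schema are hypothetical, added only for illustration.

```python
import json

# Sketch of a GPT-4 Turbo chat request combining JSON mode with a
# function tool. The payload shape follows the chat completions API;
# the get_weather function and its schema are illustrative only.
request = {
    "model": "gpt-4-1106-preview",  # GPT-4 Turbo preview
    # JSON mode: the model is constrained to emit syntactically valid JSON.
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "What's the weather in Paris and in Tokyo?"},
    ],
    # Function calling: the model may answer by requesting one or more
    # of these tool invocations instead of plain text.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# The payload serializes cleanly, ready to send with an HTTP client or SDK.
body = json.dumps(request)
```

Because the user asks about two cities, a model with parallel function calling could answer with two `get_weather` invocations in one response, which is the improvement to function calling described above.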
The Assistants API was another major release, designed to help developers build agent-like, goal-oriented experiences within their own applications. It supports new capabilities such as Code Interpreter and Retrieval, alongside function calling. The API also provides persistent, effectively unbounded threads, relieving developers of the need to manage conversation state around context window limits.
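To make the thread model concrete, the sketch below shows illustrative payloads for defining an assistant and appending messages to a thread. The tool type names follow the beta API as announced; the assistant's name, instructions, and messages are hypothetical.

```python
# Illustrative Assistants API payloads (tool type names per the beta
# API; the name, instructions, and user messages are hypothetical).
assistant = {
    "model": "gpt-4-1106-preview",
    "name": "Data helper",
    "instructions": "Answer questions about the user's uploaded files.",
    "tools": [
        {"type": "code_interpreter"},  # sandboxed Python execution
        {"type": "retrieval"},         # search over uploaded documents
    ],
}

# A thread is created once and messages are appended over its lifetime.
# Because threads are persistent and effectively unbounded, the client
# never trims history to fit a context window: the API handles truncation.
thread_messages = []
for turn in ["Load sales.csv and summarize it.", "Now plot revenue by month."]:
    thread_messages.append({"role": "user", "content": turn})
```

The design choice to keep state server-side is what distinguishes this API from plain chat completions, where the developer must resend (and eventually trim) the full message history on every call.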
OpenAI has also expanded its multimodal capabilities: GPT-4 Turbo now accepts images as input, enabling detailed image analysis and the reading of documents that contain figures. The DALL·E 3 API was rolled out for developers, letting them generate images and designs programmatically within their applications, and a new text-to-speech API provides high-quality speech generation with a choice of six preset voices.
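The three multimodal endpoints share the same request style; the sketch below shows hypothetical payloads for a vision-enabled chat request, a DALL·E 3 image generation, and a text-to-speech call. Model ids and field names follow the APIs as announced; the prompts and image URL are placeholders.

```python
# Vision: images are passed alongside text inside a single user message.
vision_request = {
    "model": "gpt-4-vision-preview",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this chart show?"},
            # Placeholder URL; images can also be sent base64-encoded.
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
    "max_tokens": 300,
}

# DALL·E 3: image generation from a text prompt.
image_request = {
    "model": "dall-e-3",
    "prompt": "A minimalist logo for a weather app",
    "size": "1024x1024",
}

# Text-to-speech: synthesize audio from text with a preset voice.
speech_request = {
    "model": "tts-1",
    "voice": "alloy",  # one of the six preset voices
    "input": "Hello from the new text-to-speech API.",
}
```

Note that vision reuses the chat completions endpoint with a mixed text-and-image message, while image generation and speech synthesis are separate endpoints with their own, flatter payloads.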
For model customization, OpenAI is opening experimental access to GPT-4 fine-tuning and launching a Custom Models program for organizations that need deeper customization. To make these advanced AI tools more accessible, OpenAI has reduced pricing across several services and increased rate limits for applications. It also introduced Copyright Shield, an initiative under which OpenAI will defend customers and pay the costs incurred if they face legal claims of copyright infringement.
In summary, OpenAI's DevDay unveiled major updates that significantly enhance the capabilities and accessibility of its AI models, providing developers with more powerful, efficient, and customizable tools to innovate within their applications.