OpenAI held its “Spring Updates” event on May 13, where Mira Murati, OpenAI's Chief Technology Officer, took the stage to present the company's latest advancements. During the event, Murati announced the company's new flagship AI model, GPT-4o. The model can understand voice, text, and images, and it can also speak with different emotions, making a conversation with it sound like talking to a human. Read on to know more about GPT-4o and how it works.
Murati made several announcements during the event, including a new ChatGPT desktop app and new ChatGPT features, but the GPT-4o model became the main attraction during the demo session. The “o” in GPT-4o stands for “omni,” signalling a model that delivers the intelligence of GPT-4 but faster and with broader capabilities across text, voice, and vision. During the address, Murati said that GPT-4o is 2x faster, 50 percent cheaper, and has five times higher rate limits than the current GPT-4 Turbo.
GPT-4o responds to users' queries more intelligently and provides real-time voice responses, making interactions more engaging. ChatGPT's earlier Voice Mode reportedly suffered from significant latency and limited information. The new model, by contrast, can generate real-time responses across text, voice, and vision. During the live event, OpenAI demonstrated how GPT-4o could help solve a math equation by offering hints and how it could walk through a coding problem in detail, among other tasks.
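For readers curious about what using the new model looks like in practice, here is a minimal sketch of a request to GPT-4o through OpenAI's Python SDK, asking for hints on a math problem captured in an image. The image URL and the prompt wording are hypothetical placeholders for illustration, not OpenAI's own demo code.

# Minimal sketch: asking GPT-4o for hints on a math problem shown in an image.
# Assumes the OpenAI Python SDK ("pip install openai") and an OPENAI_API_KEY
# environment variable; the image URL below is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Give me step-by-step hints for solving this equation "
                         "without revealing the final answer."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/math-problem.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)

The same chat endpoint accepts text-only prompts as well; the image part is simply an additional content item in the user message.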
With GPT-4o, users do not have to wait for the AI model to finish speaking; it can be interrupted with new questions, something GPT-4 Turbo did not allow. Additionally, the new model has voice modulation capabilities, called “emotive voices,” which let users hold human-like conversations with the chatbot. This was demonstrated at the event with GPT-4o narrating a story in different emotional tones.
The GPT-4o will be available worldwide for