OpenAI has introduced GPT-4o, the latest and most sophisticated iteration of its AI model, designed to make digital interactions feel remarkably human. This new update aims to enhance the user experience significantly, bringing advanced capabilities to a broader audience.
During the announcement, OpenAI's team demonstrated GPT-4o's new Voice Mode, which promises more natural, human-like conversation. The demo showcased the chatbot's capacity to handle interruptions and adjust its responses in real time, highlighting its improved interactivity.
CTO Mira Murati emphasised the model's accessibility, noting that GPT-4o extends the power of GPT-4 to all users, including those on the free tier. In a livestream presentation, Murati described GPT-4o, with the "o" standing for "Omni," as a major advancement in user-friendliness and speed.
The demonstrations included a variety of impressive features. For instance, ChatGPT's voice assistant responded quickly and could be interrupted without losing coherence, showcasing its potential to revolutionise AI-driven interactions. One demo involved a real-time tutorial on deep breathing, illustrating practical guidance applications.
Another highlight was ChatGPT's ability to read an AI-generated story in multiple voices, ranging from dramatic to robotic, and even to sing parts of it. ChatGPT's problem-solving skills were also on display as it walked a user through an algebra equation step by step, rather than simply providing the answer.
In a particularly notable demonstration involving the accessibility app Be My Eyes, GPT-4o described cityscapes and surroundings in real time, offering accurate assistance to visually impaired users. This feature could be a game-changer for accessibility.
GPT-4o also showcased enhanced personality and conversational abilities compared to earlier GPT models.