OpenAI has started rolling out its latest Advanced Voice Mode feature to a limited number of paid ChatGPT subscribers for testing. The new feature lets users hold more natural conversations with the chatbot in real time. Here's everything you need to know about the new ChatGPT feature.
Advanced Voice Mode Interacts Naturally with Users
Advanced Voice Mode can respond to natural interactions, including sarcasm and humor, in real time. As in human conversation, users can interrupt the model mid-response. While the current ChatGPT voice mode converts a user's speech into text and then turns the text reply back into speech, the new model skips that intermediate step, resulting in smoother one-to-one interactions with less delay.
Back in May, OpenAI demonstrated the capabilities of Advanced Voice Mode with a demo featuring Sky, an AI-generated voice that closely resembled that of Scarlett Johansson. Johansson subsequently released a public statement on the issue, saying the voice had been developed without her consent. Sam Altman, CEO of OpenAI, had reportedly made several offers to Johansson to lend her voice to ChatGPT. In the statement, Johansson expressed her "shock, anger, and disbelief" over Altman's move to create a voice that sounded "eerily similar" to her own.
Altman clarified that Sky's voice was not created with the intent of sounding like Johansson's. OpenAI took the voice down after Johansson took the matter to legal counsel.
OpenAI said that since it first demoed Advanced Voice Mode, the company has been working to improve both the quality and the safety of voice interactions. At present, Advanced Voice Mode speaks in four preset voices and includes a built-in system that detects and blocks outputs that deviate from those voices, ensuring the feature does not mimic the voices of celebrities.
OpenAI also says it has "implemented guardrails" that prevent the new voice model from producing copyrighted content, such as music.