OpenAI rolls out Advanced Voice Mode for ChatGPT

Sanjana Dhar
July 31, 2024
Updated 2024/07/31 at 1:54 PM

Weeks after dazzling the world with its Her-like voice interface, OpenAI has finally begun rolling out its enhanced voice mode.

ChatGPT’s Advanced Voice Mode

As of today, the company has begun rolling out the feature to a small number of ChatGPT Plus subscribers. When the feature was introduced alongside GPT-4o at its Spring Update event, OpenAI drew criticism because one of the voices bore a striking resemblance to that of Hollywood actress Scarlett Johansson, who voiced the AI system in Spike Jonze's "Her." The enhanced mode was expected to be released in alpha form sometime in June, but OpenAI delayed the rollout by a month.

The new voice mode is not just ChatGPT with a voice. During the event, OpenAI employees demonstrated how the model can hold human-like conversations, take part in group discussions, and adapt to the conversational style around it. The delay in launching the enhanced mode came from OpenAI continuing to refine the model, particularly its ability to detect and refuse certain content.

Old Voice Mode vs. Advanced Voice Mode

Back in May, when the company first introduced the new voice model, it was criticized in some quarters for its uncanny resemblance to Johansson's voice. ChatGPT already offers a voice mode today, but it works quite differently from Advanced Voice Mode. The old voice mode relied on a chain of three separate models: one to convert speech to text, GPT-4 to process the prompt, and a third to convert the text response back into speech. GPT-4o, by contrast, is multimodal and handles these tasks within a single model; a rough sketch of the old pipeline follows.
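For illustration, here is a minimal sketch of what such a three-stage pipeline looks like when built against OpenAI's public Python SDK. The specific model names (whisper-1, gpt-4, tts-1), the voice, and the file paths are assumptions for the example only and do not necessarily reflect how ChatGPT's old voice mode was wired internally.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stage 1: speech -> text, using a dedicated transcription model
with open("user_audio.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Stage 2: text -> text; GPT-4 only ever sees the transcribed words,
# so tone and pacing in the original audio are not available to it
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)

# Stage 3: text -> speech, with a separate TTS model reading the reply aloud
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply.choices[0].message.content,
)
speech.write_to_file("assistant_reply.mp3")
```

Each hop in a pipeline like this adds latency and strips away information such as intonation, which helps explain why a single multimodal model that works with audio directly can feel more natural in conversation.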

 

For more information, keep reading techinnews.
