OpenAI Introduces 'Advanced Voice' Mode To ChatGPT With More Voices, And More

ChatGPT Advanced Voice

In the era of generative AI, tech companies are locked in an arms race to build the best generative AI tools.

Following OpenAI's unveiling of ChatGPT, tech companies large and small have entered a fierce battle for supremacy. OpenAI, the pioneer in this competition, now faces a host of powerful competitors.

To remain relevant in this fast-paced race, OpenAI announced that it is rolling out what it calls the 'Advanced Voice Mode.'

The feature is an audio mode that makes ChatGPT feel more natural to speak with.

The release marks the debut of a feature that has been long delayed.

That is because the feature seemed almost too good to be true, and it stirred up controversy long before its official debut.

Advanced Voice Mode, which runs on the powerful GPT-4o model, allows users to forgo written text prompts and speak directly with the chatbot as they would another person.

The early version of ChatGPT’s Advanced Voice Mode can even understand and respond with emotions and non-verbal cues, moving interactions much closer to real-time, natural conversations with AI.

While people love this kind of AI, the feature, first announced during OpenAI’s Spring Update event, was met with backlash, especially because its initial iteration sounded similar to the virtual companion in the science fiction film Her, voiced by actress Scarlett Johansson.

Soon after learning that ChatGPT had a voice called Sky that sounded like her, the actress threatened OpenAI with legal action over the similar-sounding voice.

This prompted the company to remove the voice from its library.

Because of this, the official release of Advanced Voice Mode lacks Sky.

Instead, OpenAI is giving ChatGPT five new voices that users can try out: Arbor, Maple, Sol, Spruce, and Vale.

This brings ChatGPT’s total number of voices to nine.

Read: OpenAI Delays 'Voice Mode,' But Leaked An Advanced GPT-4o-Powered Voice Demo

What’s more, OpenAI is also embedding both Custom Instructions and Memory into Advanced Voice Mode.

This means users can personalize how the audio feature responds to them, and ChatGPT can remember conversations to reference later on.

Incorporating these two features brings Advanced Voice Mode more in line with the rest of the text-based chatbot experience.

Initially, Advanced Voice Mode is rolling out to all paying users on ChatGPT’s Plus and Team tiers, with Enterprise and Edu customers to follow soon.

To make Advanced Voice Mode stand out, it is represented by a blue animated sphere instead of the usual animated dots of the standard Voice Chat feature.

According to OpenAI, the feature is not yet available in several regions, including the EU, the U.K., Switzerland, Iceland, Norway, and Liechtenstein.

Published: 
25/09/2024