
You Can Now Use ChatGPT’s Advanced Voice Mode With Real-Time Responses

ChatGPT's advanced Voice Mode is now rolling out to some paid users, OpenAI announced on Tuesday. The company first unveiled the feature at its Spring Update event in May. Powered by the latest GPT-4o artificial intelligence (AI) model, the advanced Voice Mode offers real-time responses, natural-sounding voices, and the ability to sense the user's emotions. OpenAI said all ChatGPT Plus users will get the feature by fall this year. However, there is no word on when the video and screen-sharing features, which were also demoed at the event, will be released.

OpenAI Rolls Out Advanced Voice Mode for ChatGPT

OpenAI announced the rollout of ChatGPT's advanced voice capabilities in a post on X (formerly known as Twitter). The company highlighted that the new Voice Mode lets users interrupt the AI chatbot at any time and offers more natural interaction with voice modulation. A short video was also shared showing how to turn the feature on once it becomes active.

We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK

— OpenAI (@OpenAI) July 30, 2024

As per the video, the select group of ChatGPT Plus users will see an invite notification at the bottom of the screen after opening the app, prompting them to try the advanced Voice Mode. Tapping on it takes users to a new page with the title "You're invited to try the advanced Voice Mode" and a button to activate the feature.

The feature is currently available to a small group of Plus users, but the company did not specify any eligibility criteria. Dubbed an alpha rollout, the feature is powered by OpenAI's latest flagship large language model (LLM), GPT-4o.

Explaining the reason behind the delay, the AI firm said, “Since we first demoed advanced Voice Mode, we’ve been working to reinforce the safety and quality of voice conversations as we prepare to bring this frontier technology to millions of people.”

OpenAI also highlighted that GPT-4o's voice capabilities have been tested with more than 100 external red teamers across 45 languages. Red teamers are cybersecurity professionals tasked with testing a product's or organisation's security by simulating cyberattacks and jailbreak attempts, with the goal of exposing vulnerabilities in the system before it goes live.

At the moment, only four preset voices are available once the feature is rolled out to your account. Sky, the controversial voice that allegedly bore a close resemblance to actor Scarlett Johansson, is yet to be added back to ChatGPT.
