At WWDC 2023, Apple introduced iOS 17 and gave a preview of the features that will roll out globally later this year. One of the most unique features on the list is Personal Voice, which can clone your voice and store it on your iPhone, then use it to communicate with others on your behalf. The feature sounds like an artificial intelligence (AI) tool, but that is not a term Apple uses; the company describes it instead as a machine-learning feature. As for its use case, it is aimed at people who have speech disabilities or a condition that may stop them from speaking in the long term. The feature is already available in the public beta version of iOS 17. Let us take a closer look at it.
Apple designed Personal Voice for users who are “at risk of losing their ability to speak — such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability,” as per the Apple blog post that announced the feature.
The feature integrates with Live Speech, another new feature the company is introducing. Live Speech lets users type what they want to say and have it spoken aloud during phone and FaceTime calls, as well as in in-person conversations. Essentially, it is a text-to-speech tool, and with Personal Voice, Apple has added another layer of personalization to it.
According to Apple, setting up Personal Voice requires users to read a randomized set of text prompts to record 15 minutes of audio on their iPhone. Once that is done, on-device machine learning creates a voice clone for the user.
Read more on tech.hindustantimes.com