To mark Global Accessibility Awareness Day (May 18), Apple previewed a bunch of features for iOS 17 – without actually saying they were for iOS 17.
From Assistive Access, which simplifies the home screen into a row of large, easy-to-use apps for those with cognitive disabilities, to audio improvements, there’s a lot to like here.
All of these features are slated to arrive ‘later this year’, with no mention of a version number in the press release – which suggests we shouldn’t expect them in a big iOS 16.8 update over the summer. Rather, it will be iOS 17 and iPadOS 17 that take advantage of all of these new accessibility features, which Apple clearly wanted to showcase before WWDC in June.
But the feature that stood out to me the most was Personal Voice. This enables someone recently diagnosed with ALS (amyotrophic lateral sclerosis), or another condition that can cause loss of speech, to read aloud a series of phrases, from which the AI on their iPhone or iPad can construct a natural-sounding personalized voice.
Using this, they could hold conversations in person or over an audio or video call in their own voice – and this could be huge for so many people in the future.
A game changer for many
I remember watching Professor Stephen Hawking being interviewed when I was a kid; he used a custom machine that let him quickly bring up and type commands, which were then spoken in a robotic voice.
Long before AI, the iPhone, and more, it inspired and intrigued me to see how technology could help others. Seeing the Personal Voice feature on Wednesday made me realize that this is the next step beyond what Hawking used for years.
In phone calls, in normal conversation, in just asking for a coffee from the nearest bar, it could bring independence to so many who have lost the ability to use their voice.
But there have been concerns recently about how AI could be used to generate deepfakes of individuals who have since passed away. John Gruber of Daring Fireball showcased a concerning example back in March of AI being used to construct a voice in the vein of Steve Jobs.
The result is terrifyingly natural, and some sort of legislation should be introduced to prevent recreations like this from ever being used beyond a test.
But as with any new technology, there’s fear of the unknown, and there are always genuine risks. And, as social media likes to do, those risks can be overblown.
AI being used for good
This is why Personal Voice is a great example of AI being used for good. We’ve seen other examples this year, with apps such as MacWhisper and Petey helping users with transcriptions and queries. These AI apps save hours of wasted effort, freeing you to do something more worthwhile with your time.
Being able to use AI on a phone or tablet to recreate a voice that you might otherwise never be able to use again is yet another example of AI helping those in need – and Apple knows it.
And that’s not forgetting Apple’s long-rumored VR headset. The possibilities of using these features in VR and AR are exciting: you can easily imagine Detection Mode in Magnifier on a headset – you walk around wearing it, and Siri guides you to certain objects in your home.
All signs point to these upcoming features being showcased at WWDC in June, and while there were no mentions of new accessibility features for Apple Watch in this press release, I’m hoping that these announcements are only the start of what’s going to be showcased for accessibility on Apple’s devices.