Fast forward a few years and you might be asked, “What’s the difference between an iPhone and a dog?” The correct answer could well be “None”, as both would be able to identify their owners by voice alone, at least if one particular piece of speculation pans out. Beyond recognizing standard voice commands, future iPhone software might use one’s voice to identify the speaker, allowing the operating system to apply custom-tailored settings as well as open up access to personal content.
This particular concept was presented to the masses this week via a new patent application published by the U.S. Patent and Trademark Office. Titled “User Profiling for Voice Input Processing,” it describes a system that would identify individual users when they speak aloud.
At the moment, voice control has been implemented in some form or another on a bunch of portable devices, where such systems feature word libraries covering the range of phrases users can speak aloud to interact with the device. Over time, such libraries can grow stupendous in size, bogging down the whole voice input process: speech takes longer to decipher, which in turn taxes the device’s processor.
Apple hopes to avoid this necessary evil with a system that knows a user by voice and can execute instructions based on that user’s identity. This could make accessing the iPhone in a hands-free manner, for example, considerably more efficient.
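To see why identifying the speaker helps, here is a minimal sketch of the idea in Python. This is purely illustrative and not Apple’s actual implementation; the profile names, commands, and the `identify_speaker` stub are all invented for demonstration. The point is that once the speaker is known, the recognizer only has to search that user’s small personal command library instead of one giant shared vocabulary.

```python
# Hypothetical sketch of per-user voice profiling (not Apple's actual code).
# Each user gets a small personal command library; matching is done against
# that library alone rather than a single ever-growing global one.

# Invented example profiles and commands.
USER_PROFILES = {
    "alice": {"call mom", "play jazz playlist", "open work email"},
    "bob": {"call office", "play podcast", "open personal email"},
}

def identify_speaker(voice_sample: str) -> str:
    """Stand-in for a real voiceprint classifier: for demonstration we
    simply treat the sample string as the speaker's name."""
    return voice_sample if voice_sample in USER_PROFILES else "unknown"

def match_command(speaker: str, utterance: str):
    """Search only the identified user's (much smaller) library."""
    library = USER_PROFILES.get(speaker, set())
    return utterance if utterance in library else None

speaker = identify_speaker("alice")
print(match_command(speaker, "play jazz playlist"))  # found in Alice's library
print(match_command(speaker, "play podcast"))        # Bob's command, no match
```

Because each lookup is scoped to one user’s profile, the search space stays small even as the number of users (and total commands) grows, which is exactly the processing burden the patent application aims to sidestep.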