As much as we try to give our AI assistants human-sounding names, the way they are programmed to speak makes it pretty obvious that you're talking to a machine. However, Amazon is looking to change that by introducing SSML support for Alexa, which will let developers add more "human" touches to Alexa and make it sound more expressive.

According to Amazon, “Speech Synthesis Markup Language, or SSML, is a standardized markup language that allows developers to control pronunciation, intonation, timing, and emotion. SSML support on Alexa allows you to control how Alexa generates speech from your skill’s text responses. You can add pauses, change pronunciation, spell out a word, add short audio snippets, and insert speechcons (special words and phrases) into your skill. These SSML features provide a more natural voice experience.”

Developers looking to take advantage of the feature won't have to do a lot of work, either, as it appears to involve little more than adding tags to a skill's responses. With these tags, developers can get Alexa to whisper, which could be useful if your settings suggest that you want to keep the volume down. Alexa can also bleep out expletives, and developers can opt to substitute different words for what Alexa is programmed to say (see the sketch below). It's a pretty great idea; when the robots eventually do enslave us, at least they'll be able to do so more expressively.
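To give a rough idea of what this looks like in practice, here is a minimal, illustrative SSML sketch based on the kinds of tags Amazon documents for Alexa (a pause, spelling a word out, whispering, bleeping an expletive, substituting a word, and a speechcon-style interjection). The wording and attribute values below are our own placeholders rather than examples taken from Amazon, so treat it as a sketch of the idea, not a copy-paste reference.

```xml
<speak>
    Welcome back.
    <break time="500ms"/>
    My name is spelled <say-as interpret-as="spell-out">Alexa</say-as>.
    <amazon:effect name="whispered">I can whisper this part if you want to keep things quiet.</amazon:effect>
    That movie was <say-as interpret-as="expletive">terrible</say-as>.
    You live in <sub alias="Seattle">SEA</sub>.
    <say-as interpret-as="interjection">bazinga!</say-as>
</speak>
```

A skill simply returns text wrapped in markup like this, and Alexa renders the pauses, whispering, bleeps, substitutions, and speechcons when it speaks the response aloud.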
