Sunday, November 12, 2017

Amazon's Alexa will soon speak more like a human by pausing for breath and whispering

Amazon has announced a number of new Speech Synthesis Markup Language (SSML) features as part of its tools for developers of Alexa-enabled apps, or "Skills" as the company calls them. Developers can now program the pronunciation, intonation, timing, and emotion conveyed by Alexa's synthesized speech.

Amazon Echo Dot: 2nd Gen

To achieve these changes in Alexa's voice, five new tags have been added to the SSML that developers write when coding Alexa Skills (a short example follows the list):

  • Whispering – an effect that makes anything Alexa says sound like a whisper.
  • Expletive beeps – bleeps out words to keep content suitable for all ages.
  • Sub – lets you substitute a spoken alias for written text, so Alexa can, for example, read an abbreviation out in full.
  • Emphasis – lets you tag a phrase to be spoken with stronger or weaker emphasis.
  • Prosody – lets you adjust the rate, pitch, and volume of a spoken phrase.
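
Here is a minimal sketch of how these tags might look inside a Skill's spoken response. The tag names (amazon:effect, say-as, sub, emphasis, prosody) come from Amazon's published SSML reference; the sentences themselves are made-up examples:

    <speak>
        <!-- whispered effect: anything inside sounds like a whisper -->
        <amazon:effect name="whispered">This part is whispered.</amazon:effect>
        <!-- expletive: the enclosed words are bleeped out -->
        I just <say-as interpret-as="expletive">stubbed my toe</say-as>!
        <!-- sub: Alexa speaks the alias instead of the written text -->
        The <sub alias="World Wide Web Consortium">W3C</sub> maintains the SSML standard.
        <!-- emphasis: strong, moderate, or reduced stress -->
        <emphasis level="strong">This phrase gets extra stress.</emphasis>
        <!-- prosody: tune rate, pitch, and volume -->
        <prosody rate="slow" pitch="low" volume="loud">This one is slow, deep, and loud.</prosody>
    </speak>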

Amazon is heavily invested in its Echo product, not to mention the Alexa brand itself. Because of this, Amazon won't give developers complete freedom to modify the way Alexa speaks in their Skills. After all, many legitimate businesses rely on the platform, and Alexa represents all of them. Therefore, devs can only "nudge" Alexa's voice within preset thresholds, ensuring no one makes Alexa speak slowly where it's unnecessary.
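
To give a feel for those guardrails: the prosody attributes accept named presets (x-slow through x-fast for rate, x-low through x-high for pitch, silent through x-loud for volume) as well as bounded numeric values, so even the most extreme setting a developer can request stays within Amazon's limits. A hedged example, assuming the named presets from the SSML reference:

    <speak>
        <!-- x-slow and x-low are the extreme named presets -->
        <prosody rate="x-slow" pitch="x-low">This is as slow and deep as the presets allow.</prosody>
    </speak>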

These new SSML tags are available in US and UK English, as well as German, which covers all the languages the Echo currently supports. However, we'll need to rely on Skill developers to put the new features to use, so it may be a while before you hear Alexa sound less like a robot.

