Google Translate already does a pretty decent job of translating one language into another, but what about non-written forms of communication, such as sign language? Sign language is a skill that has to be learned, but Google says it has developed an AI capable of interpreting hand signs and converting them into speech.


This means that, in theory, this piece of software would allow those with hearing or speech impairments to communicate more easily with others who might not know sign language. It works by pairing a camera with software that tracks the movement and gestures of the user's hand and interprets them accordingly.
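To give a rough idea of what "interpreting" tracked hand gestures can look like, here is a minimal sketch. It is not Google's actual pipeline: it assumes landmark input in the common 21-point hand layout used by hand-tracking models (wrist plus four joints per finger), and the gesture vocabulary, `FINGERS` index table, and function names are all hypothetical simplifications.

```python
# Illustrative sketch only (not Google's method): given 21 hand landmarks
# from a hand-tracking model, decide which fingers are extended and map
# simple poses to labels. Indices follow the common 21-point layout:
# 0 = wrist, 8/12/16/20 = fingertips, 6/10/14/18 = the joints below them.
from typing import List, Tuple

Landmark = Tuple[float, float]  # normalized (x, y), origin at top-left

# Hypothetical lookup: (fingertip index, lower-joint index) per finger.
FINGERS = {"index": (8, 6), "middle": (12, 10),
           "ring": (16, 14), "pinky": (20, 18)}

def extended_fingers(landmarks: List[Landmark]) -> List[str]:
    """A finger counts as extended when its tip sits above (smaller y
    than) the joint below it -- a crude heuristic for an upright hand."""
    return [name for name, (tip, joint) in FINGERS.items()
            if landmarks[tip][1] < landmarks[joint][1]]

def classify(landmarks: List[Landmark]) -> str:
    """Map finger states to a tiny, made-up gesture vocabulary."""
    up = extended_fingers(landmarks)
    if not up:
        return "fist"
    if up == ["index"]:
        return "pointing"
    if len(up) == 4:
        return "open hand"
    return "unknown"
```

A real system would run this kind of logic per video frame, on landmarks produced by the tracking model, and with far richer classifiers than these threshold rules.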

What makes this especially notable, and highlights how far our smartphones have come, is that Google managed to achieve real-time performance on a smartphone, whereas this kind of hand tracking previously required more powerful desktop computers. That suggests the technology could be used while on the go.

That being said, the technology isn't quite perfect yet: it captures only part of a conversation, since cues such as facial expressions, the speed of signing, and regional variations in signs can be missed.

According to Google, “We plan to extend this technology with more robust and stable tracking, enlarge the amount of gestures we can reliably detect, and support dynamic gestures unfolding in time. We believe that publishing this technology can give an impulse to new creative ideas and applications by the members of the research and developer community at large.”

Filed in General. Source: ai.googleblog
