If you have ever taken a yoga class in real life, or watched one of the instructional videos, you know that it basically involves you, the student, watching what the instructor does and following along. Unfortunately, that approach is out of the question for the visually impaired, but a team of computer scientists from the University of Washington is hoping to bring yoga to them with the help of the Microsoft Kinect. The project, led by doctoral student Kyle Rector, takes advantage of the Kinect's motion detection and skeletal tracking capabilities to measure the user's body angles and, from there, provide real-time auditory feedback on how to perform different yoga poses.
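To give a rough idea of how skeletal tracking can be turned into body angles, here is a minimal sketch: given three tracked joint positions (say, shoulder, elbow, and wrist), the angle at the middle joint can be computed from the two limb vectors. The joint names and coordinates below are illustrative assumptions, not taken from the actual Eyes-Free Yoga code.

```python
import math

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b, formed by 3-D points a-b-c."""
    ab = [a[i] - b[i] for i in range(3)]   # vector from b toward a
    cb = [c[i] - b[i] for i in range(3)]   # vector from b toward c
    dot = sum(ab[i] * cb[i] for i in range(3))
    norm = math.sqrt(sum(v * v for v in ab)) * math.sqrt(sum(v * v for v in cb))
    return math.degrees(math.acos(dot / norm))

# Hypothetical shoulder/elbow/wrist positions from one skeletal-tracking frame
shoulder, elbow, wrist = (0.0, 1.4, 2.0), (0.3, 1.4, 2.0), (0.3, 1.1, 2.0)
print(round(joint_angle(shoulder, elbow, wrist)))  # 90 (arm bent at a right angle)
```

With an angle like this in hand, the system can compare it against the angle a pose calls for and decide what correction to speak aloud.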
Dubbed Eyes-Free Yoga, the system compares the user's position against the desired pose and instructs them to adjust until that pose is reached, even offering positive feedback once the correct pose is achieved. The instructions are simple to understand, and as it stands the system offers six different poses, including Warrior I and II, Tree, and Chair. Thirteen of the 16 participants in the study said they would recommend the system to others and would use it again. Rector admits that the Kinect does have some limitations, but she chose it for its open source software and wide market availability. You can check it out in action in the video above.
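The correct-until-matched loop described above could be sketched roughly as follows: compare each measured joint angle against a target, and speak a correction for any joint outside a tolerance, or praise the user once everything matches. The joint names, target angles, tolerance, and phrasings here are all hypothetical, for illustration only.

```python
def pose_feedback(measured, targets, tolerance=10.0):
    """Return spoken-style cues for joints whose angle is off target.

    measured and targets map a joint name to an angle in degrees.
    """
    cues = []
    for joint, target in targets.items():
        diff = measured[joint] - target
        if abs(diff) > tolerance:
            # A larger-than-target angle means the joint is too straight
            # under this convention, so the cue is to bend it, and vice versa.
            direction = "bend" if diff > 0 else "straighten"
            cues.append(f"{direction} your {joint} a little")
    return cues or ["Great job, hold the pose!"]

# Illustrative Warrior II-style check: front knee should be near 90 degrees
targets = {"front knee": 90.0, "back leg": 180.0}
measured = {"front knee": 120.0, "back leg": 178.0}
print(pose_feedback(measured, targets))  # ['bend your front knee a little']
```

In the real system the cues are delivered as audio, which is what makes the experience usable without any visual reference.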