Beyond Verbal, a company that specializes in voice-to-emotion analysis, is making its analytics API available via its Beyond mHealth Research Platform.
The API also has an AI component to help researchers find distinct patterns linking health issues with voice. Beyond Verbal has done such research on its own, but believes that making its technology available to others will have a much bigger impact.
The hope is that specific voice markers or patterns can be linked to specific health issues. While using voice to detect signs of depression may not seem far-fetched, doing so with coronary disease is much less obvious to the uninitiated.
If Beyond Verbal can work with others to make this a reality, it would change diagnosis as we know it – diagnosis would be as simple as speaking. For now, the press release does not go into great detail about how all this works, but it is a call to action to experts in the field.
Based in Israel, Beyond Verbal originally focused on analyzing the voice to detect emotional states. It’s similar to the eXAudios technology we saw at DEMO in 2010. Applying the same kind of technology to healthcare is not illogical, and could lead to new medical tools. Artificial intelligence can sift through data much faster than was possible just half a decade ago, though it is unclear what kind of AI Beyond Verbal is using.
The idea is to record streams of voice, then try correlating them with life events in hopes of finding specific voice changes – “markers” – that reveal medical conditions. Beyond getting the technology to work, Beyond Verbal needs to enlist the help of experts to gather data. Data is the real fuel AI needs to “see” these markers reliably.
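To make the record-then-correlate idea concrete, here is a minimal sketch of how a researcher might hunt for candidate voice markers. Everything in it is an illustrative assumption, not Beyond Verbal’s actual method: the two toy features (signal energy and zero-crossing rate, a crude pitch proxy) and the use of simple correlation stand in for the far richer acoustic features and AI models a real system would use.

```python
import numpy as np

def voice_features(signal):
    """Toy acoustic features from a raw waveform (hypothetical choices):
    energy and zero-crossing rate. Real systems would extract pitch,
    jitter, MFCCs, and many more."""
    energy = float(np.mean(signal ** 2))
    # Zero-crossing rate: fraction of samples where the sign flips.
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2)
    return np.array([energy, zcr])

def marker_correlations(signals, labels):
    """Correlate each feature with a binary health label (0/1).
    Features that correlate strongly across many speakers are
    candidate 'markers' worth deeper study."""
    feats = np.array([voice_features(s) for s in signals])
    labels = np.asarray(labels, dtype=float)
    return np.array([np.corrcoef(feats[:, i], labels)[0, 1]
                     for i in range(feats.shape[1])])

# Synthetic demo: condition 1 is simulated with a higher-pitched tone,
# so the zero-crossing-rate feature should surface as a "marker".
rng = np.random.default_rng(0)
signals, labels = [], []
for i in range(40):
    label = i % 2
    freq = 200 + 200 * label          # 200 Hz vs 400 Hz tone
    t = np.arange(8000) / 8000.0      # one second at 8 kHz
    s = np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(8000)
    signals.append(s)
    labels.append(label)

corr = marker_correlations(signals, labels)
```

The point of the sketch is the shape of the workflow, not the specific math: collect labeled recordings at scale, reduce each to features, and let the statistics (or, in practice, an AI model) reveal which features track the condition.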