Qualcomm Zeroth: A Cognitive Computing Platform

Qualcomm has launched Zeroth, a “cognitive computing platform” based on deep-learning algorithms that would make all kinds of devices (Qualcomm-based ones, that is, Snapdragon or otherwise) much more aware of the context in which they are being used, and more. Deep learning makes it possible to teach computers to recognize many types of patterns, which they can then use to make their own decisions and take action without requiring user intervention.

The most obvious example of deep learning is visual recognition: the computer can recognize things it sees through a camera. A few years ago, a very close one-to-one match was needed for a positive recognition, which meant that a computer might recognize a Honda Civic, but not a Toyota Corolla. These days, computers are taught what a wheel is, what headlights are, and so on, and that the ensemble is a vehicle. From there, they can start recognizing any type of car or truck.
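To make the compositional idea concrete, here is a toy, hand-coded sketch of part-based recognition. It is purely illustrative and not Qualcomm's Zeroth implementation: real deep-learning models learn their part detectors from data, and the part names, scores, and threshold below are all assumptions.

```python
# Hypothetical confidence scores that a lower layer's learned part
# detectors might produce when shown an image of an unfamiliar car.
part_scores = {"wheel": 0.92, "headlight": 0.85, "grille": 0.40, "wing": 0.05}

# A higher layer composes parts into an object category. Because a
# vehicle is recognized from its parts, any car or truck can match,
# not just the exact models seen during training.
VEHICLE_PARTS = {"wheel", "headlight", "grille"}
THRESHOLD = 0.5  # assumed cutoff for calling a part "detected"

detected = {part for part, score in part_scores.items() if score >= THRESHOLD}
if len(detected & VEHICLE_PARTS) >= 2:
    print("vehicle detected from parts:", detected & VEHICLE_PARTS)
```

The one-to-one matching of earlier systems would fail on any car it had never seen; composing from parts is what lets recognition generalize.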

During its MWC press conference, Qualcomm demonstrated how the camera could tell whether the photo subject was a person or perhaps a plate of food, and potentially adapt itself with the best possible settings. This is an example that everyone can probably relate to.
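The plumbing behind such a demo could look something like the sketch below. The preset values and the label names are hypothetical, not Qualcomm's actual API; the point is simply that a scene classifier's label drives the capture settings.

```python
SCENE_PRESETS = {
    # classifier label -> capture settings (all values illustrative)
    "person":     {"iso": 200, "white_balance": "auto",     "focus": "face"},
    "food_plate": {"iso": 100, "white_balance": "warm",     "focus": "macro"},
    "landscape":  {"iso": 100, "white_balance": "daylight", "focus": "infinity"},
}
DEFAULT_PRESET = {"iso": 400, "white_balance": "auto", "focus": "continuous"}

def settings_for(label: str) -> dict:
    """Map a scene-classifier label to capture settings, with a fallback."""
    return SCENE_PRESETS.get(label, DEFAULT_PRESET)

# e.g. the on-device classifier reports "food_plate" for the current frame
print(settings_for("food_plate"))
```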

This is, however, not the only example. Qualcomm also mentioned an “intuitive security” approach in which the computer could recognize hacker-like behavior by looking at attempts to crack the security protocols. This stands in stark contrast to the simpler approach of finding a breach and then plugging it, and would be the equivalent of spotting a suspicious person who may be a burglar.
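A minimal sketch of the distinction, with an assumed event format and thresholds (nothing here reflects Zeroth's actual implementation): a signature-based system asks "does this match a known exploit?", while a behavioral one asks "is this pattern of activity suspicious at all?"

```python
from collections import Counter

def looks_like_probing(events, max_failures=5, max_distinct_ports=3):
    """Flag any source that fails repeatedly across many ports.

    No known-exploit signature is involved; the pattern of attempts
    itself is what looks like "hacker behavior".
    """
    failures = Counter()
    ports = {}
    for source, port, ok in events:
        if not ok:
            failures[source] += 1
            ports.setdefault(source, set()).add(port)
    return {
        src for src, n in failures.items()
        if n >= max_failures and len(ports[src]) >= max_distinct_ports
    }

# (source_ip, port, success) tuples from a hypothetical connection log
log = [("10.0.0.9", p, False) for p in (22, 23, 80, 443, 8080)] + [
    ("10.0.0.2", 443, True),
]
print(looks_like_probing(log))  # -> {'10.0.0.9'}
```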

"THE POSSIBILITIES ARE SIMPLY ENDLESS"More possibilities have been thought out by Qualcomm, but the possibilities are simply endless. Here are a few more that were mentioned at MWC 2015: Intelligent Connectivity (automatically adapts to the network environment), Always-on awareness (use the motion sensors to acquire situational awareness), Immersive multimedia (analyze sound to optimize the user experience), Speech and audio recognition (determine environment situation through sound analysis), Natural device interactions (gesture and expressions).

I gathered a little more information about what hardware is required to achieve some of this, and there are multiple things to take into account: some of the context awareness may require a DSP (for sensor data fusion), but most of the heavy deep-learning lifting is done on the CPU. The good news is that this gives Qualcomm quite a few options as to where Zeroth will run, although it’s fair to say that the high-end chips would be a natural choice. The point being: Snapdragon 820 should not be a “requirement” from a technical perspective.
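As an aside, "sensor data fusion" of the kind a DSP handles for always-on context awareness often boils down to blending complementary sensors. Below is a classic complementary-filter sketch, offered only as an example of the technique; the sample data and the 0.98 blend factor are assumptions, not anything Qualcomm disclosed.

```python
def fuse_tilt(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Estimate tilt (deg) from gyro rate (deg/s) and accel angle (deg).

    The gyro is fast but drifts; the accelerometer is noisy but stable.
    Blending the two gives a usable orientation estimate cheaply enough
    to run continuously on a low-power core.
    """
    angle = accel_angles[0]
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        # trust the integrated gyro short-term, the accelerometer long-term
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
    return angle

# hypothetical readings while the device tilts over 10 samples of 10 ms
gyro = [100.0] * 10              # angular rate in deg/s
accel = list(range(10))          # noisy accel-derived angle estimates
print(round(fuse_tilt(gyro, accel), 2))
```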

Since this does not rely on GPU computing, there are no restrictions when it comes to OpenCL or other compute-API support. Why not use the GPU, you may ask? Qualcomm told me that they looked at how best to implement this on their chips, and having the CPU cores do it was the best solution. It’s not uncommon for compute problems to see real GPU gains only when the dataset is huge, so I can see where they are coming from.
