At GTC 2013, NVIDIA demonstrated its version of image search, powered by its massively parallel processors. The goal of computer vision is to recognize and understand what things are, not merely to “see” them. This is an intricate problem because computers are fast but fairly dumb out of the box: they have no notion of “concepts,” unlike humans, who can be shown a single object, such as a hat, and then recognize new hats as they encounter them, because they have acquired a notion of what a “hat” is.

While computer vision has not gotten that far, the demo shows that it can have a notion of patterns: it can find things that superficially “look like this.” This may be handy for searching clothing, for instance. In the demo, a photo of a dress was taken, and the computer found other dresses with the same type of colors or patterns. Obviously, the demo was optimized to show off this capability, but it was still quite an improvement over today’s image searches, which are mainly based on keywords and colors.
The technology works because it’s possible to analyze an image and build a “signature” that can be used to match images that are similar. What do you think? What should they do with this?
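To make the “signature” idea a bit more concrete, here is a minimal sketch of one classic approach: use a coarse color histogram as the image’s signature and rank candidates by cosine similarity. This is purely illustrative; NVIDIA’s demo almost certainly used much richer features, and the function names, bin counts, and sample pixel data below are all hypothetical.

```python
import math

def signature(pixels, bins=4):
    """Build a simple signature: a normalized color histogram that
    quantizes each RGB channel into `bins` buckets."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def similarity(sig_a, sig_b):
    """Cosine similarity between two signatures (1.0 = identical mix)."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    na = math.sqrt(sum(a * a for a in sig_a))
    nb = math.sqrt(sum(b * b for b in sig_b))
    return dot / (na * nb) if na and nb else 0.0

# Toy "images" as lists of RGB pixels (made-up data for illustration).
red_dress  = [(200, 30, 40)] * 100 + [(220, 50, 60)] * 50
red_skirt  = [(210, 40, 50)] * 120 + [(190, 20, 35)] * 30
blue_jeans = [(30, 40, 200)] * 150

# Mostly-red items score much higher against each other than against blue.
print(similarity(signature(red_dress), signature(red_skirt)))
print(similarity(signature(red_dress), signature(blue_jeans)))
```

A real search engine would precompute signatures for its whole catalog and compare the query’s signature against them, which is exactly the kind of embarrassingly parallel workload GPUs are good at.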