Just ahead of tomorrow’s official Intel Developer Forum (IDF) debut [follow the keynote live at live.ubergizmo.com at 9am PT], the world’s largest chipmaker was showing a few of its current R&D projects. Because this is research, not all of these (perhaps none) will ship as products, but some were promising, while others were just cool to watch. Here’s what caught our eyes and ears tonight; if you are attending IDF this week, you might have a chance to see them in action:

In-vehicle context awareness

The car should probably say “look at the road” here

The idea here is that an on-board computer is aware of what’s going on both inside the vehicle and around it. Inside the car, it uses facial recognition to count how many people are present and to tell whether you’re watching the road (or the friend next to you). Thanks to external sensors such as radar, the computer can also “sense” surrounding vehicles and other nearby objects. Put the two together and the computer can know, for example, that you’re looking to the right, and warn you if another vehicle is approaching from the other side.
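To make that logic concrete, here is a minimal sketch of how such a warning rule might combine gaze with external sensor data. The gaze labels, vehicle fields, and values are all hypothetical, not details of Intel’s actual system:

```python
# Hypothetical sketch: fuse driver gaze direction with external
# sensor tracks to decide when to warn. All names are illustrative.

GAZE_LEFT, GAZE_RIGHT, GAZE_ROAD = "left", "right", "road"

def should_warn(gaze_direction, approaching_vehicles):
    """Warn when a vehicle approaches from a side the driver
    is not currently looking at."""
    for vehicle in approaching_vehicles:
        if vehicle["closing_speed"] <= 0:
            continue  # moving away, not a threat
        # A driver looking right won't see a car coming from the left.
        if vehicle["side"] == "left" and gaze_direction != GAZE_LEFT:
            return True
        if vehicle["side"] == "right" and gaze_direction != GAZE_RIGHT:
            return True
    return False

# Example: driver glances right while a car closes in from the left.
vehicles = [{"side": "left", "closing_speed": 4.2}]
print(should_warn(GAZE_RIGHT, vehicles))  # True -> alert the driver
```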

Smart computing islands on everyday surfaces

There’s a projector up there and a camera on the side

By analyzing both color and depth images, a computer can recognize certain objects on a surface and also track a finger, which acts as a pointer for the user interface projected onto that surface. Once an object has been recognized, it becomes possible to build a number of apps; the obvious ones are educational, but this would open the door to many more uses.
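As a rough illustration of the finger-tracking half, here is a toy depth-image approach: treat the closest point hovering just above the table plane as the pointer. The table depth, hover band, and frame size are made-up numbers, not details Intel disclosed:

```python
import numpy as np

# Illustrative sketch of depth-based finger tracking over a surface.
TABLE_DEPTH_MM = 1000          # assumed camera-to-surface distance
HOVER_BAND_MM = (15, 60)       # a fingertip sits slightly above the table

def find_pointer(depth_frame):
    """Return (row, col) of the most likely fingertip, or None."""
    height_above = TABLE_DEPTH_MM - depth_frame  # positive = above table
    mask = (height_above > HOVER_BAND_MM[0]) & (height_above < HOVER_BAND_MM[1])
    if not mask.any():
        return None
    # Pick the highest point within the hover band as the fingertip.
    candidates = np.where(mask, height_above, -np.inf)
    return np.unravel_index(np.argmax(candidates), depth_frame.shape)

# Fake 480x640 depth frame: a flat table with a "finger" poking in.
frame = np.full((480, 640), TABLE_DEPTH_MM, dtype=float)
frame[200:220, 300:310] = TABLE_DEPTH_MM - 40   # 40 mm above the surface
print(find_pointer(frame))  # -> a pixel inside the finger region
```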

Fast and low-power facial recognition

Recognition already exists, but how can you scale it?

This research uses the same basic building blocks as the context-awareness project described above. The overall goal is to recognize individuals fairly quickly and get an idea of what they are doing. The end-game is not biometrics, that is, identifying individuals with a very low error rate. Instead, it is to give applications the ability to know what nearby people are doing, so that they can react accordingly. This could be used in future retail spaces, or in home applications where a device (the TV?) would behave differently when a kid is using it. It could also fetch personal settings for a specific user. The program currently runs on an Atom processor, but the computation can also be offloaded to the cloud if there are a lot of subjects to track.
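That local-versus-cloud split might look something like the following sketch; the capacity threshold and both recognition backends are placeholders for illustration, not anything Intel described:

```python
# Hedged sketch of routing recognition work: stay on the device
# (an Atom in Intel's demo) until the number of tracked subjects
# exceeds what it can handle, then offload to a server.

LOCAL_CAPACITY = 4  # made-up limit for on-device tracking

def recognize_local(faces):
    return [f"local:{face_id}" for face_id in faces]

def recognize_cloud(faces):
    # Stand-in for shipping face crops to a server farm.
    return [f"cloud:{face_id}" for face_id in faces]

def recognize(faces):
    """Route recognition based on how many subjects are in view."""
    if len(faces) <= LOCAL_CAPACITY:
        return recognize_local(faces)
    return recognize_cloud(faces)

print(recognize(["a", "b"]))        # few subjects: handled on-device
print(recognize(list("abcdefg")))   # many subjects: offloaded
```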

Cloud-based ray tracing for games


If you follow our Intel coverage, you probably know that Intel loves ray tracing (probably because CPUs are very good at it…). This time, the company is showing a ray-tracing demo that’s rendered off-site by a farm of servers using Intel’s highest-end processors. This is similar to what companies like OnLive are doing with games: because all the computation is performed in the cloud, the result can be streamed to a light device like a tablet or a phone. Unlike a movie, though, the user can send commands back to the server to move a player or the camera. The one fundamental issue with this technology is network latency, which could limit interactivity; this is usually addressed by staying close to the datacenter.
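Here is a toy model of that client loop, with the network round trip stubbed out by a sleep. The round-trip time and every function name are assumptions for illustration; the point is simply where latency enters between input and displayed frame:

```python
import time
from collections import deque

RTT_SECONDS = 0.040  # assumed 40 ms round trip to a nearby datacenter

def send_command_and_get_frame(command):
    time.sleep(RTT_SECONDS)  # stand-in for the network round trip
    return f"frame rendered after '{command}'"

def client_loop(commands):
    """Send inputs upstream, receive finished frames downstream."""
    latencies = deque()
    for command in commands:
        start = time.monotonic()
        frame = send_command_and_get_frame(command)
        latencies.append(time.monotonic() - start)
        # display(frame) would go here on a tablet or phone
    return sum(latencies) / len(latencies)

avg = client_loop(["move_left", "pan_camera", "move_forward"])
print(f"average input-to-frame latency: {avg * 1000:.0f} ms")
```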
