The CES 2015 NVIDIA press conference was very different from the company’s past consumer-product media events because there was very little gaming involved. The introduction of NVIDIA’s new Tegra X1 chip was merely an opening act for the company’s long-term emphasis for its mobile chips: making automobiles smarter and safer.

If you have observed the chip industry a bit, you have probably noticed that the mobile market is largely dominated by a handful of players, mainly Qualcomm and Apple. The former is the incumbent, with excellent products and a long track record and relationship with OEMs that is good for both product design and risk management. The latter can tailor chips to its very specific needs and commands a huge share of the market. In addition, OEMs like Samsung produce their own SoCs, and more are looking at doing exactly that (easier said than done).

The reality is that it is very difficult to enter that space, and there is little appetite for experimenting with new players. Also, sheer graphics performance isn’t going to woo the average consumer, so it is not a make-or-break feature for large-volume consumer devices.

The auto industry is another beast altogether. First, car makers are used to working with specialty suppliers, and although product cycles used to be slow, companies like Audi have managed to accelerate the cycle of electronics integration for things that are not related to controlling the car. Secondly, the power envelope isn’t a real issue: since power is readily available, graphics and computing performance can be pushed without any real concern.


But what do you do with supercomputer levels of processing performance? Surely, gaming isn’t the killer feature for cars. However, multi-megapixel virtual dashboards and computer vision can be. Computer vision is particularly interesting because it can consume huge amounts of computation in order to see and recognize things all around the car. There is the potential to have a computer analyze inputs from multiple high-definition cameras, and that’s millions of pixels to process 30 times per second or more. This is real, bad-ass computing.
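To get a rough sense of the numbers, here is a back-of-the-envelope calculation. The camera count and resolution are assumptions for illustration (NVIDIA did not specify a configuration), but they show how quickly "millions of pixels, 30 times per second" adds up:

```python
# Hypothetical multi-camera setup: four 1080p cameras at 30 frames per second.
# These figures are illustrative assumptions, not NVIDIA's specification.
cameras = 4
width, height = 1920, 1080   # 1080p resolution
fps = 30

pixels_per_frame = width * height                      # per camera
pixels_per_second = cameras * pixels_per_frame * fps   # across all cameras

print(f"{pixels_per_frame:,} pixels per frame per camera")
print(f"{pixels_per_second:,} pixels per second total")
# → 2,073,600 pixels per frame per camera
# → 248,832,000 pixels per second total
```

Roughly a quarter of a billion pixels every second, before any recognition work is even done on them.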

That’s precisely how NVIDIA wants to pull away from the competition. This type of application is not as “computationally unbounded” as graphics rendering (in my opinion), but it can keep the industry busy for years, if not decades, to come. This is where having 30% or 100% more performance provides a linear benefit that the user can actually feel. This is where NVIDIA wants to be.


It all started a few years ago when deep learning and computer vision started to make huge strides. NVIDIA has presented on this topic several times at its GPU Technology Conferences, but for many people, yesterday was the first time that it hit home: computers are getting really good at recognizing stuff – in some categories, better than the average human. This is no small feat; researchers were stuck at far simpler computer vision tasks for decades.

Now, a computer doesn’t recognize a “car” because it has seen the same one before. It thinks there’s a car because it sees “wheels”, “doors”, “lights” and other sub-sections of what makes a car – therefore “it’s probably a car”. This makes an enormous difference because you don’t need to teach it about every type of car under the sun. Show it enough of them, and it will eventually develop a sense (more like a probability) of what a car is.
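The idea of combining recognized parts into a “probably a car” judgment can be sketched with a toy score. This is only an illustration of the principle, not NVIDIA’s algorithm: the part names and weights are made up, and a real system would learn them from data rather than hard-code them.

```python
# Toy part-based recognition score (illustrative only).
# Each detected "part" contributes evidence toward the object being a car.
PART_WEIGHTS = {"wheels": 0.35, "doors": 0.25, "lights": 0.2, "windshield": 0.2}

def car_score(detected_parts):
    """Sum the weights of the parts the vision system thinks it saw."""
    return sum(PART_WEIGHTS.get(part, 0.0) for part in detected_parts)

# Seeing wheels, doors and lights pushes the score toward "probably a car".
print(round(car_score({"wheels", "doors", "lights"}), 2))  # → 0.8
print(round(car_score({"wheels"}), 2))                     # → 0.35
```

The payoff is exactly what the article describes: no single exact match is required – enough partial evidence from familiar sub-parts is what tips the balance.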

Beyond the obvious examples, it’s hard to imagine what developers will do when cars become more aware of their surroundings, but the potential safety improvements are obvious. Electronic sensors can work in the fog and in the dark, see farther, see in 360 degrees, and more. There is no question that this could trigger early warnings, or even take action if the system determines that a human would be too slow to react.

Before we get too excited, this is still a field under intense research, so NVIDIA is still in “investment mode” here; it’s not like it can start selling boatloads of chips next month. But the general direction seems correct. Progress in the field suggests that within years, commercial cars will start to embed many of these functions. As they become more reliable, automobiles should play a more active role in reacting to their surroundings.

But this is not only a hardware play for NVIDIA. The company is actively developing the software and algorithms that will run on top of its Tegra processors. Those libraries will provide the key computer vision building blocks, and an end-to-end solution if car makers need it. It’s fair to say that car makers aren’t computing or vision experts, and although they are working hard on it, it is possible that NVIDIA can provide an easier, faster and cheaper solution.

Because there are so many pieces needed to make this work, it is a business with a higher barrier to entry – even for a larger competitor – and after a while, it would be quite difficult to catch up in a financially efficient way. At the same time, NVIDIA continues to work on classic mobile products that combine SoC and modem technology, so an eventual break into the smartphone/tablet market remains within reach. NVIDIA is smart not to count on it, or wait for it. With powerful partners like Audi, NVIDIA can take the initiative and make a big difference based on sheer performance – a position it is much more familiar with. Smart move.
