While at CEATEC in Japan, we took the opportunity to get a ride on a small demo track inside the Makuhari convention center near Tokyo. The goal was to see how the automated Nissan car would react to different road situations. Recently, the Japanese government has considered approving tests on Japanese roads, so it is pretty exciting to see how these vehicles have evolved since last year. Nissan’s goal is to have “multiple and affordable” autonomous cars by 2020. Given that Japanese automakers have always been great at bringing new technologies such as EVs to market, we should pay attention to what Nissan has to show. To put things in perspective, by “autonomous car”, Nissan really means that the driver will provide “fewer inputs” to the car, which is very different from “no inputs”.
Back to our driving session: we had a pre-programmed course on the track, and this year a second car was added to the test, to see how the autonomous vehicle would respond when another vehicle should be given priority at a stop sign, or when it blocks the way. The driver was there mostly to explain to us what was going on, and Nissan added several visual cues in the demo car, such as red/blue lights on the wheels, to alert us when the car detects a difficult situation. All in all, the demo went without a hitch.
In a subsequent Q&A session, we were told that the car’s data sampling rate is about 60FPS, and that its maximum autonomous speed can reach 70km/h (44mph). To drive beyond that speed, the data sampling rate needs to be increased, along with the compute capacity to analyze it. At the moment, the computer system is basically a Windows 8.1 PC (I’ve seen a Lenovo laptop in the trunk on a different occasion), and I understood that it runs a Windows application, most likely for ease of development.
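To see why a higher speed demands a higher sampling rate, a quick back-of-envelope calculation helps. The 60 samples/s and 70 km/h figures come from Nissan; the per-sample distance computed below is just our own illustration of the relationship:

```python
# Rough illustration (only the quoted 60 samples/s and 70 km/h come
# from Nissan): how far the car travels between two consecutive
# sensor samples at a given speed.

def meters_per_sample(speed_kmh: float, sample_rate_hz: float) -> float:
    """Distance covered between two sensor readings, in meters."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return speed_ms / sample_rate_hz

# At the quoted 70 km/h and 60 samples/s:
print(round(meters_per_sample(70, 60), 3))   # ~0.324 m between samples

# Doubling the speed at the same sampling rate doubles the "blind"
# distance between readings, hence the need for faster sampling
# (and more compute) at highway speeds.
print(round(meters_per_sample(140, 60), 3))  # ~0.648 m between samples
```

In other words, at its current maximum speed the car effectively re-evaluates the world roughly every 32 centimeters of travel.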
Nissan uses an array of sensors that includes 5 or 6 lasers (it was 5 here, but I’ve heard of another Leaf with 6) to scan the surroundings and create a 3D cloud of dots that lets the car “see” in 3D. There are also cameras to provide visual information to the computer system and radars to detect far-away objects. “White paint” (on the road) is a very important technical aspect, says Nissan, alluding to the fact that visual (color) information is required to see the outlines of the road. When I asked, I was told that night-time driving remains challenging, even if I suppose that there are solutions (like self-illuminated infra-red) that could help.
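Nissan has not published how its software builds that cloud of dots, but the general idea used by laser scanners is well known: each return is a distance at a known angle, and converting those polar readings to Cartesian coordinates yields 3D points. The sketch below only illustrates that generic conversion, not Nissan’s actual pipeline:

```python
import math

# Hedged sketch of how laser range scans are typically turned into a
# 3D point cloud: this is the generic polar-to-Cartesian conversion,
# not Nissan's actual (unpublished) implementation.

def scan_to_points(ranges, azimuths_deg, elevation_deg):
    """Convert one laser's (range, azimuth) readings at a fixed
    elevation angle into (x, y, z) points in the car's frame."""
    elev = math.radians(elevation_deg)
    points = []
    for r, az_deg in zip(ranges, azimuths_deg):
        az = math.radians(az_deg)
        x = r * math.cos(elev) * math.cos(az)  # forward
        y = r * math.cos(elev) * math.sin(az)  # left
        z = r * math.sin(elev)                 # up
        points.append((x, y, z))
    return points

# Example: three returns at 0 degrees elevation, ahead and to the sides
pts = scan_to_points([10.0, 12.0, 8.0], [-30.0, 0.0, 30.0], 0.0)
print(pts[1])  # (12.0, 0.0, 0.0): an object 12 m directly ahead
```

With several lasers mounted at different elevation angles, repeating this for every scan line is what produces the dense 3D “image” of the environment.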
The Nissan project shares similar goals with the Google self-driving car in the sense that humans will interact less and less with the car. Google’s goal seems to be a fully autonomous car, while Nissan would be content with a car that does most of the heavy lifting, as long as it provides enough added value to be commercially successful. Google’s commercial plans are not yet clear and remain the subject of heavy speculation.
"GOOGLE RELIES ON PRE-COMPUTED 15CM-RESOLUTION 3D MAPS OF THE WORLD, NISSAN DOES NOT"
The final and most important technological difference between the Nissan and Google cars is that Google’s relies on a pre-computed 3D map of the world, along with sensors to react to immediate situations, while Nissan uses only sensors to react to all situations. This has implications for the amount of data that needs to be embedded in or streamed to the car, and it also raises the question of how often the mapping data is updated. If Nissan succeeds in using only local sensors, this could make a big difference. That said, Google has now been driving on highways for a couple of years.
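A rough estimate shows why the pre-computed-map approach implies a lot of data. Only the 15 cm resolution figure comes from the article; the road width and bytes-per-cell below are our own assumptions, purely to get an order of magnitude:

```python
# Back-of-envelope only: apart from the 15 cm resolution quoted above,
# none of these figures come from Google. This just illustrates the
# data volume a pre-computed high-resolution map might imply.

RESOLUTION_M = 0.15   # 15 cm cells, as quoted for Google's maps
ROAD_WIDTH_M = 10.0   # assumed average mapped road width
BYTES_PER_CELL = 8    # assumed payload per cell (e.g. height + intensity)

def map_bytes_per_km(resolution_m=RESOLUTION_M,
                     road_width_m=ROAD_WIDTH_M,
                     bytes_per_cell=BYTES_PER_CELL):
    """Raw map data for one kilometer of road, in bytes."""
    cells_along = 1000.0 / resolution_m
    cells_across = road_width_m / resolution_m
    return cells_along * cells_across * bytes_per_cell

mb_per_km = map_bytes_per_km() / 1e6
print(f"~{mb_per_km:.1f} MB of raw map data per km of road")
```

Even with these conservative assumptions, that is a few megabytes per kilometer before compression, multiplied across every road the car may drive, which is exactly the storage-or-streaming trade-off a sensor-only approach like Nissan’s avoids.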
Interestingly enough, once cars are fully autonomous, we probably won’t “need” to “own” one, and cars could become part of a public or private transportation system that we pay for “on-demand”.