Autonomous or self-driving cars can sometimes make mistakes. It all comes down to whether the AI powering the self-driving system has been trained enough to handle a particular situation on the road. Since these gaps in knowledge can lead to dangerous mistakes by cars that drive themselves, Microsoft and MIT have come up with a way to detect and fill in these artificial intelligence blind spots in self-driving cars.

The model they have developed compares a human's actions in a particular situation with what the AI would have done at that moment. The system can then adjust the AI's behavior based on how closely the two match.

For example, you know that you have to pull over when an ambulance comes up behind you with its lights flashing, but an autonomous vehicle's AI might not. With this model, the AI learns that behavior by watching a human driver pull over to the side of the road when an ambulance approaches, and it can then replicate that behavior in a similar scenario in the future.
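In code terms, that comparison can be pictured as a simple loop over logged driving situations. The Python sketch below is only an illustration of the idea, not the researchers' implementation; the DrivingState fields and the planned_action and human_action functions are invented for the example.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DrivingState:
    """A simplified snapshot of what the car perceives (hypothetical fields)."""
    ambulance_behind: bool
    lane: str


def planned_action(state: DrivingState) -> str:
    """Stand-in for the AI's policy; this toy version ignores the ambulance."""
    return "keep_driving"


def human_action(state: DrivingState) -> str:
    """Stand-in for what a human demonstrator does in the same state."""
    return "pull_over" if state.ambulance_behind else "keep_driving"


def collect_feedback(states):
    """Label each state 'acceptable' when the AI's choice matches the human's,
    and 'unacceptable' when it does not."""
    feedback = []
    for state in states:
        ai_choice = planned_action(state)
        label = "acceptable" if ai_choice == human_action(state) else "unacceptable"
        feedback.append((state, ai_choice, label))
    return feedback


demo = [
    DrivingState(ambulance_behind=False, lane="right"),
    DrivingState(ambulance_behind=True, lane="right"),
]
for state, action, label in collect_feedback(demo):
    print(state, action, label)
```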

The same teaching model is also useful for corrections in real time. If the AI makes an incorrect call in a situation, the human driver can take control and make the correct one, which teaches the AI what to do the next time a similar scenario comes up.
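Real-time takeovers can feed the same kind of feedback: whenever the driver overrides the AI's choice, the situation and the rejected action are logged as an unacceptable example. A minimal sketch, again with invented state and action names:

```python
# A rough illustration of turning a real-time takeover into a training signal.
# The state and action representations are made up for this example.
feedback_log = []


def record_decision(state, ai_action, human_action):
    """Log whether the human accepted the AI's call or overrode it."""
    label = "acceptable" if human_action == ai_action else "unacceptable"
    feedback_log.append({"state": state, "ai_action": ai_action, "label": label})
    return label


# Example: the AI wants to keep driving, but the human takes over and pulls aside.
record_decision({"ambulance_behind": True}, "keep_driving", "pull_over")
print(feedback_log)
```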

The researchers have also developed a way to ensure that the AI doesn't mark every instance of a particular response as safe just because it usually works out. A machine learning algorithm weighs the acceptable and unacceptable feedback and uses probability calculations to look at the patterns and decide whether a given call is genuinely safe or could still cause problems.
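One rough way to picture that step: group similar situations together and look at the fraction of negative feedback in each group, rather than trusting any single outcome. The sketch below only illustrates the idea; the grouping key, the data layout, and the simple frequency estimate are assumptions, and the researchers' actual method is more involved.

```python
from collections import defaultdict


def blind_spot_probabilities(feedback_log):
    """Estimate, per group of similar situations, the probability that the
    AI's response is problematic, using the fraction of 'unacceptable' labels."""
    counts = defaultdict(lambda: {"acceptable": 0, "unacceptable": 0})
    for entry in feedback_log:
        # Group by a coarse description of the situation (an assumption for this sketch).
        key = tuple(sorted(entry["state"].items()))
        counts[key][entry["label"]] += 1

    probabilities = {}
    for key, c in counts.items():
        total = c["acceptable"] + c["unacceptable"]
        probabilities[key] = c["unacceptable"] / total
    return probabilities


log = [
    {"state": {"ambulance_behind": True}, "label": "acceptable"},
    {"state": {"ambulance_behind": True}, "label": "unacceptable"},
    {"state": {"ambulance_behind": False}, "label": "acceptable"},
]
for situation, p in blind_spot_probabilities(log).items():
    # Even one negative report keeps a situation from being marked fully safe.
    flag = "possible blind spot" if p > 0.0 else "looks safe"
    print(situation, round(p, 2), flag)
```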

This model hasn’t been tested in the real world yet, so it will be interesting to see the results once Microsoft and MIT try it on real cars.

Filed in Transportation. Source: news.mit.edu
