It is no secret that there is some fear that AI could one day lead to a robot uprising. It might sound like the stuff of movies, but computers are getting smarter, and in some instances they are already beating us at our own game. While no one can predict the future, prevention is better than cure, which is what Google and OpenAI are trying to do.

Google’s DeepMind and OpenAI recently published a research paper (via Engadget) outlining a new machine learning method that takes its cues from humans, as opposed to letting the system figure things out entirely on its own, which can sometimes lead to undesirable consequences.

For example, the paper outlines how one of the main problems with AI is that it can learn that cheating is sometimes the most efficient way to maximize its reward. This is what happened when OpenAI had an AI play a game: instead of completing the course, the AI racked up points by driving around in circles.

So by having humans provide reward cues, instead of relying on an automatic reward system that pays out whenever goals are met, AI could eventually get to the point where it behaves in a way that achieves the goal while also satisfying our preferences as humans. The downside of the current method is that it requires a lot of human feedback, which is not necessarily ideal or efficient.
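The core idea of learning from human cues can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not the paper's actual implementation: a human labeler compares pairs of behaviors, and a simple reward model is fitted to those preferences (a Bradley-Terry-style model). The feature names and simulated labels are assumptions made up for this example.

```python
import numpy as np

# Hypothetical sketch of preference-based reward learning.
# Each behavior is summarized by two illustrative features:
#   [0] progress toward finishing the course
#   [1] points farmed by driving in circles (the "cheat")
rng = np.random.default_rng(0)

def make_pair():
    a = rng.uniform(0, 1, size=2)
    b = rng.uniform(0, 1, size=2)
    # Simulated human label: prefer the behavior with more
    # actual progress, regardless of points farmed.
    label = 1.0 if a[0] > b[0] else 0.0
    return a, b, label

pairs = [make_pair() for _ in range(500)]

w = np.zeros(2)   # learned reward weights, one per feature
lr = 0.5
for _ in range(200):
    grad = np.zeros(2)
    for a, b, label in pairs:
        # P(human prefers a over b) under a Bradley-Terry model,
        # using the learned linear reward w @ x.
        p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))
        grad += (label - p) * (a - b)
    w += lr * grad / len(pairs)

print(w)
```

Because the simulated labeler only rewards progress, the fitted weights end up valuing the progress feature and largely ignoring point farming, which is exactly the behavior an automatic score-based reward failed to produce in the example above.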
