There has been a lot of concern surrounding the use of artificial intelligence in weapon systems. Several well-regarded tech leaders have now pledged that they will not use the technology to develop "lethal autonomous weapons." The signatories include the three co-founders of DeepMind, Google's AI subsidiary, as well as Elon Musk.

The pledge they have signed warns about the moral and pragmatic threats posed by AI-powered weapons that can select and engage targets without human intervention. The signatories are of the view that the decision to take a human life "should never be delegated to a machine," and that such weaponry would be "dangerously destabilizing for every country and individual."

The pledge was published at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm. It was organized by the Future of Life Institute, a research institute that works to mitigate existential risks to humanity.

The signatories include Shane Legg, Mustafa Suleyman, and Demis Hassabis, the co-founders of DeepMind. SpaceX and Tesla CEO Elon Musk is also on the list, alongside Skype founder Jaan Tallinn and several world-renowned artificial intelligence researchers.

The fact remains, though, that this pledge may not have a big impact on international policy. Countries covertly developing autonomous weapons would likely continue to do so, if only out of fear of being the only ones on the battlefield without such advanced weaponry.
