Autonomous weapons and the eventual robot uprising
This past week, Elon Musk, Stephen Hawking and about a thousand artificial intelligence researchers signed an open letter calling for a ban on autonomous weapons.
The remote-operated drones that we use in modern warfare can already fly virtually undetected and use advanced targeting systems to drop bombs on buildings and people below — but the key phrase is “remote-operated.” A human is usually controlling the weapon from afar.
Professor Noel Sharkey teaches robotics and artificial intelligence at the University of Sheffield in the U.K. and is also chairman of the International Committee for Robot Arms Control. He signed the letter and told us why:
“The big concern is the step where we delegate the decision to kill people to the machine, and that hasn’t been done yet. In the U.K., we have the Taranis, which is a fully autonomous combat aircraft. And that has been tested in Australia, searching for targets on its own. You’ve got the X-47B in the United States, which looks like something Batman would fly. And that’s had very advanced testing. But then China and Russia have developments, and so has South Korea. But America’s still the leader here, as far as we know. You’ve got DARPA there developing things like an autonomous submarine, now in Phase 2 or 3 of testing, which hunts other submarines and sinks them. Then you’ve got the Crusher, which is a fully autonomous 7.5-ton truck with a machine gun on board…. The developments could escalate at any time and take off depending on the type of conflict.”
Ready to build yourself a fallout bunker?
Daniela Hernandez, who writes about AI and autonomous weapons for Fusion, says the debate over AI ethics predates this letter:
Although the debate has gotten more press lately, thanks to high-profile figures like Musk and Hawking taking notice, it’s been ongoing for some time now. Earlier this year, the United Nations called for an international treaty that would ban fully autonomous weapons. In 2012, Human Rights Watch published a report stating that “such revolutionary weapons would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict.”
Part of that “risk of death or injury” comes from the fact that AI systems make mistakes. Earlier this month, for instance, Google Photos mistook images of black people for gorillas. That’s offensive and awful, but no one died as a result of the software flaw. In military scenarios … people’s lives are on the line.