An open letter
calling for a ban on lethal weapons controlled by artificially
intelligent machines was signed last week by thousands of scientists and
technologists, reflecting growing concern that swift progress in
artificial intelligence could be harnessed to make killing machines more
efficient and less accountable, both on the battlefield and off. But
experts are more divided on the issue than you might expect.
The letter, presented at the International Joint Conference on Artificial Intelligence
in Buenos Aires, Argentina, was signed by many leading AI researchers
as well as prominent scientists and entrepreneurs including Elon Musk,
Stephen Hawking, and Steve Wozniak. The letter states:
“Artificial Intelligence (AI) technology has reached a point
where the deployment of such systems is—practically if not
legally—feasible within years, not decades, and the stakes are high:
autonomous weapons have been described as the third revolution in
warfare, after gunpowder and nuclear arms.”
Rapid advances have indeed been made in artificial intelligence in
recent years, especially within the field of machine learning, which
involves teaching computers to recognize often complex or subtle
patterns in large quantities of data. And this is leading to ethical
questions about real-world applications of the technology (see “How to Make Self-Driving Cars Make Ethical Decisions”).
Meanwhile, military technology has advanced to allow actions to be
taken remotely, for example using drone aircraft or bomb disposal
robots, raising the prospect that those actions could be automated.
The issue of automating lethal weapons has been a concern for
scientists as well as military and policy experts for some time. In
2012, the U.S. Department of Defense issued a directive
banning the development and use of “autonomous and semi-autonomous”
weapons for 10 years. Earlier this year, the United Nations held a
meeting to discuss the issue of lethal automated weapons and the
possibility of such a ban.
But while military drones or robots could well become more
automated, some say the idea of fully independent machines capable of
carrying out lethal missions without human assistance is more fanciful.
With many fundamental challenges still unresolved in the field of
artificial intelligence, it’s far from clear when the technology needed
for fully autonomous weapons might actually arrive.
“We’re pushing new frontiers in artificial intelligence,” says Patrick Lin,
a professor of philosophy at California Polytechnic State University.
“And a lot of people are rightly skeptical that it would ever advance to
the point where it has anything called full autonomy. No one is really
an expert on predicting the future.”
Lin, who gave evidence at the recent U.N. meeting, adds that the
letter does not touch on the complex ethical debate behind the use of
automation in weapons systems. “The letter is useful in raising
awareness,” he says, “but it isn’t so much calling for debate; it’s
trying to end the debate, saying ‘We’ve figured it out and you all need
to go along.’”
Stuart Russell,
a leading AI researcher and a professor at the University of
California, Berkeley, dismisses this idea. “It’s simply not true that
there has been no debate,” he says. “But it is true that the AI and
robotics communities have been mostly blissfully ignorant of this issue,
maybe because their professional societies have ignored it.”
One point of debate, which the letter does acknowledge, is that
automated weapons could conceivably help reduce unwanted casualties in
some situations, since they would be less prone to error, fatigue, or
emotion than human combatants.
Those behind the letter have little time for this argument, however.
Max Tegmark, an MIT physicist and founding member of the Future of Life Institute,
which coördinated the letter signing, says the idea of ethical
automated weapons is a red herring. “I think it’s rather irrelevant,
frankly,” he says. “It’s missing the big point about what is this going
to lead to if one starts this AI arms race. If you make the assumption
that only the U.S. is going to build these weapons, and the number of
conflicts will stay exactly the same, then it would be relevant.”
The Future of Life Institute has previously issued a more general
warning about the long-term dangers of unfettered AI.
“This is quite a different issue,” Russell says. “Although there is
a connection, in that if one is worried about losing control over AI
systems as they become smarter, maybe it’s not a good idea to turn over
our defense systems to them.”
While many AI experts seem to share this broad concern, some see it as a little misplaced. For example, Gary Marcus, a cognitive scientist and artificial intelligence researcher at New York University, has argued
that computers do not need to become artificially intelligent in order
to pose serious risks to financial markets or air-traffic systems.
Lin says that while the concept of unchecked killer robots is
obviously worrying, the issue of automated weapons deserves a more
nuanced discussion. “Emotionally, it’s a pretty straightforward case,”
says Lin. “Intellectually I think they need to do more work.”