How ‘Killer Robots’ Could Actually Be The Key To Peace In The Future

Killer robots are a hot topic these days. From Hollywood and Silicon Valley to the military and academia, it seems as though everyone is talking about them.

Many prominent thinkers and popular public figures, including Neil deGrasse Tyson, Stephen Hawking and Elon Musk, have issued various warnings about the dangers of artificial intelligence.

Back in 2014, Musk argued:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that.

With artificial intelligence we are summoning the demon.

Likewise, movies like “Ex Machina” and “Avengers: Age of Ultron,” both currently in theaters, would likely have you believe artificial intelligence will spell the end of humanity.

They both feature (spoiler alert) exceptionally intelligent and lethal human-like robots that would definitely pass the Turing Test (a test of whether a machine can convince a person it is human).

But, luckily, we're very far away from this type of technology, right?

Not exactly.

It's highly unlikely we'll be seeing anything like Arnold Schwarzenegger in “Terminator” anytime soon, but artificial intelligence is developing rapidly and we already have technology that operates autonomously. This has raised concerns about the ultimate capabilities and safety of these technological advances, particularly in relation to warfare.

What happens when machines, or robots, are given the authority to select and engage enemies on their own? What are the implications of computers making decisions without human intervention?

These questions are especially pertinent in the ongoing discussion surrounding Unmanned Aerial Vehicles (UAVs), or drones.

Since 2002, the United States has utilized drones and drone strikes in counterterrorism operations. When President Obama entered office, he dramatically increased their use.

Americans aren't fond of “boots on the ground,” especially given the disastrous results of the wars in Iraq and Afghanistan. At the same time, terrorism is still considered to be a serious existential threat. Drones appear to offer a solution — they keep boots off the ground while still combatting terrorism.

Drones are used as tools of both surveillance and death, targeting and killing suspected terrorists. Currently, drones are operated remotely by pilots in trailers, often in Nevada, thousands of miles from where they're deployed. So even if a drone is shot down, the pilot remains safe and sound.

In terms of robotic warfare, there are also robots that defuse bombs and assist medics, and others that can be mounted with cameras and machine guns. Indeed, robotic technology is making huge strides at present.

Accordingly, P.W. Singer, one of the preeminent scholars on 21st-century warfare, suggests the future of war will be robotic, stating:

The rise of the robot on the modern battlefield has happened so fast, it is almost breathtaking — that is, if you are not a robot yourself.

When the US military invaded Iraq just over a decade ago, it only had a handful of unmanned systems, aka drones, in the air, and zero deployed into the ground forces… Today, its inventory in the air numbers well over 7,000.

But there is a great deal of apprehension over lethal weapons operating autonomously, particularly drones, as artificial intelligence and robotic technology continue to advance.

Machines with these capabilities are known as lethal autonomous weapons systems (LAWS) or, in simpler terms, killer robots.

It's important to note, as the New York Times highlights, that autonomous weapons systems already exist; many countries, including the United States, have automated missile defense systems.

But weaponized machines like drones do not currently operate autonomously; human input is required for both selecting and engaging targets. Even though drones don't carry pilots, there are still humans operating them from afar.

The worry, however, is that if we remove the human element from the operation of drones and other weaponry, allowing artificial intelligence to take the helm, it could lead to disaster.

Dr. Stuart Russell, a UC Berkeley professor of computer science, recently contended:

LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behavior.’

If we allow robots to select targets themselves, it would arguably be immoral and dangerous. They could begin to see almost anyone as a potential threat, targeting and killing humans indiscriminately.

Thus, there are a lot of questions surrounding the legality, morality and safety of lethal autonomous weapons systems.

Conceivably, once we can outsource combat to killer robots, accepting war becomes much easier. War is difficult to bear because of its human cost. When that cost is eliminated and war becomes as simple as pushing a button, it's not difficult to imagine a world perpetually consumed by conflict.

What will the implications for the world be if killer robots replace the majority of human soldiers? That's the fundamental question.

But perhaps those who fear the rise of robots are missing one crucial detail: Humans are the most destructive entities on the planet.

Robotic weaponry, like drones, is currently deadly because humans use it to kill other humans. These machines are not operating independently; it's humans making the kill shot.

US drone strikes have killed numerous civilians because of human decisions.

It's humans who lack morality. We have made warfare a perpetual aspect of civilization, despite the fact that it's the antithesis of what it means to be civilized.

This is precisely why Rosa Brooks, law professor at Georgetown University, recently contended:

I'm here to tell you that killer robots are getting a bad rap — and ethicists and rights advocates are being far too generous in their assumptions about human beings.

…Let's not romanticize humans. As a species, we're capable of mercy and compassion, but we also have a remarkable propensity for violence and cruelty.

Brooks presents a valid case, and one that many of those currently rallying against killer robots seemingly ignore. She notes humans are “easily flustered by the fog of war,” and our emotions and frailties mean we have a propensity to make awful mistakes in violent conflict.

By comparison, Brooks argues, computers “don't get mad, they don't get scared,” and they are phenomenal at rapidly processing complex information and making calculated decisions under duress.

In other words, killer robots, as frightening as they sound, could actually help institute world peace, or at least a greater measure of tranquility.

Similarly, Michael C. Horowitz and Paul Scharre, experts in political science and robotic warfare, respectively, recently penned an op-ed in which they argue autonomous weapons systems have great potential to limit civilian casualties in war.

They warn, however, that humans must always retain moral responsibility and accountability for the use of these weapons and play an attentive and direct role in their operation.

Simply put, killer robots could make war less deadly for humans, as long as they don't have too much independence. Standards of transparency and accountability must also be upheld by human operators.

In April, 90 countries and a myriad of NGOs met at a United Nations conference in Geneva to discuss lethal autonomous weapon systems. A number of the NGOs urged the UN to ban killer robots before it's too late.

Such a move would be premature given that we hardly understand this technology and can't yet conclude whether its widespread use would be overwhelmingly positive or negative.

A more appropriate route would be to place limitations on the current use of drones and drone strikes by the United States. The US drone program, in its current form, stigmatizes robotic technology and complicates this entire debate.

America's present use of drones and drone strikes in combat situations is arguably immoral, illegal and ineffective. It's setting a very dangerous precedent, and the international community must work to address this.

In having a concerted discussion about America's use of drones in combat, we might arrive at a more lucid conclusion about killer robots and artificial intelligence.

Citations: Computing Machinery and Intelligence (A.M. Turing), A Day Job Waiting for a Kill Shot a World Away (The New York Times), The Rapid Advance of Artificial Intelligence (The New York Times), The future of warfare will be robotic (CNN), UN Report Singles Out Two Navy Weapons Programs (USNI News), In Defense of Killer Robots (Foreign Policy), The Morality of Robotic War (The New York Times), Elon Musk: “With artificial intelligence we are summoning the demon” (The Washington Post), Robotics: Ethics of artificial intelligence (Nature), UK opposes international ban on developing killer robots (The Guardian), Should We Fear Artificial Intelligence? The Experts Can't Seem to Agree (Huffington Post)

John Haltiwanger

Editor

John Haltiwanger is the Senior Politics Writer at Elite Daily. He was born and raised in DC. John earned an MSc in International Relations from the University of Glasgow and a BA in History from St. Mary's College of Maryland. He loves life, and burritos.
