Killer Robots!

[Image: Summer Glau, The Sarah Connor Chronicles]

Isaac Asimov put forward the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Unfortunately, these laws are only fiction.
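Even as fiction, though, the Three Laws describe a strict priority ordering, and that much at least is easy to sketch in code. Here is a toy version in Python; every field in it is a hypothetical oracle that no real robot possesses, which is rather the point:

```python
# Toy encoding of the Three Laws as an ordered rule list.  Purely
# illustrative: each field below is a hypothetical oracle that no real
# robot has, which is exactly why the laws remain fiction.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool    # would the action injure a human?
    allows_harm: bool    # would inaction let a human come to harm?
    obeys_order: bool    # does it follow the order it was given?
    protects_self: bool  # does it preserve the robot's existence?

# Checked in priority order: an earlier law overrides a later one, so an
# order to harm a human is rejected before the Second Law is even consulted.
LAWS = [
    ("First Law",  lambda a: not a.harms_human and not a.allows_harm),
    ("Second Law", lambda a: a.obeys_order),
    ("Third Law",  lambda a: a.protects_self),
]

def judge(a: Action) -> str:
    for name, satisfied in LAWS:
        if not satisfied(a):
            return f"forbidden by the {name}"
    return "permitted"

# "Open fire on the crowd" fails at the First Law, however direct the order.
print(judge(Action(harms_human=True, allows_harm=False,
                   obeys_order=True, protects_self=True)))
```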

In reality, some countries (the UK, for example) do have rules limiting the use of robots against humans (“no use of lethal force without human intervention”). However, these rules of engagement, even where they do exist, are fast becoming outdated. As machines become steadily more capable, war is becoming just one more thing that the rich and powerful can get a machine to do for them.

Bad guys gone, let’s eat

According to a report published by the US Air Force last month, “Advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.” A scary thought? Or a great relief?

Getting robots to fight on the front lines negates the need for human beings to put themselves in harm’s way. Why send in your nation’s sons (or daughters) when you can send in an automaton? Even if the other side does not have technology advanced enough to afford them such a luxury? Especially if the other side does not have such technology?

[Image: Israel's Viper]

“A significant part of Israel’s defence budget goes towards weapons that minimize the loss of human lives, both Israel’s and its enemies’,” says blogger Elder of Ziyon, referring to the Viper, a robot designed to fight Palestinian or Lebanese Hezbollah guerrillas. However, he goes on to say, “In the end, the effectiveness of this hugely expensive robot is roughly similar to that of a Jihadist intent on reaching Paradise”, which is where the thoughts begin to get slightly scary. It is no longer a case of minimising human casualties so much as justifying them.

In the USA, unmanned aerial vehicles (UAVs) let soldiers fight a war and be back home in time for dinner, since the operators can be thousands of kilometres away from the craft they attack with.

Artificial intelligence and robotics professor Noel Sharkey recently told the BBC that, by his estimate, 60 such attacks were carried out in Pakistan between January 2006 and April this year, killing 14 al-Qaeda leaders but 687 civilians.

Besides the obvious, that more civilians were killed than “bad guys”, there are two further startling things about these statistics. First, the sheer disproportion of casualties: the wealthy side with the technology need not lose a single soldier. Second, the soldiers who killed those 701 people did so by proxy, subject to psychological distancing, and never have to carry the full weight of those deaths.
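A quick back-of-the-envelope calculation makes the disproportion concrete (taking Sharkey's reported figures at face value):

```python
# Sharkey's reported figures: Pakistan, January 2006 to April 2009.
militants = 14
civilians = 687
strikes = 60

total = militants + civilians                                  # 701 dead
print(f"{civilians / total:.0%} of the dead were civilians")   # 98%
print(f"{civilians / militants:.0f} civilians per militant")   # 49
print(f"{total / strikes:.1f} deaths per strike on average")   # 11.7
```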

It doesn’t feel pity, or remorse, or fear

[Image: The Terminator]

“It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead,” Kyle Reese tells Sarah Connor in The Terminator.

The same can be said of the war machines being developed today. Robots cannot differentiate between guilty and innocent, soldier and civilian. A robot possesses no such thing as “common sense” or a conscience. The reason so many are used in medicine and industry is their tireless mechanical precision.

Films like The Terminator and The Matrix spin the ominous idea of robots overthrowing the human race. However, if we are the ones putting weapons into their hands and teaching them to kill, perhaps we should be more concerned, as Sharkey is, that robots are already overthrowing our humanity.

Related links:

More about Artificial Intelligence

BBC reports on Sharkey’s warnings


~ by tallulahlucy on August 5, 2009.

5 Responses to “Killer Robots!”

  1. We still have to program them to do all of these things. Without people, robots cannot function.
    If a robot or machine does malfunction and injure or kill, then there is a fault with the programming, which was done by a human.
    If it has a command to injure or kill, then we have to give it permission to do that. If it does it anyway, then it malfunctioned, and precautions and safety measures were not put in place by the humans programming it.
    If the command is secretly programmed into the system then, again, a human was responsible.
    Machines will never have brains like humans do. Yes, our brains are just electrical currents and impulses and seem like machinery, but they are still completely different to the hardware in robots.
    We program ourselves. If robots are created to program themselves, they can only program within the options we give them. They cannot go outside their instructions because they don’t know that anything is beyond such limits; they won’t even be aware of the limits, because they are not truly conscious beings.

    Robots can improve our world in so many ways and we should use them rather than fear the fictional ideas of robots destroying us.
    However, we still need to be responsible when creating them and use them for peace, not war. Ultimately it will be our own fault if robots do something horrible.

    • Thank you for your comment.

      Yes, it is true that humans are responsible for programming robots, but remember that no robot this advanced is ever programmed by one man (or woman); it is built by teams. And when it comes to military robots, the work is so classified that no one person working on the program knows what is going on with the end result, which means governments can develop robots to do almost anything.

      Another thing to take into account is that robots are now being programmed to learn.
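      As a very rough illustration of what “programmed to learn” means, here is a minimal reward-driven learner (a toy sketch; the scenario and numbers are invented, but the point is that the behaviour comes from experience rather than from any rule a programmer wrote):

```python
# Toy sketch of reward-driven learning: nobody writes the rule
# "prefer action B"; the agent discovers it from feedback alone.
import random

actions = ["A", "B"]
value = {a: 0.0 for a in actions}   # the agent's learned estimates

def reward(action):                 # the world, hidden from the agent
    return 1.0 if action == "B" else 0.0

for step in range(1000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(value, key=value.get)
    # Nudge the estimate toward the observed reward.
    value[a] += 0.1 * (reward(a) - value[a])

print(value)  # the agent favours B without ever being told to
```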

      I think you may find this article interesting.

  2. Excellent read and a timely post, given the recent closed-door International Joint Conference on Artificial Intelligence in California. It also links into a changing mindset about how the wars of the future could be conducted. For example, the US Army has recently begun developing video games to encourage gamers to be all they can be and to actively recruit gamers into the military.

    • Thank you. Yes, Noel Sharkey was a speaker at that conference. That is very interesting about video games; I think I will have to take a look at that in a future blog!

  3. Interesting post.
    What I want to say is: don’t forget that all those famous laws are uncomputable. It is impossible to know the consequences of an action.
    The only way to know the consequences of a specific agent’s action (on the environment, for example) is to have an incredibly superior complexity (this is directly connected to the halting-problem argument).
    So we can choose to construct stupid, well-predictable robots, or more useful, powerful, unpredictable robots (directly connected to universality).
    For example, suppose you define the law “The robot cannot kill a human, or must kill as few humans as possible.”
    Now, if you see the robot kill a human, is it violating its law? Or was that the best of the available choices? Without killing that person, how many people might have died? It is impossible to know exactly the consequences of an action.
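    A rough sketch of that reduction, with every name hypothetical (the argument being precisely that a total, always-correct predicts_harm cannot exist):

```python
# Sketch of the reduction: a perfect "does this behaviour ever harm a
# human?" predictor would decide the halting problem, which is known
# to be undecidable.  All names here are hypothetical.

def harm_a_human():
    pass  # stands in for any harmful act

def predicts_harm(behaviour) -> bool:
    """Hypothetical oracle: True iff running `behaviour` ever causes harm."""
    raise NotImplementedError("no such total predictor can exist")

def halts(program, data) -> bool:
    """If predicts_harm existed, we could decide whether program(data)
    halts, contradicting Turing's halting theorem."""
    def behaviour():
        program(data)   # loops forever if program(data) never halts...
        harm_a_human()  # ...so harm occurs exactly when it halts
    return predicts_harm(behaviour)
```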
