The BBC article Killer robots: Experts warn of ‘third revolution in warfare’ is tragic.
More than 100 leading robotics experts are urging the United Nations to take action in order to prevent the development of “killer robots”.
They don’t realize that bridge was crossed long ago. Automated weapons systems that can make the fire-or-not call are already in the field. Anyone can buy such systems off-the-shelf right now. Look into the Kalashnikov systems.
The 116 experts are calling for a ban on the use of AI in managing weaponry.
The scientists are about ten years too late.
The article says that the technology does not yet exist. The article is completely wrong. I’ve played with the open-source AI systems; the best for me is TensorFlow. There are others, but it is the one that balances capability with accessibility in a way I can actually work with. Believe me, an idiot can construct a basic AI with an attached weapons system.
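To show just how low the bar is, here is a toy sketch in plain Python. Nothing here is a real weapons system or a real TensorFlow model; every name and threshold is hypothetical. The point is that the core “fire or don’t fire” decision reduces to a score and a comparison, which is exactly why this technology cannot be bottled up:

```python
# Toy sketch, for illustration only. A crude hand-rolled "detector" scores a
# contact, and a one-line trigger rule acts on the score. All names, weights,
# and thresholds are made up; a trained model would simply replace the math.

def detector_score(heat_signature: float, movement: float) -> float:
    """A weighted sum standing in for a trained classifier's output."""
    return 0.6 * heat_signature + 0.4 * movement

def should_engage(heat_signature: float, movement: float,
                  threshold: float = 0.7) -> bool:
    """The entire fire/no-fire call reduced to a single comparison."""
    return detector_score(heat_signature, movement) >= threshold

# A warm, fast-moving contact trips the rule; a cold, still one does not.
print(should_engage(0.9, 0.8))  # True
print(should_engage(0.2, 0.1))  # False
```

That is the whole argument in a dozen lines: the hard part was never the trigger logic.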
The UN can’t stop it. The BBC can’t stop it. The movie The Terminator put it best:
It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop… ever, until you are dead!
Why automate? Obviously military thinkers are not trying to screw the pooch, so there are reasons for automation:
- AI controlled weapons systems will not sleep.
- An automated weapons system that has no human crew reduces exposure of our troops.
- Automated systems have no medical plans.
- Automated systems are very much a fixed cost.
- Automated systems are THEORETICALLY more consistent than living troops.
- Automated systems will carry out their function in any environment.
- They can react with speed that humans cannot match. Who would not want our troops protected by flash-deployable AI weapons systems?
The cons come down to a few basic points:
- AI can be hacked. Your “troops” have just changed sides…
- AI cannot be reasoned with. Any event outside its programming could be lethal.
- AI is not humanity. Do we want machines deciding on the kill shots?
- If you don’t trust your leaders, and you really should not, can you trust the AI that they control? Accountability is such a slippery concept when one can say “the robot did it!”
- Understand that AI sees casualties and munitions as numbers. It has no soul and no concept of good and evil.
I submit the following scenario to illustrate these points:
- Bad guys line up a bunch of prisoners and child hostages.
- They march them towards the border post that has a weaponized AI package.
- THE AI PACKAGE WILL FIRE because THE AI PACKAGE MUST FIRE.
Think about it: an AI that wouldn’t fire in those circumstances would be useless. Some would say, yes, but a central command could override the AI.
First issue: What if communications are cut off? If communications are cut off, do you really want a weapons system that just gives everyone a free pass? I don’t think so! Thus, if communications are cut off, get the body bags and a bunch of sponges ready…
Second issue: Let’s say communications are not cut off. If central command will override the AI, why have the weapons system in the first place? Simple logic: if you have a weapons system defending an area, it must fire or it has no reason to be there. If all it takes is a few hostages to get past the weapons system, your AI just became more useless than a Timex Sinclair 1000 without a power brick…
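The two issues boil down to a few lines of decision logic. Here is a hypothetical sketch (every name is invented for illustration): with comms down there is no override path and the default policy fires; with comms up, a command that always overrides reduces the weapon to decoration.

```python
# Hypothetical sketch of the border-post dilemma described above.
# comms down  -> no human in the loop, the default policy decides (it fires).
# comms up    -> if command always overrides, the weapon never fires at all.

from typing import Optional

def engage(target_detected: bool, comms_up: bool,
           command_override: Optional[bool]) -> bool:
    if not target_detected:
        return False
    if not comms_up:
        return True                   # Issue 1: no comms, no override -- it must fire.
    if command_override is not None:
        return not command_override   # Issue 2: a human is really making the call.
    return True

print(engage(True, comms_up=False, command_override=None))  # True: get the body bags
print(engage(True, comms_up=True, command_override=True))   # False: hostages walk right past
```

Either branch proves the point: the system is only autonomous exactly when no one can stop it.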
I read the following somewhere:
“An AI that can’t pull the trigger is useless. An AI that can pull the trigger cannot be trusted.”
Summary: The systems exist and there is no stopping the technology. The U.N. is, was, and will always be a joke, and is not really an issue in any case. Understand that the world has changed while everyone slept…
Let freedom ring, baby!