Robowar: The Next Generation Of Warfare Revealed – A General’s Dream, But Are They Also Humanity’s Nightmare?


The armed forces of the West are close to perfecting a new generation of killing machines: autonomous robots that know neither pity nor fear

Robowar: The next generation of warfare revealed – a general’s dream, but are they also humanity’s nightmare? (Independent, Nov 15, 2013):

Rather like a dog with a rubber bone, the Crusher likes to toy with its prey. After first sizing it up, it leaps, rolling and gripping its target until it is sufficiently chewed up and “dead”. Unlike a dog, the Crusher is capable of performing this feat on a line of parked cars.

More disturbingly, this six-tonne, six-wheeled monster developed for America’s Department of Defence and otherwise known as the Unmanned Ground Combat Vehicle, or UGCV, can pounce without the intervention or say-so of a human operator. It is an ability which, in theory, can stretch to firing the machine-gun mounted on its roof.

The UGCV is a forerunner of what many in the defence world believe is the next quantum leap in warfare – a generation of fully autonomous weapons which would be capable of crossing one of the great Rubicons of modern conflict by “deciding” for themselves when to take human life. In the words of one US general, they are the harbingers of an age where “death by algorithm is the ultimate indignity”.

But as a result of a decision taken in a Geneva conference room today, Crusher and its cousins – dubbed “killer robots” – could now be on the road to extinction after a vote by the United Nations Convention on Certain Conventional Weapons (CCW) which would pave the way to a global ban on autonomous weapons.

The unanimous vote means a multi-nation assessment of the technology will now begin, with the aim of yielding a “pre-emptive” prohibition before prototypes such as the Crusher become fully fledged weapons rolling off production lines from Texas to Beijing.

The decision by the CCW was greeted with relief by the Campaign to Stop Killer Robots, a coalition of human rights groups and campaigners who argue that there is only a narrow window before the world’s competing powers are sucked into an arms race to produce machines capable, quite literally, of outgunning each other.

Mary Wareham, of Human Rights Watch, said: “This is a small but important step on a ladder which we are saying to governments we want them to climb and create a treaty.”

Robotic military systems with varying degrees of lethality are under development – and in some cases already deployed – by the US, South Korea, China, Israel and the UK, as defence budgets around the world respond to the forces of austerity which demand greater capability for less money.

As one might expect with its vast defence budget, the US leads the sector and has advanced programmes to develop not only land-based robots like the Crusher but also the next generation of airborne drones such as the X-47B, a futuristic bat-shaped aircraft with far greater abilities to “fly” itself than the Reapers and Predators used to pick off terrorist leaders in Yemen and Pakistan.

But other countries have already gone further in finding practical uses for robotic weaponry. South Korea and Israel have deployed armed sentries on their disputed borders with North Korea and the Palestinian territories, respectively.

Both systems – arrays of sensors, loudspeakers and guns capable of delivering a lethal shot over two miles – have a mode to automatically fire on an intruder, although each country insists the option to attack remains for now under direct human control.

Professor Noel Sharkey, the eminent roboticist at Sheffield University and co-founder of the International Committee on Robot Arms Control, told The Independent: “There was once a time when the world recognised the dangers and immorality of the aerial bombardment of cities. Shortly after that, the Second World War broke out and we all know what that resulted in.

“We must not allow the same tit-for-tat process to start with robotic weaponry. There is an absolute red line here which is that a machine must never be delegated the decision to kill a human.”

He added: “There are such machines out there, but they are very far from being able to correctly discern the point at which to apply lethal force. We have been working on artificial intelligence since the 1950s but the difficulties are immense. A machine might be able to tell the difference between a ship and a tank, but it may well struggle to tell the difference between a tank and a civilian lorry with a plank of wood sticking out of it.”

Professor Sharkey, who said the oft-cited Hollywood example of killer robots in the Terminator film series was “unhelpful” in explaining the reality of the technology, added that thought also had to be given to the risk of the weaponry falling into the hands of totalitarian regimes or terrorists.

Proponents of the technology argue that, if properly fettered by software advanced enough to tell the difference between, say, a large child with a toy gun and a small adult with an AK-47, it could have a role in future wars. A robot cannot rape, nor can it be motivated by cruelty or vengeance, and it can crunch data to avoid civilian casualties at a rate no human could match – or so the argument goes.

But opponents say the delivery of death by a machine violates the first law of robotics as laid out in 1942 by the science fiction writer Isaac Asimov – that a robot may not injure a human being or, through inaction, allow one to come to harm – and even within the military there are concerns that such scenarios cross a fundamental boundary.

A former US Air Force general made an impassioned plea earlier this year for action on a treaty to ban the killer machines. Major General Robert Latiff wrote: “Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object. When robots rule warfare, utterly without empathy or compassion, humans retain less intrinsic worth than a toaster – which at least can be used for spare parts.”

Governments have not been deaf to such qualms. Britain’s Ministry of Defence, which is developing a “super-drone”, has acknowledged that autonomous weapons meeting legal requirements are theoretically possible, but says the development of such systems would be expensive and difficult.

The US Defence Department issued a directive last year requiring that the decision to deploy lethal force must always remain with a human. But Human Rights Watch warned: “The policy of self-restraint [the directive] embraces may also be hard to sustain if other nations begin to deploy fully autonomous weapons systems.”

Killing machines: Next generation of warfare

Crusher

Developed for the research arm of the Pentagon, the Crusher is an autonomous robotic armoured vehicle capable of picking its way across a battlefield using an array of sensors and crushing parked civilian cars. During tests in Texas it was fitted with a machine gun, but the vehicle remains a prototype.

X-47B

Billed as the answer to American generals’ dreams of a generation of “super-drones” capable of being launched from aircraft carriers, this bat-like jet is far more autonomous than the current crop of pilotless vehicles being used in Afghanistan and Pakistan. Although prototypes are unarmed, it is capable of carrying weaponry.

Taranis

Named after the Celtic god of thunder, Taranis is the British answer to the X-47B and presages an age when combat aircraft will be pilotless. Taranis, which is built by BAE Systems, made its maiden flight in Australia last month and has been described as being capable of “full autonomy”.

“Invisible Sword” and SKAT

Little is known about Chinese and Russian research into “killer robots” other than that it is going on. Both countries have unveiled pilotless military jets similar to those being developed by Britain and the US. The Chinese prototype – called Anjian or “Invisible Sword” – is considered to be an air-to-air combat plane, while the Russian Skat, unveiled by MiG, is a stealthy strike drone concept.

SGR-1

Developed by South Korea to watch the border with North Korea, this fixed robot sentry is capable of shooting without human command. Its sensors can detect a human from as far away as two miles and it can fire a machine gun or a grenade launcher. Israel has deployed similar technology, but both countries insist the robots will only fire after being given human orders.

Guardium

An Israeli robotic armed vehicle, it has been developed to patrol borders and sensitive sites such as airports. A promotional video describes a scenario where the vehicle automatically transmits co-ordinates for a missile strike to a pilotless drone.
