Coming Soon to the Battlefield: Robots That Can Kill

(Source: The Center for Public Integrity; issued Sept 03, 2019)

By Zachary Fryer-Biggs


Tomorrow’s wars will be faster, more high-tech, and less human than ever before. Welcome to a new era of machine-driven warfare.

Wallops Island — a remote, marshy spit of land along the eastern shore of Virginia, near a famed national refuge for horses — is mostly known as a launch site for government and private rockets. But it also makes for a perfect, quiet spot to test a revolutionary weapons technology.

If a fishing vessel had steamed past the area last October, the crew might have glimpsed half a dozen or so 35-foot-long inflatable boats darting through the shallows, and thought little of it. But if crew members had looked closer, they would have seen that no one was aboard: The engine throttle levers were shifting up and down as if controlled by ghosts. The boats were using high-tech gear to sense their surroundings, communicate with one another, and automatically position themselves so, in theory, .50-caliber machine guns that can be strapped to their bows could fire a steady stream of bullets to protect troops landing on a beach.

The secretive effort — part of a Marine Corps program called Sea Mob — was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm. It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield.

Lethal, largely autonomous weaponry isn’t entirely new: A handful of such systems have been deployed for decades, though only in limited, defensive roles, such as shooting down missiles hurtling toward ships. But with the development of AI-infused systems, the military is now on the verge of fielding machines capable of going on the offensive, picking out targets and taking lethal action without direct human input.

So far, U.S. military officials haven’t given machines full control, and they say there are no firm plans to do so. Many officers — schooled for years in the importance of controlling the battlefield — remain deeply skeptical about handing such authority to a robot. Critics, both inside and outside of the military, worry about not being able to predict or understand decisions made by artificially intelligent machines, about computer instructions that are badly written or hacked, and about machines somehow straying outside the parameters created by their inventors. Some also argue that allowing weapons to decide to kill violates the ethical and legal norms governing the use of force on the battlefield since the horrors of World War II.

But if the drawbacks of using artificially intelligent war machines are obvious, so are the advantages. Humans generally take about a quarter of a second to react to something we see — think of a batter deciding whether to swing at a baseball pitch. But the machines we’ve created have surpassed us, at least in processing speed. Earlier this year, for example, researchers at Nanyang Technological University, in Singapore, focused a computer network on a data set of 1.2 million images; the computer then tried to identify all the pictured objects in just 90 seconds, or 0.000075 seconds per image.
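The article's per-image figure follows from simple division, and comparing it to the quarter-second human reaction time gives a sense of the speed gap. A quick check, using only the numbers given above:

```python
# Figures from the article
images = 1_200_000          # size of the image data set
total_seconds = 90          # time the system took to process it
human_reaction = 0.25       # typical human visual reaction time, in seconds

per_image = total_seconds / images
print(f"{per_image} seconds per image")          # 7.5e-05, i.e. 0.000075

speedup = human_reaction / per_image
print(f"~{speedup:,.0f}x faster than a human reaction")
```

At roughly 0.000075 seconds per image, the system processes each picture more than three thousand times faster than a human can react to a single one — which is the speed advantage the paragraph describes, accuracy aside.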

The outcome wasn’t perfect, or even close: At that incredible speed, the system identified objects correctly only 58 percent of the time, a rate that would be catastrophic on a battlefield. Nevertheless, the fact that machines can act, and react, much more quickly than we can is becoming more relevant as the pace of war speeds up. In the next decade, missiles will fly near the Earth at several miles per second, too fast for humans to make crucial defensive decisions on their own. Drones will attack in self-directed swarms, and specialized computers will assault one another at the speed of light. Humans might create the weapons and give them initial instructions, but after that, many military officials predict, they’ll only be in the way. (end of excerpt)


The full story is available on the Public Integrity website.
