AI War Machines March On

[Image: Predator UAV sketch]

As most of us know (and it shouldn't come as a surprise to those who don't), the military has a keen eye on applications of AI in warfare. While most modern military robots are remote-controlled by humans, there are around 4,000 robots currently in Iraq, and governments don't intend to stop there. The goal is full autonomy, and we're getting there. A recent article on the increasing autonomy of war machines mentions a few real-life examples of current semi-autonomy, such as the Predator unmanned aerial vehicle, and DARPA's intention to explore systems that make life-and-death decisions on their own.

When a semi-autonomous MQ-1 Predator self-navigated above a car full of al-Qaida suspects in 2002, the decision to vaporise them with Hellfire missiles was made by pilots 7,000 miles away. Predators and the more deadly Reaper robot attack planes have flown many missions since then with inevitable civilian deaths, yet working with remote-controlled or semi-autonomous machines carries only the same ethical responsibilities as a traditional air strike.

[Image: DARPA to-do list on a yellow sticky note]

The author of the article, Noel Sharkey, a professor of artificial intelligence and robotics, goes on to explain that there are no guidelines or laws in place that cover fully autonomous AI systems. Yet systems that can make their own decisions about whether to pull the trigger are high on DARPA's to-do list.

In attempting to allay political opposition, the US army is funding a project to equip robot soldiers with a conscience to give them the ability to make ethical decisions. But machines could not discriminate reliably between buses carrying enemy soldiers or schoolchildren, let alone be ethical. It smells like a move to delegate the responsibility for fatal errors on to non-sentient weapons.

I hadn't heard of this project before, and I'm intrigued to know how they intend to approach the matter. Sharkey concludes the article with a plea for international legislation on autonomous war machines. I've never been entirely sure whether AI in warfare will result in more human casualties, or whether we'll be pitting robots against robots, thereby decreasing human loss. But if there are explicit plans to enable human targeting, it puts a significant dent in hopes for the latter.
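
To make the discrimination problem concrete, here's a minimal sketch (in Python, with entirely made-up names and numbers) of the kind of "withhold fire unless near-certain" constraint such a conscience module might wrap around a target classifier. The sketch is mine, not the Army project's design, and it glosses over exactly the part Sharkey doubts: producing a reliable classification in the first place.

    from dataclasses import dataclass

    @dataclass
    class Classification:
        label: str         # e.g. "combatant", "civilian", "unknown" (assumed labels)
        confidence: float  # the classifier's own certainty estimate, 0.0 to 1.0

    # Hypothetical threshold; arguably even near-certainty is too permissive.
    ENGAGE_THRESHOLD = 0.999

    def may_engage(c: Classification) -> bool:
        # Default to withholding fire: engage only on a high-confidence
        # "combatant" label. The unreliable step is upstream, in producing
        # the Classification at all: a bus of soldiers and a bus of
        # schoolchildren can look identical to a sensor.
        return c.label == "combatant" and c.confidence >= ENGAGE_THRESHOLD

    # A noisy classifier output falls short of the threshold, so no engagement.
    print(may_engage(Classification("combatant", 0.92)))  # False

The point of the toy is that the threshold check is the easy part; everything Sharkey worries about lives inside the classifier.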

Artificial intelligence is advancing faster than ever, and armed forces are increasingly investing in robotics projects. South Korea, for example, is deploying armed robot border guards. You might have seen it: the SGR-A1 sentry, developed by Samsung, presented in the video below (theme music included):

As for recent advancements, DARPA issued a press release this month on the successful completion of its Autonomous Airborne Refueling Demonstration (AARD), in which the AI agents managed greater accuracy than human pilots.

Not only did the robo-flyboy manage to hook up with the trailing fuel point flapping up and down in turbulence by up to five feet – apparently the limit for most human stick-jockeys – it could also plug in while the tanker was turning.

“Although pilots routinely follow a tanker through turns while connected, they typically do not attempt to make contact in a turn,” says DARPA. The software improved significantly during the trials, according to NASA test pilot Dick Ewers. Last year it flew “like a second lieutenant”, he said. But the robot rookie was upgraded, and now it’s “better than a skilled pilot”. If it were human, it would now retire and go to work for the airlines, and the military would have to start again with another second lieutenant; but the robot will stay this good forever, or improve.
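
Out of curiosity about what hooking up with a drogue flapping five feet in turbulence demands of software, here's a toy closed-loop sketch in Python: a textbook PID controller chasing a bobbing target. The gains, time step and drogue model are all assumptions of mine for illustration only; the actual AARD system is of course vastly more sophisticated.

    import math

    KP, KI, KD = 0.8, 0.1, 0.3  # assumed PID gains, not tuned for any real aircraft
    DT = 0.05                   # control-loop time step in seconds (assumed)

    def drogue_position(t: float) -> float:
        # Idealized turbulence: the drogue bobs +/- 2.5 ft, i.e. up to
        # five feet of total travel, as in the description above.
        return 2.5 * math.sin(2.0 * t)

    probe = 0.0      # refueling probe's vertical position, ft
    integral = 0.0
    prev_error = 0.0

    for step in range(400):  # 20 seconds of simulated flight
        t = step * DT
        error = drogue_position(t) - probe
        integral += error * DT
        derivative = (error - prev_error) / DT
        prev_error = error
        # For simplicity, treat the PID output directly as probe velocity.
        probe += (KP * error + KI * integral + KD * derivative) * DT

    print(f"tracking error after 20 s: {abs(prev_error):.2f} ft")

Even this toy hints at why the robot wins: the loop corrects every 50 milliseconds, without fatigue, which is roughly the advantage Ewers describes.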

I'm sure we can expect some incredible advancements for ground vehicles as well at the next DARPA Urban Challenge. Smart observations on smart machines, from both Sharkey and Ewers, well worth our attention.


One Comment

  1. Dead Ringer

    Great article Hrafn. One small but important comment: I would not put “advance science” in second place on the DARPA to-do list: that is absolutely not on their list at all (!) but simply a re-wording of their main objective, which is to advance the state of WARFARE. If advancing technology helps them on that path, so be it. But that is simply a means to an end. And that end completely and absolutely defines DARPA, namely, that of producing killing machines and people to operate them. Disguising their main objective as “advancing science” helps keep many placated; and producing numerous side effects (notably the Internet) as “evidence” that they are “doing good for the people” is a nasty little trick that, unfortunately and sadly, way too many people fall for. But those people, when it comes down to it, are really being duped.
