As most of us know, the military has a keen eye on wartime applications of AI. While most modern military robots are remote-controlled by humans, there are around 4,000 robots currently deployed in Iraq, and governments don’t intend to stop there. The goal is full autonomy, and we’re getting there. A recent article on the increasing autonomy of war machines mentions a few real-life examples of current semi-autonomy, such as the Predator unmanned aerial vehicle, and DARPA’s intention to explore systems that make life-and-death decisions on their own.
> When a semi-autonomous MQ-1 Predator self-navigated above a car full of al-Qaida suspects in 2002, the decision to vaporise them with Hellfire missiles was made by pilots 7,000 miles away. Predators and the more deadly Reaper robot attack planes have flown many missions since then with inevitable civilian deaths, yet working with remote-controlled or semi-autonomous machines carries only the same ethical responsibilities as a traditional air strike.
The author of the article, Noel Sharkey, a professor of artificial intelligence and robotics, goes on to explain that there are no guidelines or laws in place covering fully autonomous AI systems. Yet systems that can make their own decisions on whether to pull the trigger are high on DARPA’s to-do list.
> In attempting to allay political opposition, the US army is funding a project to equip robot soldiers with a conscience to give them the ability to make ethical decisions. But machines could not discriminate reliably between buses carrying enemy soldiers or schoolchildren, let alone be ethical. It smells like a move to delegate the responsibility for fatal errors on to non-sentient weapons.
I hadn’t heard of this project, and I’m intrigued to know how they intend to approach the matter. Sharkey concludes the article with a plea for international legislation on autonomous war machines. I’ve never been entirely convinced whether AI in warfare will result in more human casualties, or whether we’ll be pitting robots against robots, thereby decreasing human losses. But if there are explicit plans to enable autonomous human targeting, it puts a significant dent in hopes for the latter.
Artificial intelligence is advancing faster than ever, and armed forces are investing increasingly in robotics projects. South Korea, for example, is deploying armed robot border guards. You may have seen it already: the SGR-A1 sentry, developed by Samsung, is presented in the video below, complete with theme music:
As for recent advancements, DARPA issued a press release this month on the successful completion of its Autonomous Airborne Refueling Demonstration (AARD), in which the AI agent achieved greater accuracy than human pilots.
> Not only did the robo-flyboy manage to hook up with the trailing fuel point flapping up and down in turbulence by up to five feet – apparently the limit for most human stick-jockeys – it could also plug in while the tanker was turning.
“Although pilots routinely follow a tanker through turns while connected, they typically do not attempt to make contact in a turn,” says DARPA. The software improved significantly during the trials, according to NASA test pilot Dick Ewers. Last year it flew “like a second lieutenant”, he said. But the robot rookie has been upgraded, and now it’s “better than a skilled pilot”. If it were human, it would now retire and go to work for the airlines, and the military would have to start again with another second lieutenant; but the robot will stay this good forever, or improve.
I’m sure we can expect some incredible advancements for ground vehicles as well at the next DARPA Urban Challenge. Smart observations on smart machines, from both Sharkey and Ewers, and well worth our attention.
Links & References
- Sharkey’s Article in the Guardian
- MQ-1 Predator on Wikipedia
- Droid pilots beat humans at air-to-air refuelling in The Register