Rogue One highlights an uncomfortable fact – military robots can change sides

The droid K-2SO from Rogue One: A Star Wars Story. Credit: Lucasfilm Ltd

The latest Star Wars movie, Rogue One, introduces us to a new droid, K-2SO, the robotic lead of the story.

Without giving away too many spoilers, K-2SO is part of a Rebellion freedom-fighter group tasked with stealing the plans to the first Death Star, the infamous moon-sized battle station from the original Star Wars movie.

The significance of K-2SO is his backstory. K-2SO is an autonomous military robot that used to fight for the Rebellion's enemy – the Galactic Empire. He was captured and reprogrammed by the rebels and is now a core member of the Rogue One group.

K-2SO is not the first robot to swap sides in a movie. Remember that the Terminator's initial mission in the first movie was to kill Sarah Connor, before it was reprogrammed in later movies to protect her and her son, John Connor.

This raises the question of whether, in real life, a programmed military machine could be persuaded, reprogrammed or hacked into defecting.

Soldiers swapping sides

The idea of human soldiers swapping sides during wars and conflicts is nothing new. There are numerous examples of soldiers surrendering and then announcing that they have information and would like to help and sometimes fight for their captors.

It is the information about battle plans and tactics that these defecting soldiers have that could potentially change the course of a battle or a military campaign.

One of the most famous defectors was General Benedict Arnold. Arnold was a general in the Continental Army during the American War of Independence, but he defected to the British Army, in which he became a brigadier general. He led British forces against the Americans and retired to London after the war.


Weapons technology

The industrial revolution and the rise of mechanical weapons such as tanks, aircraft and submarines in the early 20th century changed the nature of defecting.

It was the development of ever more advanced weapons that gave a nation its advantage over its military rivals. Stealing an enemy's new weapons was almost impossible, so it fell to defectors to deliver the plans of those weapons – or, sometimes, the weapons themselves – to the other side.


Martin Monti, of the United States Army Air Corps, defected in 1944, flying a photographic reconnaissance version of the P-38 Lightning aircraft to German-occupied northern Italy and handing it over to the Nazi military. He then joined the SS.

In 1976, during the Cold War, Viktor Belenko flew his highly secret MiG-25 jet fighter from the USSR to Japan.

NATO had long wanted to get the technical details of this aircraft, as it was rumoured to be able to fly at three times the speed of sound.

Japan gave the US access to the MiG and Belenko was eventually granted US citizenship. The plane was stripped and analysed by the Americans, who also had a copy of the aircraft's technical manual that Belenko had brought with him.

Defectors not necessary

In the 21st century we have seen the development of remotely controlled systems for reconnaissance, surveillance and the delivery of weapons to targets. Such systems are likely to be very important in the future of defence capabilities.

(Left to right) Cassian Andor (Diego Luna), Jyn Erso (Felicity Jones) and K-2SO (Alan Tudyk) in Rogue One: A Star Wars Story. Credit: Lucasfilm/Jonathan Olley

As this equipment does not require a person on board, human defectors or spies are no longer needed to deliver this robotic hardware to the opposition.

It is impossible to know for sure when the first unmanned system was successfully captured. But because these systems rely on external radio commands and infrastructure, such as GPS, it is plausible that they can be taken over – and it has almost certainly already happened.

In 2011, a US Air Force drone came down in Iran and was recovered by the Iranian state. That aircraft was a highly secretive RQ-170 stealth drone and the Iranians claimed that they had "spoofed" the drone into landing in Iran by creating fake GPS signals.

Experts in the US doubted those claims, but however the drone was captured, Iran ended up with a nearly intact state-of-the-art stealth drone.

They put it on display to international media and stated that they would reverse engineer it and create their own version of this high-tech robotic surveillance aircraft. Iran now appears to have a squadron of these stealth drones, all based on the original captured aircraft.
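
To see why a reliance on unauthenticated signals such as GPS matters, here is a minimal sketch in Python – invented names and numbers, nothing like a real autopilot – of a waypoint controller that steers using whatever position fix it is fed. Civilian GPS signals carry no authentication, so the controller cannot tell a genuine fix from a forged one and will steer wherever the stronger, fake signal implies it should.

    # A toy, purely illustrative waypoint controller. All names and numbers
    # are invented; real drone autopilots do not work like this.
    from dataclasses import dataclass

    @dataclass
    class Fix:
        lat: float
        lon: float

    class NaiveAutopilot:
        """Steers toward its target using whatever position fix it is fed."""
        def __init__(self, target: Fix):
            self.target = target

        def heading_correction(self, reported: Fix):
            # The controller has no way to verify the fix is genuine:
            # civilian GPS signals are unauthenticated, so a forged fix
            # is treated exactly like a real one.
            return (self.target.lat - reported.lat,
                    self.target.lon - reported.lon)

    autopilot = NaiveAutopilot(target=Fix(lat=34.0, lon=62.0))
    genuine = Fix(lat=33.9, lon=61.9)   # where the aircraft really is
    spoofed = Fix(lat=34.1, lon=62.1)   # where the attacker says it is
    print(autopilot.heading_correction(genuine))  # small nudge toward target
    print(autopilot.heading_correction(spoofed))  # steers the opposite way,
                                                  # trusting the forged fix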

Trusting autonomous robots

An obvious way to prevent the claimed GPS spoofing, or other similar hacks, is to create systems that are truly autonomous and do not require or use external communication systems.
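
As a contrast with the spoofable controller above, here is an equally artificial sketch of a "sealed" controller that accepts no external data after launch and navigates only by dead reckoning from onboard sensors. There is nothing left to spoof – although, in practice, dead reckoning drifts over time, which is one reason designers are reluctant to abandon external fixes entirely.

    # A toy, purely illustrative controller that accepts no external data
    # after launch and navigates by dead reckoning from onboard sensors.
    # Names are invented; real inertial navigation is far more involved.
    class SealedNavigator:
        def __init__(self, x: float, y: float):
            self.x, self.y = x, y
            self.launched = False

        def launch(self):
            self.launched = True   # from now on, external input is ignored

        def integrate_onboard(self, vx: float, vy: float, dt: float):
            # Position is advanced only from onboard velocity estimates.
            self.x += vx * dt
            self.y += vy * dt

        def receive_external(self, message):
            if self.launched:
                return             # forged fixes and commands are discarded

    nav = SealedNavigator(0.0, 0.0)
    nav.launch()
    nav.receive_external({"fake_fix": (999.0, 999.0)})   # has no effect
    nav.integrate_onboard(vx=10.0, vy=0.0, dt=1.0)
    print(nav.x, nav.y)   # 10.0 0.0 – unaffected by the injected message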

Such robots should be immune to hacking once deployed on their missions. But the development and use of truly autonomous robot weapon systems is a controversial topic.

The Campaign to Stop Killer Robots was launched in 2013 both to educate the general public about the possible dangers of autonomous killer robots and to try to persuade the highest-level decision-makers in governments and at the United Nations that such robots should be banned.

The International Committee of the Red Cross video about the rules of war and future autonomous military robots.

The central principle of the campaign is that a human should make the final decision before a weapon is launched at its intended target.

The International Committee of the Red Cross has pointed out that the so-called "rules of war" must be coded into autonomous military robots of the future.

Some robotics engineers and researchers are working on exactly this and have started to develop the algorithms that will enable autonomous military robots to be ethical. They propose that robots may be able to protect civilians better than human soldiers can.
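
As a rough illustration of what "coding in" such rules might look like – a deliberately simplified sketch with invented categories, not a description of any real targeting system – every proposed engagement could be forced through hard-coded constraints plus an explicit human authorisation:

    # A deliberately simplified sketch of hard-coded engagement constraints.
    # The categories and rules are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Target:
        category: str              # e.g. "combatant", "civilian", "unknown"
        near_protected_site: bool  # e.g. a hospital or school nearby

    def engagement_permitted(target: Target, human_authorised: bool) -> bool:
        if target.category != "combatant":   # never engage the unidentified
            return False
        if target.near_protected_site:       # never engage protected sites
            return False
        return human_authorised              # a human makes the final call

    print(engagement_permitted(Target("combatant", False), True))    # True
    print(engagement_permitted(Target("unknown", False), True))      # False
    print(engagement_permitted(Target("combatant", False), False))   # False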

But all of this assumes that the human creators of the robots are acting ethically and want the robots to also be ethical.

What happens if a future autonomous soldier robot is tasked with doing something that it decides goes against its code of ethics? Will it just say "no", or will it conclude that the most appropriate action is to turn on its owner? Would it defect to the other side? How would loyalty be built into an autonomous robot and how would the robot's creator ensure that it could be trusted to not switch sides?

In the coming years you are likely to read dozens of stories about research into trusted autonomy. It is a hot research topic and a critically important one as the world begins to outsource its fighting to robots.

Rogue One: A Star Wars Story may be set a long time ago in a galaxy far, far away, but its plot lines are actually based in our reality.

Dealing with states that build frightening new weapons, stealing plans to those weapons and then fighting back with robots is not science fiction. And it may be that we will soon see those fighting robots turn on their creators.
