Movies such as 2001: A Space Odyssey, Blade Runner and The Terminator brought rogue robots and computer systems to our cinema screens. But these days, such classic science fiction spectacles don't seem so far removed from reality.
Increasingly, we live, work and play with computational technologies that are autonomous and intelligent. These systems include software and hardware with the capacity for independent reasoning and decision making. They work for us on the factory floor; they decide whether we can get a mortgage; they track and measure our activity and fitness levels; they clean our living room floors and cut our lawns.
Autonomous and intelligent systems have the potential to affect almost every aspect of our social, economic, political and private lives, including mundane everyday aspects. Much of this seems innocent, but there is reason for concern. Computational technologies impact on every human right, from the right to life and the right to privacy to freedom of expression and social and economic rights. So how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?
AI and human rights
First, there is a real fear that increased machine autonomy will undermine the status of humans. This fear is compounded by a lack of clarity over who will be held to account, whether in a legal or a moral sense, when intelligent machines do harm. But I'm not sure that the focus of our concern for human rights should really lie with rogue robots, as it seems to at present. Rather, we should worry about the human use of robots and artificial intelligence and their deployment in unjust and unequal political, military, economic and social contexts.
This worry is particularly pertinent with respect to lethal autonomous weapons systems (LAWS), often described as killer robots. As we move towards an AI arms race, human rights scholars and campaigners such as Christof Heyns, the former UN special rapporteur on extrajudicial, summary or arbitrary executions, fear that the use of LAWS will put autonomous robotic systems in charge of life-and-death decisions, with limited or no human control.
AI also revolutionises the link between warfare and surveillance practices. Groups such as the International Committee for Robot Arms Control (ICRAC) recently expressed their opposition to Google's participation in Project Maven, a military program that uses machine learning to analyse drone surveillance footage, analysis that could be used to inform extrajudicial killings. ICRAC appealed to Google to ensure that the data it collects on its users is never used for military purposes, joining protests by Google employees over the company's involvement in the project. Google recently announced that it will not be renewing its contract.
In 2013, the Edward Snowden revelations highlighted the extent of surveillance practices, teaching us much about the threat to the right to privacy and about the sharing of data between intelligence services, government agencies and private corporations. The recent controversy surrounding Cambridge Analytica's harvesting of personal data via social media platforms such as Facebook continues to cause serious apprehension, this time over manipulation and interference in democratic elections that damage the right to freedom of expression.
Meanwhile, critical data analysts challenge the discriminatory practices associated with what they call AI's "white guy problem": the concern that AI systems trained on historical data reproduce the racial and gender stereotypes embedded in that data, perpetuating discrimination in areas such as policing, judicial decisions and employment.
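To make that mechanism concrete, here is a minimal sketch, entirely hypothetical and not drawn from any real system, of how a model trained on biased historical decisions reproduces the bias. The hiring scenario, the synthetic data and every variable name are illustrative assumptions.

```python
# A toy illustration (synthetic data, hypothetical hiring scenario) of how
# a model trained on biased historical decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: two demographic groups, labelled 0 and 1.
group = rng.integers(0, 2, size=n)
# Genuine qualification score, identically distributed in both groups.
skill = rng.normal(0, 1, size=n)

# Historical labels: past decision-makers favoured group 1, so the
# recorded outcome depends on group membership, not only on skill.
hired = (skill + 0.8 * group + rng.normal(0, 0.5, size=n)) > 0.5

# Train on the biased history, protected attribute included.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two equally skilled candidates (skill = 0), one from each group.
candidates = np.array([[0.0, 0], [0.0, 1]])
p = model.predict_proba(candidates)[:, 1]
print(f"P(hired | group 0) = {p[0]:.2f}, P(hired | group 1) = {p[1]:.2f}")
# The model rates the historically favoured group higher, replicating
# the discrimination embedded in the training data.
```

Note that simply dropping the group column does not fix the problem, since other features can act as proxies for group membership; this is why critics locate the problem in the data and its history, not merely in the model.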
Ambiguous bots
The potential threat of computational technologies to human rights and to physical, political and digital security was highlighted in a recently published study, The Malicious Use of Artificial Intelligence. The concerns expressed in this University of Cambridge report must be taken seriously. But how should we deal with these threats? Are human rights ready for the era of robotics and AI?
There are ongoing efforts to update existing human rights principles for this era. These include the UN Framework and Guiding Principles on Business and Human Rights, attempts to write a Magna Carta for the digital age and the Future of Life Institute's Asilomar AI Principles, which identify guidelines for ethical research, adherence to values and a commitment to the longer-term beneficent development of AI.
These efforts are commendable but not sufficient. Governments and government agencies, political parties and private corporations, especially the leading tech companies, must commit to the ethical uses of AI. We also need effective and enforceable legislative control.
Whatever new measures we introduce, it is important to acknowledge that our lives are increasingly entangled with autonomous machines and intelligent systems. This entanglement enhances human well-being in areas such as medical research and treatment, in our transport system, in social care settings and in efforts to protect the environment.
But in other areas this entanglement throws up worrying prospects. Computational technologies are used to watch and track our actions and behaviours, trace our steps, our location, our health, our tastes and our friendships. These systems shape human behaviour and nudge us towards practices of self-surveillance that curtail our freedom and undermine the ideas and ideals of human rights.
And herein lies the crux: the capacity for dual use of computational technologies blurs the line between beneficent and malicious practices. What's more, computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If detached from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.