A new research centre opened its doors Wednesday at Britain's Cambridge University to look at the implications—good and bad—of artificial intelligence.
The centre will delve into AI applications ranging from increasingly "smart" smartphones to robot surgeons and "Terminator" style military droids.
Professor Stephen Hawking, who was due to speak at the centre's launch later on Wednesday, said: "The rise of powerful AI will be either the best or the worst thing ever to happen to humanity.
"We do not yet know which. The research done by this centre will be crucial to the future of our civilisation and of our species," he said.
Making sure AI is used to benefit humanity is the express aim of the Leverhulme Centre for the Future of Intelligence (CFI), funded by a £10 million (11.2 million euro, $12.3 million) grant from the Leverhulme Trust.
A collaboration between the universities of Cambridge and Oxford, Imperial College London, and the University of California, Berkeley, the CFI will see researchers from multiple disciplines work with industry representatives and policymakers on projects ranging from the regulation of autonomous weapons to the implications of AI for democracy.
"AI is hugely exciting. Its practical applications can help us to tackle important social problems, as well as easing many tasks in everyday life," said Margaret Boden, a professor of cognitive sciences and consultant to the CFI.
The technology has led to major advances in "the sciences of mind and life", she said, but, misused, also "presents grave dangers".
"CFI aims to pre-empt these dangers, by guiding AI-development in human-friendly ways," she added.
Fears of robots breaking free from their creators have inspired a host of films and works of literature, "2001: A Space Odyssey" to name but one.
But such catastrophic scenarios aside, the development of AI, which could enable machines to carry out many tasks currently done by humans, directly threatens millions of jobs.
Freedom or destruction?
So will AI, which has already beaten human champions at chess and Go, ultimately leave humans on the sidelines?
"We don't need to see AI as replacing us, but can see it as enhancing us: we will be able to make better decisions, on the basis of better evidence and better insights," said Stephen Cave, director of the centre.
"AI will help us to learn about ourselves and our environment—and could, if managed well, be liberating."
With this in mind, ethics will be one of the CFI's key fields of research.
"It's about how to ensure intelligent artificial systems have goals aligned with human values" and ensure computers don't evolve spontaneously in "new, unwelcome directions", Cave said.
"Before we delegate decisions in important areas, we need to be very sure that the intelligence systems to which we are delegating are sufficiently trustworthy."
The opening of the research centre comes at a time when major international groups have competing ambitions in AI.
Google has integrated the technology into its new phone, Apple and Microsoft offer AI-driven personal assistants, while Sony and Volkswagen have also invested in AI development.