Ethics Committee established to help translate responsible AI theory into practice

The responsible use of algorithms and data is paramount for the sustainable development of machine intelligence applications, as concluded by the recent House of Lords AI Committee report. However, at present there is a gap between theory and practice, between the ‘what’ of responsible AI and the ‘how’. There is demand from organisations of all sizes for help in defining and applying ethical standards in practice.

The Machine Intelligence Garage Ethics Committee, chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, will convene some of the foremost minds in AI and data ethics to address this need. It will comprise two elements: the Steering Group, which will oversee the development of principles and tools to facilitate responsible AI in practice, and the Working Group, which will work closely with start-ups developing their propositions through Digital Catapult’s Machine Intelligence Garage programme.

While the programme itself is designed to offer access to expertise and computational power, the Working Group’s collaboration with Machine Intelligence Garage start-ups aims to ensure that the Committee’s work is tested and grounded in practice.

Dr. Jeremy Silver, CEO of Digital Catapult, comments: “The role of the Machine Intelligence Garage Ethics Committee extends beyond regulation. This group of leading thinkers will be working hands-on with cohorts of AI companies to help ensure that the products and services they deliver have an ethical approach in their design and execution.

“A number of other organisations are also approaching these issues, notably the Ada Lovelace Institute and the Centre for Data Ethics and Innovation, with whom we look forward to collaborating. This eminent group is so keen to work with Digital Catapult because we are closer to the street, working with real companies developing real machine learning and AI applications in which ethical issues will become more readily apparent and addressable.”

Professor Floridi adds, “The development of AI is accelerating – and every day we’re witnessing new proof of its huge potential. However, its development and applications have significant ethical implications, and we would be naïve not to deal with them. I’m honoured to be leading such a noteworthy group to deliver a set of principles and tools to guide the ethical development and use of AI moving forward.”

Now convened, the Machine Intelligence Garage Ethics Committee will begin work refining its guiding principles for responsible AI development. According to Digital Catapult, this will enable companies to evaluate their work for risks, benefits, compliance with data and privacy legislation, and social impact and inclusiveness, among other criteria. The first working principles will be delivered in September 2018.

The Committee has stated it would welcome feedback and opportunities for collaboration. Contact us at [email protected].

Steering Group Members:

  1. Luciano Floridi (Chair), Professor of Philosophy and Ethics of Information at the University of Oxford
  2. William Blair, Professor of Financial Law and Ethics at Queen Mary, University of London
  3. Wendy Hall, Regius Professor of Computer Science at the University of Southampton
  4. Hayaatun Sillem, CEO at the Royal Academy of Engineering
  5. Jeni Tennison, CEO at the Open Data Institute
  6. Jo Twist, CEO at UKIE (The Association for UK Interactive Entertainment)

Working Group Members:

  1. Shahar Avin, Research Associate at the Centre for the Study of Existential Risk (CSER)
  2. Josh Cowls, Data Ethics Researcher at The Alan Turing Institute
  3. Christine Henry, Product Manager, Real World Insights at IQVIA
  4. Laura James, Entrepreneur in Residence at Cambridge Computer Lab
  5. Burkhard Schafer, Professor of Computational Legal Theory at the University of Edinburgh
  6. Hetan Shah, Executive Director at the Royal Statistical Society