Accidents involving driverless cars, predicting recidivism among criminals, influencing elections through news filters: algorithms are involved everywhere. Should governments step in?
Yes, says Markus Ehrenmann of Swisscom.
Current progress in big-data processing and machine learning is not always to our advantage. Some algorithms already put people at a disadvantage today and will have to be regulated.
For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether to put the lives of its passengers at risk or to endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system will be rendered ineffective.
In many US states, programs help decide the length of prison sentences given to criminals. This enables the state to lower the recidivism rate and prison costs – but only on average. In individual cases, the judgements passed by decision-making algorithms can be disastrously wrong – such as when skin colour or place of residence is used as an input variable.
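To make the mechanism concrete, here is a deliberately simplified sketch (all weights and postcodes are invented; this is not modelled on any real sentencing tool). A score that accepts place of residence as an input variable penalises defendants from certain neighbourhoods even when their criminal records are identical:

```python
# Toy recidivism 'risk score' with invented weights - for illustration only.
# Because postcode often correlates with ethnicity, using it as an input
# variable lets the neighbourhood, not the person, drive the prediction.

def risk_score(prior_offences: int, postcode: str) -> float:
    """Return a toy risk score; higher means 'riskier'."""
    HIGH_RISK_POSTCODES = {"10001", "60621"}  # hypothetical postcodes
    score = 0.2 * prior_offences
    if postcode in HIGH_RISK_POSTCODES:
        score += 0.5  # penalty attached to where the defendant lives
    return score

# Two defendants with identical records receive different scores:
print(risk_score(1, "60621"))  # 0.7
print(risk_score(1, "94301"))  # 0.2
```

Simply deleting the postcode input is rarely enough in practice, since other variables (income, employment history) often correlate with it and reintroduce the same bias.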
Searching for 'professional hairstyle' and 'unprofessional hairstyle' in the US version of Google brings up images of light-skinned women and dark-skinned women respectively – an example of 'algorithmic bias'. The data pool from which algorithms draw their decisions is not always correct. Even if the algorithms use a large number of texts as a basis for their decisions, cultural distortions still cannot be eliminated. Stereotypes discriminate. Furthermore, data always refers to the past, and thus allows only limited assertions about the future.
People have a right to an explanation of the decisions that affect them. And they have a right not to be discriminated against. This is why we have to be in a position to comprehend the decision-making processes of algorithms and, where necessary, to correct them. The same applies to the ranking mechanisms of the big social networks. What's dangerous about them is not their biased selection of media reports, but the fact that their mode of operation remains hidden from us. Public and private organisations are already working on solutions for the 'debiasing' of algorithms and on models to monitor them. The great benefits of innovation in artificial intelligence mustn't be stifled, but our rights still have to be protected. The EU General Data Protection Regulation (GDPR), which comes into force in 2018, offers a sensible, proportionate form of regulation.
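What the monitoring mentioned above could look like in its simplest form: the sketch below compares favourable-outcome rates between two groups and flags the disparity. The data is invented, and the 0.8 threshold borrows the 'four-fifths rule' from US employment-discrimination practice; real audits are considerably more involved:

```python
# Minimal bias audit: compare rates of favourable outcomes across groups.
# Data and threshold are illustrative only.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: (group label, favourable outcome?) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favourable[group] += ok  # bools count as 0/1
    return {g: favourable[g] / totals[g] for g in totals}

decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                   # {'A': 0.8, 'B': 0.5}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.62 < 0.8 -> flag for review
```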
No, says Mouloud Dey of SAS.
We need to be able to audit any algorithm that is potentially open to inappropriate use. But creativity mustn't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, not assumed to have bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered: it is generally not the computer program that is at fault but the way it is used.
It's the seemingly mysterious, ill-intentioned and quasi-automatic algorithms that usually get the blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable purpose, such as military drones, may also have a clearly useful application that raises no questions.
We may criticise Google's handling of our data, but it would have been a huge shame if the company had folded 20 years ago over unresolved privacy and data-protection issues. New legislation may not even be required. Take Pokémon Go: the law already prohibits me from endangering other people's lives while playing it.
There are also obstacles to introducing a regulator: the complexity of the mandate, the burden on innovation, and the risk that its work will lag behind the pace of technological progress. Users must also play their part. I may work in the digital sector, but I'm not on Facebook, as I don't see its utility. You will, however, find me on LinkedIn, even though its algorithms don't differ fundamentally.
Citizens should know how algorithms affect them. But let's be frank: the average mortal is not capable of verifying one. In the end, we must trust others to do so for us. In this market especially, self-regulation can succeed, given how close clients are to companies and the enormous pressure clients can exert on them. It's a company's responsibility to explain very clearly how a system works. Once again, problems arise from the use of a program, not from its mere existence.

Mouloud Dey is the director of Innovation and Business Solutions at SAS France and a member of the Scientific Council of the Data ScienceTech Institute at Nice Sophia Antipolis University.