The way it is now, we have almost no way to know how our data are being shared and used. Credit: Shutterstock

Most of us know tech platforms such as Facebook and Google track, store and make money from our data. But there are constantly new revelations about just how much of our privacy has been chipped away.
The latest comes from the Wall Street Journal, which dropped a bombshell on Friday when its testing revealed many popular smartphone apps have been sending personal data to Facebook. That reportedly includes data from heart rate monitoring and period tracking apps: "Flo Health Inc.'s Flo Period & Ovulation Tracker, which claims 25 million active users, told Facebook when a user was having her period or informed the app of an intention to get pregnant, the tests showed."
When we use technologies that track our data, we enter a system governed by algorithms. And the more information we hand over, the more we become entwined with algorithmic systems we don't control.
We urgently need protections that look after our personal interests in this system. We propose the concept of "algorithmic guardians" as an effective solution.
How do data tracking algorithms work?
Every day, without our knowledge, technology companies use our data to predict our habits, preferences and behaviour. Algorithms operating behind everything from music recommendation systems to facial recognition home security systems use that data to build a digital twin of each of us.
We are then served content and advertising based on what the algorithm has decided we want and need, with no explanation of how it reached that decision and no way for us to influence the decision-making process.

And our interests likely come second to the interests of whoever developed the algorithm.
Contrary to what the concept suggests, we don't directly control "personalisation", and we have almost no way to protect our autonomy in these transactions of data and decision making.
What is an 'algorithmic guardian'?
We have proposed the concept of algorithmic guardians, which could be programmed to manage our digital interactions with social platforms and apps according to our personal preferences.
They are envisaged as bots, personal assistants or hologram technology that accompany us everywhere we venture online, and alert us to what's going on behind the scenes.
These guardians are themselves algorithms, but they would work for us alone. Like antivirus software vendors, those that fail to protect users will go out of business, while those that earn a reputation as trusted guardians will succeed.
In practical terms, our guardians would make us recognisable or anonymous when we choose to be. They would also alter our digital identity according to our wishes, so that we could use different services with different sets of personal preferences. Our guardians would keep our personal data in our own hands by making sure our backups and passwords are safe. We would decide what is remembered and what is forgotten.
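As a rough sketch of the idea, the per-service preferences a guardian enforces could be as simple as a profile it consults before sharing anything. The Python below is purely illustrative; every name and field in it is a hypothetical assumption, not a real system:

```python
# Hypothetical sketch: per-service identity profiles a guardian might consult.
# All names and fields are invented for illustration.

PERSONAS = {
    "health_app": {
        "identity": "anonymous",    # no real name or email shared
        "share_location": False,
        "remember_history": False,  # guardian wipes session data afterwards
    },
    "social_network": {
        "identity": "public",
        "share_location": False,
        "remember_history": True,
    },
}

def persona_for(service: str) -> dict:
    """Return the identity preferences to enforce for a service,
    defaulting to the most private settings when none are set."""
    return PERSONAS.get(service, {
        "identity": "anonymous",
        "share_location": False,
        "remember_history": False,
    })

print(persona_for("health_app"))  # -> anonymous, nothing shared or remembered
```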
An algorithmic guardian would:
- alert us if our location, online activity or conversations were being monitored or tracked, and give us the option to disappear
- help us understand the relevant points of long and cumbersome terms and conditions when we sign up to an online service
- give us a simple explanation when we don't understand what's happening to our data between our computer, phone records and the dozens of apps running in the background on our phones
- notify us if an app is sending data from our phones to third parties, and give us the option to block it in real time (a rough sketch of such a rule follows this list)
- tell us if our data has been monetised by a third party, and for what purpose.
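To make that last group of functions concrete, here is a minimal sketch of the kind of rule a guardian might apply to an app's outbound traffic. It is illustrative Python only; the field names, domains and request format are hypothetical assumptions, not a real interception API:

```python
# Hypothetical sketch: a guardian rule that screens an app's outbound traffic.
# Field names, domains and the request format are invented for illustration.

SENSITIVE_FIELDS = {"heart_rate", "menstrual_cycle", "location"}
THIRD_PARTY_DOMAINS = {"graph.facebook.com", "ads.example-tracker.com"}
USER_CONSENT = {}  # e.g. {("period_app", "graph.facebook.com"): True}

def screen_request(app: str, destination: str, payload: dict) -> bool:
    """Allow or block an outbound request, alerting the user when
    sensitive data is headed to a third party without consent."""
    leaked = SENSITIVE_FIELDS & payload.keys()
    if destination in THIRD_PARTY_DOMAINS and leaked:
        if not USER_CONSENT.get((app, destination), False):
            print(f"ALERT: {app} tried to send {sorted(leaked)} "
                  f"to {destination} -- blocked.")
            return False
    return True

# Example: a period-tracking app sending cycle data to a tracker.
screen_request("period_app", "graph.facebook.com", {"menstrual_cycle": "day 3"})
```

In a real guardian this check would sit between the phone's apps and the network, but the principle is the same: the user's consent rules, not the app's defaults, decide what leaves the device.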
We envision algorithmic guardians as the next generation of current personal assistants such as Siri, Alexa or Watson. Thanks to wearable technology and advanced human-computer interaction models, they will be constantly and easily accessible.
Our digital guardians need not be intelligent in the same way as humans. Rather, they need to be smart in relation to the environment they inhabit: recognising and understanding the other algorithms they encounter.
Even if algorithmic guardians, unlike third-party algorithms, are user-owned and entirely under our control, being able to understand how they work will be a priority in making them fully trustworthy.
When will algorithmic guardians arrive?
The technology to enable algorithmic guardians is emerging as we speak. What's lagging is the widespread realisation that we need it.
You can see primitive versions of algorithmic guardian technology in digital vaults for storing and managing passwords, and in software settings that give us some control over how our data is used.
Explainable machine learning is a hot topic right now, but still very much in the research domain. It addresses the "black box" problem, where we have no insight into how an algorithm actually arrived at its final decision. In practice, we may know that our loan application is denied, but we don't know if it was because of our history of unpaid power bills, or because of our surname.
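By contrast, a transparent model can explain itself. The toy Python example below shows what such an explanation might look like for a loan decision; the weights, features and threshold are invented purely for illustration, not drawn from any real lender:

```python
# Toy example: a transparent linear scoring model that explains its decision.
# Weights, features and the threshold are invented for illustration.

WEIGHTS = {
    "unpaid_power_bills": -2.0,   # each unpaid bill counts against you
    "years_at_address": 0.5,
    "income_thousands": 0.1,
}
THRESHOLD = 1.0

def score_and_explain(applicant: dict) -> None:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approved" if total >= THRESHOLD else "denied"
    print(f"Loan {decision} (score {total:.1f}, threshold {THRESHOLD}).")
    # Rank the inputs by how strongly they pushed the outcome.
    for feature, value in sorted(contributions.items(),
                                 key=lambda kv: abs(kv[1]), reverse=True):
        print(f"  {feature}: {value:+.1f}")

score_and_explain({"unpaid_power_bills": 3,
                   "years_at_address": 2,
                   "income_thousands": 40})
# -> Loan denied; unpaid power bills dominate the score.
```

Real explainable machine learning techniques go far beyond a linear model like this one, but the goal is the same: tying a decision back to the inputs that actually drove it.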
Without this accountability, key moments of our lives are mediated by unknown, unseen, and arbitrary algorithms. Algorithmic guardians could take on the role of communicating and explaining these decisions.
Now that algorithms are pervasive in daily life, explainability is no longer optional; it is an area urgently requiring further attention.
We need to develop specific algorithmic guardian models in the next couple of years to lay the foundations for open algorithmic systems over the coming decade. That way, if an app wants to tell Facebook you're pregnant, you'll know about it before it happens.