COVID-19 Contact Tracing App Series: Olivia Gambelin
19 Jun 2020
Founder, Ethical Intelligence and AI Ethicist
As we transition into the new normal, we must continuously ask ourselves what role we have permitted technology to take in the fight against the virus and whether or not this is a role that should exist in a post-pandemic world.
The ethical debate around Digital Contact Tracing (DCT) has focussed mainly on whether it threatens individual privacy and whether the use of such an application should be mandated. However, this framing obscures important questions about the interface between health authorities and technology platforms: what irreversible doors into data surveillance are we opening in the name of public health?
DCT applications track and retain information about health and contacts at a time of heightened public awareness of technological privacy risks. This awareness creates a privacy-first perspective, leading individuals to ask how much data they are willing to share with entities such as government bodies. Historically, however, privacy has not been a significant constraint on manual contact tracing, as even strong legislation recognises the legitimate need for public health authorities' access to protected health information. From the public health perspective, then, the health and contact information technology providers can access is a potential resource in fighting the virus.
Although it is a potential resource, there is significant doubt as to whether it is a legitimate one, as we have come to realise that DCT faces inherent efficacy and equity challenges. This, in turn, forces the question: if there is doubt surrounding the efficacy of DCT apps, what kind of door to health data are we opening for technology platforms without any guarantee of their usefulness to public health authorities?
Prior to the pandemic, any information an application could track about one individual coming into close contact with another would have been considered product data. In other words, it would have been just another dataset collected by a technology product that, when used in the right context, could yield some insight or other into human behaviour. The important thing to note here, though, is that this is a dataset society would have immediately rejected due to the implications for surveillance it carries.
Now, deep into the coronavirus crisis, we are singing a different tune. Suddenly, what was previously seen as indefensible product data has been redefined as essential health data. Of course, unprecedented times lead to unprecedented measures, such as a contextual shift for a certain highly sensitive dataset. But with this we need to ensure that we are asking in whose hands this data rests and on what terms public health authorities are granted access to it.
Let’s take the NHSX contact tracing application as an example. When the app was first introduced to the discussion, the NHSX made the decision to pursue a centralised system. This went against Apple and Google’s decentralised approach, which the tech giants claimed to take for its privacy-preserving properties. As soon as Apple and Google released their joint statement announcing the development of APIs and eventual operating system changes that would enable decentralised contact tracing applications, the NHSX centralised approach didn’t stand a chance in the eyes of the public. Even though the NHS itself is a trusted and credible source of expertise in health, it is not regarded as a leader in technology.
This is not an argument for or against centralised or decentralised systems, nor is it an argument for or against either the NHSX or Apple and Google’s respective approaches to contact tracing. The purpose of this example is to highlight the level of control tech giants have over access to highly sensitive data, and their subsequent influence over public opinion.
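To make the architectural distinction concrete, the following is a deliberately simplified sketch of the decentralised model Apple and Google advocated. It is not the actual Exposure Notification protocol (which derives rotating identifiers from cryptographic keys); the `Device` class and its methods are illustrative stand-ins. The point it shows is that matching happens on the user's own device against a published list of identifiers from people who tested positive, so no central server ever sees the contact graph — whereas in a centralised design, devices upload their observed contacts for the server to match.

```python
import secrets

class Device:
    """Toy model of a device in a decentralised contact tracing scheme."""

    def __init__(self):
        self.own_ids = []       # rotating identifiers this device has broadcast
        self.observed_ids = []  # identifiers heard from nearby devices

    def broadcast_id(self):
        # The real protocol derives identifiers from rotating keys;
        # fresh random bytes serve as a stand-in here.
        rpi = secrets.token_hex(16)
        self.own_ids.append(rpi)
        return rpi

    def hear(self, rpi):
        # Observed identifiers stay on the device, never on a server.
        self.observed_ids.append(rpi)

    def check_exposure(self, published_infected_ids):
        # Matching is computed locally against the published list of
        # identifiers belonging to users who tested positive.
        return any(rpi in self.observed_ids for rpi in published_infected_ids)

# Alice and Bob come into proximity; Bob's phone hears Alice's identifier.
alice, bob = Device(), Device()
bob.hear(alice.broadcast_id())

# Alice tests positive and consents to publishing her identifiers.
published = list(alice.own_ids)

print(bob.check_exposure(published))  # True: the match is computed on Bob's device
```

In this model the central authority only ever holds anonymous identifiers volunteered by infected users, which is precisely why the decentralised approach was framed as privacy-preserving in the public debate described above.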
So, as we transition into the new normal, we must continuously ask ourselves what role we have permitted technology to take in the fight against the virus and whether or not this is a role that should exist in a post-pandemic world. This is the only way we can ensure that the unprecedented changes of the virus today do not become the toxic normal standard of tomorrow.
Olivia Gambelin was born and bred in Silicon Valley, leading her to begin her career working in digital marketing for tech startups. Following her undergraduate studies, Olivia moved overseas to expand her international network as a GDPR and data privacy researcher in Brussels. This experience prompted her return to academia to obtain her MSc in Philosophy, with a concentration in AI Ethics, at the University of Edinburgh. During her time in Scotland, Olivia co-founded the Beneficial AI Society and completed her dissertation with distinction on the effects of probability on the moral responsibility of autonomous cars. During the final months of her degree, Olivia founded Ethical Intelligence, an ethics consultancy specialising in emerging technology. Olivia works as the Chief Executive Officer of Ethical Intelligence, where she leads a remote team of over thirty experts in the Tech Ethics field. She sits on the Advisory Board of Tech Scotland Advocates and actively contributes to the development of AI Ethics in business through keynote speaking and thought pieces.