Since the introduction of digital tools into healthcare, there's been a simmering tension between the desire to make optimal use of new technology and the desire to protect patient privacy.
And while we may be only a few months into the global battle with the coronavirus, it already seems clear that one by-product of the struggle will be a sharp escalation of that tension.
In a recent BMJ editorial, three academics – two British and one Australian – argue that among “the most consequential transformations” the healthcare sector may experience from rapidly expanding efforts to track and combat the coronavirus may be the “new health surveillance technologies that use machine learning and automated decision making to parse people’s digital footprints, identify those who are potentially infected, trace their contacts, and enforce social distancing.”
Their point is not to warn against these efforts in their entirety; they certainly recognize the potential value of an array of existing technologies being rapidly “repurposed” as tools to track patients and specific populations, both to determine who is infected and to follow where those patients go after they are identified.
But their interest and support are accompanied by a number of related concerns.
One “obvious concern” they cite, for example, “is the high risk of false positive results generated by unreliable, biased, or non-transparent algorithms. Another is ‘surveillance creep,’ when surveillance developed for a limited purpose, such as fighting a pandemic or filming traffic violations, becomes used in ever more pervasive and permanent ways.”
As they observe, there is already no shortage of observers and activists wary “that much of the surveillance we accept today as ‘exceptional means for exceptional times’ is here to stay.”
But a potentially more disturbing observation they make concerns the impact extended surveillance can have on a society over time.
“Decades of data show that individuals and societies can only thrive in environments that satisfy basic psychological needs, including autonomy—a sense of having volition and choice in your actions,” they explain. “Surveillance can engender a sense of being controlled and be experienced as thwarting autonomy, with negative effects on motivation and wellbeing. It can also spur people to try to evade surveillance and reassert their autonomy.”
Some of the safeguards they suggest against such impacts are techniques already in use in other categories of health IT, with “opt-in” approaches and redoubled privacy protections foremost among them.
But more broadly, they say, “a key challenge for designers and engineers developing health surveillance systems is to align products with the values of those under surveillance and to communicate to wider society both these underlying values and the reasons for surveillance.”
And that challenge is only going to grow, they note, as the ethical questions raised by health surveillance multiply, and quickly.