Report: Post-COVID, AI needs improved infrastructure

The use of AI beyond the pandemic will require significant assessment of the technology as well as ethical considerations.
Jeff Rowe

Given AI’s potential for, among many other things, expediting and enhancing diagnoses, it’s no surprise providers and researchers were eager to turn to it as a new tool in the battle against COVID-19.  

But if AI is going to be a significant part of the public health arsenal beyond COVID-19, policymakers will need to strengthen the training and validation of AI systems, as well as enhance data governance and privacy protections.

That’s according to a new report from the American Association for the Advancement of Science (AAAS), which recognized the role AI has played during the pandemic but also noted that the flurry of new uses has made oversight of the technology challenging, to say the least.

“At the onset of COVID-19, there was a clear demand for using AI to fight the pandemic. However, no one was looking at the entire picture of how AI was in fact deployed and what ethical or human rights questions were arising from their implementation,” said Jessica Wyndham, director of the AAAS Scientific Responsibility, Human Rights and Law Program and a co-author of the report. “We wanted to see the implications of these selected applications, paying particular attention to underserved populations. We wanted to see what worked, what didn’t and what we could learn from that for any future health crises.”

Among other observations, the report emphasizes some of the technical and ethical concerns that could come with the use of AI applications after the pandemic has subsided.

“Of particular significance from an ethics and human rights perspective are certain details of the implementation of the contact tracing applications, in particular, whether the application uses a centralized database, its broadcasting method, and the nature of participation (mandatory or voluntary),” the authors stated, adding that expanding the use of AI-powered contact tracing applications must include ensuring their scientific and technical validation.
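To illustrate the design distinction the authors point to, the sketch below shows, in hypothetical Python, how a decentralized contact-tracing design can work: devices broadcast short-lived random identifiers and exposure matching happens on the device, rather than in a centralized database of encounters. The class and function names are illustrative assumptions, not taken from the report or from any particular application.

```python
import secrets

# Hypothetical sketch of a *decentralized* contact-tracing design:
# each device broadcasts short-lived random identifiers and keeps its
# own contact log locally; no central database of encounters is built.

def new_rolling_id() -> str:
    """Generate a random, unlinkable identifier to broadcast (e.g., over Bluetooth)."""
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.own_ids = []        # identifiers this device has broadcast
        self.heard_ids = set()   # identifiers heard from nearby devices

    def rotate_id(self) -> str:
        rid = new_rolling_id()
        self.own_ids.append(rid)
        return rid

    def hear(self, rid: str):
        self.heard_ids.add(rid)

    def check_exposure(self, published_ids: set) -> bool:
        # Matching happens on the device: the health authority publishes only
        # identifiers tied to confirmed cases, never the full contact graph.
        return bool(self.heard_ids & published_ids)

# Usage: two phones near each other; one user later tests positive.
alice, bob = Phone(), Phone()
bob.hear(alice.rotate_id())           # Bob's phone records Alice's rolling ID
published = set(alice.own_ids)        # Alice consents to publish her identifiers
print(bob.check_exposure(published))  # True -> Bob is notified locally
```

In a centralized design, by contrast, the contact log itself would be uploaded to a government or health-authority server, which is precisely the choice the report flags as carrying heavier ethics and human rights implications, along with whether participation is mandatory or voluntary.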

In addition to contact tracing applications, the use of AI-driven medical triage solutions raises several concerns around bias and training. In particular, gathering enough data to adequately train these AI models is a major obstacle for healthcare stakeholders.

“Although the need for data sharing is understandable, the related privacy and ethical concerns call for a careful balancing act. In addition, the gathering and sharing of health and even biological data without patients’ consent has been historically abused. The issue of data sharing is important as it exposes deep mistrust in government, particularly in African American communities,” the report stated.

“The human impacts of the AI-based technologies used in the context of the pandemic are potentially immense, be they at the individual scale in the context of medical triage, or the societal scale in the context of contact tracing,” the report concluded.