Why patient education is key for AI implementation

If AI projects are going to succeed, says one stakeholder, patients must be encouraged to share their health information while being thoroughly educated about the benefits of doing so.
Jeff Rowe

AI is in the process of transforming disease diagnosis. The full implementation of AI will require access to vast amounts of personal health data. Access to health data raises all sorts of privacy concerns.

You might call the brief paragraph above a widely used formula in AI discussions these days, and the fact that it is so widely used suggests it describes a problem no one is quite sure how to resolve.

Take a recent commentary by Ross Upton, CEO and Academic Co-Founder of Ultromics, a developer of cardiovascular diagnostic tools.

He notes, for example, that “echo-based tools combined with deep clinical insight, machine learning, and some of the largest echo datasets in the world could reduce misdiagnosis of heart disease by more than 50 percent.”

But then he pivots to the aforementioned problem: training AI algorithms effectively requires large amounts of sensitive information.

A recent report from the Academy of Medical Royal Colleges on the use of artificial intelligence in healthcare sums up the conundrum nicely: “The UK Government and its health and social care systems have a legal duty to maintain the privacy and confidentiality of its citizens… However, the development of AI and machine learning algorithms relies on the use of large datasets.”

As Upton sees it, the solution to the problem is education for the public. “The best way to convince people to want to share their data,” he argues, “is by generating greater awareness of AI technology developments. Many people’s idea of AI is abstract at best, and at worst is the conception of a malevolent supercomputer. In reality, AI will manifest simply as smarter software running in the background, offering more accurate insights to the human practitioner.

“Ongoing trials using the existing ethical framework continue to build the case for using data for AI training, and greater exposure of skeptical patients to AI-driven medical technology will improve the perception of this technology as it becomes increasingly widespread.”

Interestingly, he points out that people aren’t necessarily “concerned about the use of their data for the development of life-saving technology, but rather that companies will make money from their data.”

The bottom line, he says, is “that most patients will consent to the use of their data if they fully understand the situation.” That situation, simply put, is that by allowing the use of their data, patients help healthcare keep moving forward with AI and other technological developments, improving outcomes both for themselves and for other patients.