Philips exec releases five ‘guiding principles’ for ethical AI development

Among other things, he says, the development of AI-enabled solutions in partnership – between providers, payers, patients, researchers, and regulators – is a way of ensuring optimal transparency.
Jeff Rowe

The quest for more and better AI across the healthcare sector is moving along at a brisk clip, to put it mildly, but as healthcare stakeholders come up to speed on the technological possibilities for their organizations, more than a few ethical questions remain about how best to implement and use AI.

Likely in part due to his company’s own prominent position in both the technological effort and the ethical discussion, Henk van Houten, executive vice president and chief technology officer for Royal Philips, recently published a list of five guiding principles for the design and responsible use of AI in healthcare and personal health applications in a blog post on his company’s website.

As he sees it, questions such as “To what extent can we rely on AI algorithms when it comes to matters of life and death or personal health and well-being?” are too important to be left for further down the AI development road. To the contrary, he says, “They compel us to think through proactively – as an industry and as individual actors – how we can best advance AI in healthcare and healthy living to the benefit of consumers, patients, and care professionals, while avoiding unintended consequences.”

The five principles – well-being, oversight, robustness, fairness, and transparency – all stem from the basic viewpoint that AI-enabled solutions should complement and benefit customers, patients, and society as a whole.

First, says van Houten, well-being should be the top priority when developing healthcare AI solutions.

“Much has already been written about how AI can act as a smart assistant to healthcare providers in diagnosis and treatment,” he writes, “(b)ut to truly benefit the health and well-being of people for generations to come, we need to think beyond the current model of reactive ‘sick care’. With AI, we can move toward true, proactive health care.”

His next priority is oversight, and here he calls for bringing together AI engineers, data scientists, and clinical experts “to ensure proper validation and interpretation of AI-generated insights.”

Third is “robustness” in the form of “a robust set of control mechanisms (that) will help to instill trust while mitigating potential risks.”

For example, he says, “One way of preventing (inadvertent) misuse is to monitor the performance of AI-enabled solutions in clinical practice – and to compare the actual outcomes to those obtained in training and validation. Any significant discrepancies would call for further inspection.”
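The monitoring approach van Houten describes – comparing outcomes in clinical practice against those obtained during training and validation, and flagging significant discrepancies – can be illustrated with a minimal sketch. This is a hypothetical example, not Philips code; the function name, the choice of a two-proportion z-test, and the threshold value are all assumptions made for illustration.

```python
import math

def discrepancy_flagged(val_successes, val_total,
                        live_successes, live_total,
                        z_threshold=3.0):
    """Flag a significant gap between validation and live outcome rates.

    Uses a two-proportion z-test with a pooled standard error; a |z|
    beyond the threshold suggests the deployed model's performance has
    drifted from what was observed during validation.
    """
    p_val = val_successes / val_total
    p_live = live_successes / live_total
    # Pooled proportion across both samples
    p_pool = (val_successes + live_successes) / (val_total + live_total)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / val_total + 1 / live_total))
    z = (p_live - p_val) / se
    return abs(z) > z_threshold

# Live accuracy close to validation accuracy: no flag
print(discrepancy_flagged(920, 1000, 450, 500))  # → False
# Live accuracy well below validation accuracy: flag for inspection
print(discrepancy_flagged(920, 1000, 380, 500))  # → True
```

In practice such a check would run continuously on production data; the threshold and test would be tuned to the clinical context, since the cost of missing real drift differs from the cost of a false alarm.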

Next comes “fairness,” and van Houten is certainly not alone in this regard, as many stakeholders have lately voiced concerns about the need to avoid data bias and to accurately represent the concerns and needs of different populations.

Finally, he says, “as a fifth and final principle, public trust and wider adoption of AI in healthcare will ultimately depend on transparency. Whenever AI is applied in a solution, we need to be open about this and disclose how it was validated, which data sets were used, and what the relevant outcomes were. It should also be clear what the role of the healthcare professional is in making a final decision.”

Nobody has all the answers, he recognizes, and new questions are bound to arise, but, he writes, “(t)aken together, I believe these five principles can help pave the way towards responsible use of AI in healthcare and personal health applications.”