What’s in your AI ethics policy?
It’s probably safe to say many healthcare providers couldn’t answer that question because they don’t have one — at least not one that’s been formalized and published for employees and patients alike.
But in a recent blog post, Cerys Wyn Davies, a partner at the UK-headquartered global law firm Pinsent Masons, argues that because the use of AI raises numerous ethical issues for organizations, healthcare providers among them, an official policy is critical to building the trust that clients need and increasingly expect.
It’s one thing for life sciences companies and healthcare providers to understand their legal and regulatory obligations while recognizing the ethical guidance that already exists, she writes, “but quite another to form and articulate a policy that allows the organisation to meet the requirements and expectations in practice.”
Much of her post surveys the ethics policies that have been developed and adopted by public bodies such as the European Commission, as well as by some large pharmaceutical and life sciences companies.
AstraZeneca, for example, has built its policy around five core principles: explainable and transparent; fair; accountable; human-centric and socially beneficial; and private and secure.
The company promises to be “open about the use, strengths and limitations of our data and AI systems”, she says, “to ensure humans oversee AI systems, to ensure data and AI systems are secure, and to ‘act in a manner compatible with intended data use’. It also states that it anticipates and mitigates the impact of potential unfavourable consequences of AI through testing, governance, and procedures, and further promises to learn lessons from ‘unintended consequences’ materialising from its use of AI.”
Merck, on the other hand, has taken a different approach.
According to Wyn Davies, “currently, a bioethics advisory panel, and a subsidiary digital ethics advisory panel, guide the company’s approach to tackling ethical issues that arise in its business and research. The company is developing a new code of digital ethics and stated that it believes patients and healthcare facilities will be ‘more likely to share data with a partner that adheres to a clear set of guidelines’.”
Regardless of the specifics, Wyn Davies cautions, any policy, and the trust that will ideally come with it, “is only as good as the governance framework that underpins it. In the context of AI, governance around data use and sharing is of particular importance, so additional thought must be given to governance models that facilitate access to large data sets to improve the quality of data input and the integrity of outcomes, while ensuring patients have control over how their data is used, who has access to it and for what purposes.”
Photo by imaginima/Getty Images