While technology is usually intended to introduce greater convenience or efficiency into our lives, it also tends to bring no small amount of anxiety. And, far from being an exception, Artificial Intelligence (AI) may be the biggest example yet.
As tech writer Mark Samuels explained in a recent article for ZDNet, "(r)ather than just being a technology that people will themselves use, some experts believe AI could instead help to replace human decision-making at work and at home."
The question for AI developers and advocates, then, including those in healthcare, is “how can businesses work to reduce fears and create AI systems that exploit big data ethically?”
While Samuels looks at AI from a broad-based business perspective, much of what he discusses is easily applicable to healthcare.
For example, he quotes Anastasia Dedyukhina, founder of the London-based tech consultancy Consciously Digital, who argued during a panel discussion of tech experts at the recent Big Data World event in London that businesses must help develop a better understanding of AI.
"More decisions are being taken for us by machines, anything from how much you pay for health insurance to who your life partner should be," Dedyukhina noted. "As technology is affecting more elements of our lives, we should all have a say in this – we need to make sure people understand the consequences of AI, what it is and how people are different from computers.
"The ethical way to collect data is to do it in a way that actually improves the customer experience and to explain to them why you are collecting this information," she said. "The next step beyond that is to make it easy for your customers to opt out of data collection if they want to. Don't make it so complicated. Give control back to the customer."
Similarly, Adrian Baker, policy manager for AI at the British Heart Foundation (BHF), noted that patients “feel very differently about sharing data than the general public” – and that might be because patients have to consider how their information is being used to create the right health outcomes.
"The research highlights how everyone involved in the use of AI and big data must have wider discussions about the outcome you're looking for, such as better health, and then work backwards to issues like data sharing and information security. You should always start with the outcome," he said.
Baker suggested business leaders looking to ensure they focus on the right objectives for AI and data should consider establishing a public ethics board. “Just like companies have executive boards to make decisions, these ethics panels can help organizations that are using emerging technology to make publicly minded decisions.”
Bertie Müller, senior lecturer in computer science at Swansea University, agrees that public awareness is critical to the ethical development of emerging technology. "We clearly see how AI can create benefits, but we need to trust it," said Müller, whose research explores how organizations can create transparency around automation.
Perhaps ironically, given healthcare’s habit of lagging behind on tech trends, the sector may actually be in the lead when it comes to developing ethical standards for AI.
According to Baker, the UK National Health Service "is creating frameworks, guidance and standards that are reviewed and updated every six months. And I think this might be the way to go – to provide a framework or a set of standards that is agile enough to keep up with not only the pace of technological development, but also the public's view."