Developers need a thorough approach to eliminate AI bias

AI has the potential to make healthcare more accessible and efficient, says one industry expert, but it is also vulnerable to the myriad biases that have been entrenched in society for generations.
Jeff Rowe

How can a computer be biased?

One simple way of answering that question is to observe that the answers one gets out of a computer depend an awful lot on the quality of the data one puts in.

In a recent interview with Healthcare IT News, Henk van Houten, chief technology officer at Royal Philips, pointed out that “(w)e tend to take computer-based recommendations at face value, assuming that whatever output an AI algorithm portrays is objective and impartial. The truth is, humans choose the data that goes into an algorithm, which means these choices are still subject to unintentional biases that can negatively impact underrepresented groups.”

As an example, van Houten pointed to data “that doesn't sufficiently represent the target population. This can have adverse implications for certain groups. For example, women and people of color are typically underrepresented in clinical trials. As others have pointed out, if algorithms analyzing skin images were trained on images of white patients, but are now applied more broadly, they could potentially miss malignant melanomas in people of color.”

Another example, one much in the news, van Houten says, is the use of AI to help fight COVID-19. “Let's say you have an algorithm that is designed to prioritize care for COVID-19 patients. This could put populations lacking access to COVID-19 testing at a disadvantage, because if those populations are underrepresented in the training data, the algorithm may fail to factor in their needs and characteristics.”

In his view, one of the most important steps AI developers must take is to build “three types of diversity into every aspect of AI development.”

“First, diversity in people. We need to make sure that the people working on AI algorithms reflect the diversity of the world we live in. In a field that has historically been led by white male developers, we need to make every effort to encourage a more inclusive culture.”

Similarly, van Houten says, developers need to ensure diversity in data as they develop new tools. “Limited availability of high-quality data can be one of the biggest hurdles in developing AI that accurately represents a population. To promote the development of fair AI, we should aggregate robust, well-annotated and curated datasets across institutions in a way that protects patient privacy and captures diversity between and within demographic groups.”
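
To make that point concrete, here is a minimal sketch of what a representation check on a training set might look like before model development begins. The column name and reference population shares are purely illustrative assumptions, not figures from Philips or from the interview.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          reference_shares: dict) -> pd.DataFrame:
    """Compare each group's share in the data with its share in the target population."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = observed.get(group, 0.0)
        rows.append({
            "group": group,
            "share_in_data": round(share, 3),
            "share_in_population": expected,
            "ratio": round(share / expected, 2) if expected else None,
        })
    return pd.DataFrame(rows)

# Hypothetical usage with made-up skin-type categories and population shares:
# report = representation_report(train_df, "skin_type",
#                                {"I-II": 0.35, "III-IV": 0.40, "V-VI": 0.25})
# Groups with a ratio well below 1.0 are underrepresented relative to the
# population the model is meant to serve.
```

A check like this does not fix a skewed dataset, but it makes the skew visible early, while there is still time to gather additional data or adjust the intended scope of the tool.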

Finally, the metrics used to validate algorithms should also be diverse. “Developed algorithms require thorough validation to ensure they perform as intended on the entire target population. This means they need to be assessed using not only traditional accuracy metrics, but also relevant fairness metrics.”
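
What "relevant fairness metrics" means will vary by application, but the following sketch shows one common approach: report accuracy and sensitivity per group, and track the gap in true-positive rate between groups (sometimes called the equal-opportunity difference). The labels, predictions, and group names are illustrative assumptions.

```python
import numpy as np

def per_group_metrics(y_true, y_pred, groups):
    """Accuracy and sensitivity (true-positive rate) for each demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        acc = np.mean(y_pred[mask] == y_true[mask])
        positives = mask & (y_true == 1)
        tpr = np.mean(y_pred[positives] == 1) if positives.any() else float("nan")
        results[g] = {"accuracy": float(acc), "sensitivity": float(tpr)}
    return results

def equal_opportunity_gap(metrics):
    """Largest difference in sensitivity across groups; smaller is fairer."""
    tprs = [m["sensitivity"] for m in metrics.values()]
    return max(tprs) - min(tprs)

# Hypothetical usage:
# m = per_group_metrics(y_true, y_pred, patient_group)
# print(m, equal_opportunity_gap(m))
```

The point of such metrics is that a model can look acceptable on aggregate accuracy while missing far more true cases in one group than in another.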

And then there’s what happens after new AI tools are out on the market. Says van Houten, “continuous monitoring after market introduction will be necessary to ensure fair and bias-free performance. Ideally this would include finding a way to validate the representativeness of new learning data – in a way that respects ethical, legal and regulatory boundaries around the use of sensitive personal data.”
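
One simple form that post-market monitoring can take is checking whether the demographic mix of newly collected data still resembles the mix the model was trained on. The sketch below uses the population stability index for that comparison; the group shares and the 0.2 alert threshold are illustrative assumptions rather than anything prescribed by the article.

```python
import numpy as np

def population_stability_index(train_shares: np.ndarray,
                               new_shares: np.ndarray,
                               eps: float = 1e-6) -> float:
    """PSI between the training-time group mix and the newly observed mix."""
    train_shares = np.clip(train_shares, eps, None)
    new_shares = np.clip(new_shares, eps, None)
    return float(np.sum((new_shares - train_shares) *
                        np.log(new_shares / train_shares)))

# Hypothetical group shares, in the same group order for both arrays:
train_mix = np.array([0.35, 0.40, 0.25])    # mix in the training data
monthly_mix = np.array([0.48, 0.38, 0.14])  # mix in newly collected data
psi = population_stability_index(train_mix, monthly_mix)
if psi > 0.2:  # common rule-of-thumb threshold, assumed here for illustration
    print(f"PSI={psi:.3f}: incoming data has drifted; review representativeness")
```

A drift alert like this is only a trigger for human review, which keeps the monitoring within the ethical, legal and regulatory boundaries van Houten describes around sensitive personal data.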