Feds lay out cautions concerning AI bias

Research has highlighted how ostensibly “neutral” technology can produce troubling outcomes – including discrimination by race or other legally protected classes.
Jeff Rowe

Does technology – in particular, AI – have a problem with bias?

At first glance, it's tempting to dismiss the concern given that, as with any computing process, the data coming out can only be as good as the data going in. Therein lies the rub, of course: databases can be incomplete, skewed, misleading and, yes, biased in how accurately they reflect reality – including population-wide health conditions when broken down by race, gender or ethnicity.

In a recent commentary, attorney Elisa Jillson takes a look at how the Federal Trade Commission (FTC) scrutinizes developments in AI with an eye toward preventing discrimination. While the article looks across the AI landscape, there are a number of considerations for healthcare stakeholders to bear in mind.

For example, Jillson points first to the need to begin “with the right foundation . . . From the start, think about ways to improve your data set, design your model to account for data gaps, and – in light of any shortcomings – limit where or how you use the model.”
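That advice about auditing a data set for gaps can be made concrete. As a minimal sketch (not drawn from Jillson's commentary – the field names, shares and 80% threshold here are illustrative assumptions), the check below compares each demographic group's share of a patient data set against its share of the target population and flags under-represented groups:

```python
from collections import Counter

def representation_report(records, group_key, population_shares):
    """Compare each group's share of the data set against its share of the
    target population, flagging groups that fall noticeably short."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        report[group] = {
            "sample_share": round(sample_share, 3),
            "population_share": pop_share,
            # Illustrative threshold: flag groups at under 80% of their
            # expected share; real projects would justify their own cutoff.
            "under_represented": sample_share < 0.8 * pop_share,
        }
    return report

# Hypothetical example: 100 patient records vs. assumed population shares
records = [{"race": "A"}] * 70 + [{"race": "B"}] * 20 + [{"race": "C"}] * 10
shares = {"A": 0.6, "B": 0.3, "C": 0.1}
print(representation_report(records, "race", shares))
```

A report like this doesn't fix a skewed data set, but it tells a developer where the gaps are – which is exactly what Jillson's advice to "design your model to account for data gaps" presupposes.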

Along the way, she urges, AI developers should keep an eye out for discriminatory outcomes, and she notes the importance of testing “your algorithm – both before you use it and periodically after that – to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class.”
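One common way to run the kind of periodic test Jillson describes is to compare the model's favorable-outcome rate across groups. The sketch below (an illustrative assumption, not a method from the commentary) computes per-group selection rates and their min/max ratio, in the spirit of the "four-fifths" rule of thumb sometimes used as a disparate-impact screen:

```python
def disparate_impact(preds, groups, favorable=1):
    """Compute the favorable-outcome rate for each group and the ratio of
    the lowest rate to the highest (a disparate-impact style screen)."""
    tallies = {}
    for p, g in zip(preds, groups):
        sel, tot = tallies.get(g, (0, 0))
        tallies[g] = (sel + (p == favorable), tot + 1)
    selection_rates = {g: sel / tot for g, (sel, tot) in tallies.items()}
    ratio = min(selection_rates.values()) / max(selection_rates.values())
    return selection_rates, ratio

# Hypothetical model outputs for two groups
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
rates, ratio = disparate_impact(preds, groups)
# Group "x" is favored at 0.75, group "y" at 0.25; the ratio of 1/3
# falls well below the 0.8 rule of thumb and would warrant investigation.
```

A single metric like this is a screen, not a verdict – but running it before deployment and periodically afterward is a straightforward way to operationalize the testing Jillson recommends.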

Simultaneously, she says, “think about ways to embrace transparency and independence – for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.”

The question that is constantly before AI developers and users in any sector, says Jillson, is "how can we harness the benefits of AI without inadvertently introducing bias or other unfair outcomes?" The best way to keep answering that question correctly, amid changing circumstances and demands, is to hold yourself accountable consistently.

That can be challenging given the desire to roll out ever-newer, more beneficial algorithms to help solve patient problems. But as Jillson points out, if AI stakeholders fail to keep tabs on their technology, the FTC will be sure to do it for them.

Photo by NicoElNino/Getty Images