AI: more human than you think

The most important thing to remember about AI, says one stakeholder, is that it has the same vulnerabilities as human beings.
Jeff Rowe

The great thing about AI is that it’s free of human failings, right?

Wrong.

And that, says one longtime health IT consultant, is arguably the most important thing for healthcare providers and organizations to remember as they inevitably expand their AI capabilities.

In a recent commentary, Greg Freiherr, founder of The Freiherr Group consulting service, says, “Put simply, AI has the same vulnerabilities as people do. This applies especially to machine learning (ML), the most modern form of artificial intelligence. In ML, algorithms dive deep into data sets. Their development may be weakly supervised by people. Or it may not be supervised at all. This laissez-faire approach has led some to believe that the decisions of ML algorithms are free from human failings. But they are wrong.”

For one thing, Freiherr points out, it’s important to remember that even “self-taught deep learning algorithms” learn within parameters that people establish. Similarly, the data from which they learn are gathered by people. Consequently, the possibility that human biases and prejudices will be incorporated into the resulting algorithms is very real.
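To see how this can happen, consider a minimal, entirely hypothetical sketch. The data set is synthetic: a clinically meaningful feature (severity), a patient attribute that should carry no clinical signal (group), and labels produced by imaginary human annotators who systematically under-label one group. The model names and numbers are illustrative, not drawn from Freiherr’s commentary.

```python
# Hypothetical sketch: human bias in training labels propagates into a model.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(size=n)        # the signal that SHOULD drive labels
group = rng.integers(0, 2, size=n)   # a patient attribute that should not

# Imaginary biased labelers: identical severity, but group 1 is
# systematically under-labeled as "needs treatment".
biased_labels = (severity - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, biased_labels)

# The fitted model carries a strong negative weight on `group`,
# even though group membership has no real clinical meaning here.
print("learned weights (severity, group):", model.coef_[0])
```

Nothing in the training procedure is “prejudiced”; the model simply reproduces, faithfully and at scale, whatever pattern its human-supplied labels contain.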

Research shows that biases have crept into AI algorithms used in fields such as criminal justice, Freiherr says, and “even if precautions are taken, and the developers of ML algorithms are more disciplined than software engineers elsewhere, there is still plenty of reason to be wary of AI.”

For instance, Freiherr says, testing results could be skewed if data on specific patient groups are not included. As an example, he points out that while most clinical testing is performed on adults, “that doesn’t keep the makers of OTC drugs from extrapolating dosages for children. But algorithms trained on or just analyzing incomplete data sets would not generate results applicable to the very young or very old.”
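The extrapolation problem Freiherr describes can be made concrete with another hypothetical sketch: a model fit only to adult data and then applied to a child. The “true” dose rule below is invented solely for illustration and is not a real dosing formula.

```python
# Hypothetical sketch: a model trained only on adults (50-100 kg)
# extrapolates poorly to a 15 kg child. Not a real dosing model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def true_dose(weight_kg):
    # Stand-in nonlinear dose-response, invented for this example.
    return 10 * weight_kg ** 0.75

# Training data: adults only, mirroring most clinical testing.
adult_weights = rng.uniform(50, 100, size=500)
adult_doses = true_dose(adult_weights) + rng.normal(scale=5, size=500)

model = LinearRegression().fit(adult_weights.reshape(-1, 1), adult_doses)

# A 15 kg child sits far outside the training range, so the linear
# fit misses the underlying nonlinearity by a wide margin.
print("model predicts:", model.predict([[15.0]])[0])  # roughly 98
print("true value:   ", true_dose(15.0))              # roughly 76
```

The model is not wrong about adults; it simply has no basis for the population it was never shown, which is exactly the gap Freiherr warns about.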

The bottom line, says Freiherr, is that “AI is not the answer to human shortcomings. Believing it is will at best lead to disappointment.”

Deep learning algorithms may tap a data set millions of times, but they will be only as good as the data into which they dive.

The risk “is not that smart algorithms will one day become too smart,” Freiherr cautions. It’s that, without proper precautions, the technology to which providers increasingly turn for objectivity will, in fact, turn out to be all too human.