Google’s AI firm targets kidney disease, draws critics

Working with the VA, the DeepMind team applied AI technology to a de-identified electronic health record dataset collected from a network of over a hundred VA sites.
Jeff Rowe

DeepMind, the Google-owned UK AI research firm, recently published a research letter in the journal Nature touting a new model’s ability to predict the likelihood of a patient developing a life-threatening condition called acute kidney injury (AKI). But some observers have wasted no time in pointing out what they consider to be flaws in the study’s methodology.

According to a DeepMind blog post unveiling the research, the paper demonstrates that artificial intelligence can predict “one of the leading causes of avoidable patient harm” up to two days before it happens.

“This is our team’s biggest healthcare research breakthrough to date,” it adds, “demonstrating the ability to not only spot deterioration more effectively, but actually predict it before it happens.”

The problem, critics say, is that the data used to develop the deep learning model, provided by the US Department of Veterans Affairs (VA), skewed overwhelmingly male: 93.6% of patients in the dataset were men.

When asked about the model’s performance capabilities across genders and different ethnicities, a DeepMind spokeswoman said: “In women, it predicted 44.8% of all AKI early, in men 56%, for those patients where gender was known. The model performance was higher on African American patients — 60.4% of AKIs detected early compared to 54.1% for all other ethnicities in aggregate.

“The data set is representative of the VA population, and we acknowledge that this sample is not representative of the US population. As with all deep learning models it would need further, representative data from other sources before being used more widely. For the model to be applicable to a general population, future research is needed, using a more representative sample of the general population in the data that the model is derived from.

“Our next step would be to work closely with [the VA] to safely validate the model through retrospective and prospective observational studies, before hopefully exploring how we might conduct a prospective interventional study to understand how the prediction might impact care outcomes in a clinical setting.”

According to the study’s critics, however, what DeepMind’s ‘breakthrough’ research paper underlines is the “reflective relationship between AI outputs and training inputs.”