If you’re looking for a clear assessment of a new technology in healthcare, sometimes it helps to turn to those who might have to pay if something goes wrong.
To wit, The Doctors Company, which bills itself as the nation’s largest physician-owned malpractice insurer, has recently released a report examining the pros and cons of AI in healthcare.
Drawing on a range of surveys, assessments, and case studies, the report aims to provide a comprehensive look at the prospects for AI as a growing phenomenon in healthcare, the benefits it stands to bring, and the risks for which providers should be on guard.
Economics kicks off the report, with the writers pointing out that startup companies developing AI for healthcare raised “a record amount of funding” in the second quarter of 2019 alone: $864 million across 75 deals.
Moreover, in surveys conducted by The Doctors Company, 35 percent of respondents said they currently use AI in their practice, while an additional 53 percent said they’re optimistic about the prospects of AI in medicine, and 66 percent said they believe it will lead to faster and more accurate diagnoses.
After the initial review of investment and expectations, the report takes a more in-depth look at a range of AI’s benefits (enhanced image scanning and segmentation, speedier and more accurate disease detection, and integration and improvement of workflow) as well as its potential challenges (false positives and negatives, unclear lines of accountability, and network systems vulnerable to malicious attack).
On the topic of reading diagnostic images, for example, the report predicts that “(t)he advent of systems that can quickly and accurately read diagnostic images will undoubtedly redefine the work of radiologists and assist in the prevention of misdiagnoses. . . Although the best machine learning systems are possibly only on a par with humans for accuracy in making medical diagnoses based on images, experts are confident that this will improve over time as developers train AI systems on millions-strong databanks of labeled images showing fractures, embolisms, tumors, etc. Eventually these systems will be able to recognize the most subtle of abnormalities in patient image data (even when indiscernible for the human eye).”
As for the risks, the report points out that, among other potential challenges, “(m)odels trained on partial or poor data sets can potentially show bias towards particular demographics that are represented more fully in the data (e.g., Caucasian). This could create high potential for poor recommendations, like false positives.”
Interestingly, the report explains that stakeholders should expect “that these risks will be supplemented or expanded when the technology is more widely adopted, and there is enough data on patient safety and malpractice claims related to its use,” since that is how patterns of pros and cons emerged with EHRs and, the report says, are beginning to develop with telemedicine.
In the end, the goal is naturally not to slow the adoption of AI but to use past experience with other technologies to prepare for the challenges and downsides.
In summary, notes the report, “Thoughtful physicians need to anticipate not only the exciting potential for AI to improve patient care, but also the dangerous unintended consequences that may arise. Only in practice will we truly understand the potential of this powerful new tool.”