Intel and GE Healthcare have teamed up to develop and deploy a new deep-learning AI tool that aims to cut the time between medical imaging, diagnosis and the start of treatment.
The project aims to offer physicians automated diagnostic alerts for some conditions within seconds of a medical image being acquired: X-ray technologists, critical care teams and radiologists are immediately notified to review critical findings, which can accelerate patient diagnosis.
According to Intel Internet of Things Group Health and Life Sciences Sector General Manager David Ryan, the AI imaging models are optimized for inference and deployment, then integrated into the GE application through the OpenVINO inference engine APIs. As X-ray images are acquired by the machine, the inference engine runs inference on them to flag findings for clinical diagnosis.
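In outline, the acquire-then-infer-then-notify loop Ryan describes might look like the following. This is a hypothetical Python sketch, not GE's actual integration: `run_inference` is a stub standing in for the compiled OpenVINO model, and the function names and threshold are illustrative.

```python
# Hypothetical sketch of the acquire -> infer -> notify loop described above.
# run_inference is a stub; a real deployment would load the trained model
# with OpenVINO's runtime and pass it the preprocessed image tensor.

def run_inference(image):
    """Stub for the inference engine call.

    Returns a dict mapping condition name -> confidence score in [0, 1].
    The toy heuristic below merely stands in for a trained model's output.
    """
    mean_intensity = sum(image) / len(image)
    return {"pneumothorax": 0.9 if mean_intensity < 0.3 else 0.05}

def process_acquired_image(image, alert_threshold=0.5):
    """Run inference as soon as an image is acquired; flag critical findings."""
    scores = run_inference(image)
    alerts = [cond for cond, score in scores.items() if score >= alert_threshold]
    if alerts:
        # In the deployed system this is where the technologist, critical
        # care team and radiologist would be notified for review.
        return {"status": "alert", "findings": alerts}
    return {"status": "routine", "findings": []}

# A dark (low-intensity) toy "image" triggers an alert.
print(process_acquired_image([0.1, 0.2, 0.15, 0.1]))
# -> {'status': 'alert', 'findings': ['pneumothorax']}
```

The key design point is that inference happens at acquisition time, on the same infrastructure, so the notification can reach the care team before the study enters the normal reading queue.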
As GE Healthcare Senior Vice-President of Edison Portfolio Strategy Keith Bigelow explained, medical imaging is the largest and fastest-growing data source in the healthcare industry, but although it accounts for 90 per cent of all healthcare data, more than 97 per cent of medical imaging goes unanalyzed or unused.
“Before now, processing this massive volume of medical imaging data could lead to longer turnaround times from image acquisition to diagnosis to care. Meanwhile, patients’ health could decline while they wait for diagnosis,” he said. “Especially when it comes to critical conditions, rapid analysis and escalation is essential to accelerate treatment.”
According to Bigelow, a key use for this technology is earlier detection of a potentially life-threatening event: pneumothorax, or collapsed lung. Radiologists, he said, can now deploy optimized predictive algorithms that scan for and detect pneumothorax "within seconds at the point of care", allowing rapid response and reprioritization of an X-ray for clinical diagnosis.
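The reprioritization Bigelow mentions amounts to reordering the radiologist's worklist by predicted risk. A hypothetical sketch, with illustrative study IDs, scores and threshold (the scores would come from the deployed detection model; here they are supplied directly):

```python
# Hypothetical worklist triage: studies flagged by a pneumothorax detector
# are escalated ahead of routine reads.

def reprioritize(worklist, critical_threshold=0.5):
    """Sort studies so suspected-critical cases are read first.

    worklist: list of (study_id, pneumothorax_score) tuples.
    Suspected-critical studies are ordered by descending score; the
    remaining studies keep their original (arrival) order.
    """
    critical = [s for s in worklist if s[1] >= critical_threshold]
    routine = [s for s in worklist if s[1] < critical_threshold]
    critical.sort(key=lambda s: s[1], reverse=True)
    return critical + routine

queue = [("study-001", 0.02), ("study-002", 0.91),
         ("study-003", 0.10), ("study-004", 0.67)]
print([sid for sid, _ in reprioritize(queue)])
# -> ['study-002', 'study-004', 'study-001', 'study-003']
```

A stable first-in-first-out order is kept for routine studies so that triage only ever moves cases forward, never arbitrarily delays them.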
“Deploying deep learning solutions on existing infrastructure delivers the potential to power more efficient and effective care, enhance decision-making, and drive greater value for patients and providers,” he said.
According to Intel’s Ryan, deep learning has been a promising approach for radiology because its models can be trained to recognize desired features in an image, such as tumors or anatomical structures.
“Furthermore, training is done by giving numerous labeled example images to the models, without having to specify the exact features to look for. Deep learning can identify details that can be missed by the human eye,” he said.
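Ryan's point, that the model is given only labeled examples and never an explicit description of the features to look for, can be illustrated with a toy classifier. This is a deliberately simplified sketch using logistic regression on two-"pixel" images; real radiology models are deep convolutional networks trained on large labeled image sets.

```python
import math

# Toy illustration of supervised learning from labeled examples: the model
# is told only "abnormal" (1) or "normal" (0) per image, never which pixels
# matter, yet it learns the relevant feature from the labels alone.

def train(images, labels, lr=0.5, epochs=500):
    """Fit per-pixel weights by gradient descent on the logistic loss."""
    w = [0.0] * len(images[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(images, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that image x is abnormal under the learned weights."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Two-"pixel" training images: the second pixel happens to carry the signal.
images = [[0.2, 0.9], [0.8, 0.85], [0.3, 0.1], [0.7, 0.15]]
labels = [1, 1, 0, 0]  # abnormal exactly when pixel 2 is bright
w, b = train(images, labels)
print(predict(w, b, [0.5, 0.95]) > 0.5)  # -> True: unseen image classified abnormal
```

The training loop never encodes "look at pixel 2"; the labels alone drive the weights there, which is the same property that lets deep models pick up subtle image details a human reader might miss.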
In future applications, Ryan said, deep learning models could be used to identify incidental findings, as well as help radiologists manage their workload, enhance scan quality, and reduce ‘retakes’, which expose patients to unnecessary radiation.