Can genetic disorders be identified just by looking at a patient’s face?
According to a study published recently in the journal Nature Medicine, they can, using a new artificial intelligence technology called DeepGestalt. The AI outperformed clinicians in identifying a range of syndromes in three trials, the researchers reported, noting that 8% of the population has diseases with key genetic components, many of which come with recognizable facial features.
For example, the technology could identify Angelman syndrome, a disorder affecting the nervous system with characteristic features such as a wide mouth with widely spaced teeth; strabismus, in which the eyes point in different directions; or a protruding tongue.
"It demonstrates how one can successfully apply state of the art algorithms, such as deep learning, to a challenging field where the available data is small, unbalanced in terms of available patients per condition, and where the need to support a large amount of conditions is great," said Yaron Gurovich, chief technology officer at FDNA, an artificial intelligence and precision medicine company, who led the research.
This opens the door for future research and applications, and the identification of new genetic syndromes, he added.
But because facial images are easily accessible, the technology could enable payers and employers to analyze facial images and discriminate against individuals who have pre-existing conditions or who are developing medical complications, the authors warned.
Gurovich and his team trained DeepGestalt, a deep learning algorithm, on 17,000 facial images drawn from a database of patients diagnosed with over 200 distinct genetic syndromes.
The team found that the AI technology outperformed clinicians in two separate sets of tests to identify a target syndrome among 502 chosen images. In each test, the AI proposed a list of potential syndromes and identified the correct syndrome in its top 10 suggestions 91% of the time.
Another test looked into identifying different genetic subtypes in Noonan syndrome, which carries a range of distinctive features and health problems, such as heart defects.
"We showed that this system can be used in clinical settings," Gurovich said of the results.
The technology works by applying the deep learning algorithm to the facial characteristics of the image provided, then producing a list of possible syndromes. It does not, however, explain which facial features led to its prediction. To help the researchers better understand, the technology produces a heat map visualization looking at what regions of the face contributed to the classification of diseases, explained Gurovich.
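One generic way to build the kind of heat map described above is occlusion analysis: mask one region of the image at a time and measure how much the model's confidence in its prediction drops. The sketch below illustrates that general technique under stated assumptions; it is not FDNA's published method, and `score_fn` stands in for any trained classifier.

```python
# Illustrative occlusion-style heat map: slide a mask over the image and
# record how much the model's score for the predicted class drops.
# A generic visualization technique, not DeepGestalt's actual implementation.
import numpy as np

def occlusion_heatmap(image, score_fn, patch=8):
    """score_fn(image) -> confidence score for the predicted class."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one patch
            # A large score drop means this region mattered for the prediction.
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy "model": scores an image by the mean brightness of its top-left corner,
# so only that region should light up in the heat map.
toy_score = lambda img: img[:8, :8].mean()
img = np.ones((16, 16))
print(occlusion_heatmap(img, toy_score, patch=8))
```

In practice, such a map is overlaid on the face image so researchers can see which regions, such as the mouth or the spacing of the eyes, drove the classification.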
All the images used in the trials came from patients already diagnosed with a condition; the technology was not asked to determine whether a patient had a genetic disorder at all, but to identify the correct disorder among diagnoses that were already known.