Researchers at Michigan State University (MSU) are hoping to detect early signs of Alzheimer's disease with an AI-driven algorithm that analyzes patients' speech and vocabulary patterns.
MSU is collaborating with Oregon Health & Science University and Weill Cornell Medicine on the project, which aims to build an easy-to-use, accessible smartphone app that can help assess whether a follow-up medical diagnosis is necessary.
"Alzheimer's is tough to deal with and it's very easy to confuse its early stage, mild cognitive impairment, with normal cognitive decline as we're getting older,” explained Jiayu Zhou, an associate professor in MSU's College of Engineering who leads a research group in the Department of Computer Science and Engineering and is leading the project’s AI development. ”It's only when it gets worse that we realize what's going on and, by that time, it's too late."
While there is currently no cure for Alzheimer's disease, early detection could help doctors and researchers develop treatments to slow or halt the disease before irreparable damage occurs.
Zhou believes AI can detect subtle changes in speech and behavior more efficiently and reliably than human observers can, and an AI-based app would have the added benefit of making medical diagnostics more affordable and accessible.
Zhou and his team's preliminary tests of the app have shown it to be as accurate as MRI scans in detecting early warning signs. Data for these tests were collected by collaborators at Oregon Health & Science University, who are studying whether conversation could act as a therapeutic intervention for dementia or early Alzheimer's.
"If we want to develop an app that everyone can use, we don't want to have people talking to it for hours," Zhou said. "We need to develop an efficient strategy so we can navigate the conversation and get the data we need as quickly as possible, within 5 to 10 minutes.”
The team already had a prototype app that interviews users and records their audio responses; the researchers have since refined the questions the app asks, and how it asks them, to gather the needed information more quickly.
The team also wants to bring in data beyond linguistic patterns to help the AI make its assessment. For example, the app will examine the acoustic signals of the conversation, and it may also leverage video to analyze facial expressions alongside the words a user is saying. The team is also working on integrating behavioral sensors that would track factors such as how much sleep a person is getting, to supplement the app's analysis of interview language.