We pointed recently to the potential for AI to give a boost to mental health providers, and out of the UK comes another commentary that digs even more deeply into the technical possibilities.
Amlan Basu is Chief Medical Officer at The Huntercombe Group, a UK firm specializing in Child and Adolescent Mental Health Services, and while he touts AI’s potential in his field, he initially strikes a cautionary tone, noting that there’s a real danger that mental health professionals will “fail to engage in AI’s development, uses and limitations, only to awaken one day to find that the delivery of mental healthcare has permanently changed, seemingly without notice or consultation.”
On the technical side, Basu cites the capacity of AI to organize and analyze the “significant number of variables” that mental health providers factor into their diagnoses, “including assessing a patient’s appearance and behaviour, their speech, their reported mood and thoughts, their perceptions, cognitive ability, and insight.”
Now, Basu notes, “the digitisation of this data – the digital phenotype – is possible through many routes, including the analysis of an individual’s speech, voice, and face, how they interact with a keyboard or their smartphones and through various wearable sensors. (And) analysis of the digital phenotype has led to some eye-opening findings.”
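To make the idea concrete, here is a minimal sketch of how one of those routes, keyboard interaction, might be reduced to analyzable features. Everything in it is an illustrative assumption on our part, not anything from Basu’s commentary: the timestamps are invented, and the function name and choice of features are purely hypothetical.

```python
# A minimal sketch of one "digital phenotype" signal: keyboard
# interaction. It reduces a list of key-press timestamps to simple
# timing features (typing speed and its variability). The data below
# is illustrative, not real.
import statistics

def keystroke_features(timestamps: list[float]) -> dict[str, float]:
    """Summarize inter-keystroke intervals (in seconds)."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_interval": statistics.mean(intervals),
        "interval_sd": statistics.stdev(intervals),
    }

# Hypothetical key-press times captured by a smartphone keyboard.
print(keystroke_features([0.00, 0.21, 0.45, 0.62, 1.40, 1.58]))
```

In practice such features would be aggregated over days or weeks and correlated with clinical outcomes; the point here is only that a behavioural stream becomes a handful of numbers a model can ingest.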
For example, “Just the in-depth analysis of the human voice (e.g., pitch, volume, jitter) has been able to predict marital difficulties as well as, if not better than, therapists, and whether at-risk youths transition to a psychotic illness with 100% accuracy, outperforming classification from clinical interviews.”
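For readers curious what analysis of “pitch, volume, jitter” can look like in code, here is a rough sketch using the open-source librosa audio library. The file name is hypothetical, the frame-level jitter calculation is a simplification of the cycle-to-cycle measure used in clinical voice analysis, and nothing here reflects the actual models behind the studies Basu cites.

```python
# A minimal sketch of extracting the voice features Basu mentions
# (pitch, volume, jitter) from a recording. Assumes librosa is
# installed; "session.wav" is a hypothetical file name.
import numpy as np
import librosa

y, sr = librosa.load("session.wav", sr=None)

# Pitch: fundamental frequency per frame via probabilistic YIN.
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
f0 = f0[~np.isnan(f0)]  # keep voiced frames only

# Volume: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Jitter (approximate): mean absolute change in the pitch period
# between consecutive voiced frames, relative to the mean period.
periods = 1.0 / f0
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

print(f"mean pitch: {f0.mean():.1f} Hz")
print(f"mean volume (RMS): {rms.mean():.4f}")
print(f"relative jitter: {jitter:.4f}")
```

Features like these would then be fed to a classifier trained against clinical labels; the striking results Basu describes come from that downstream modelling, not the feature extraction itself.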
Interestingly, while Basu suggests his colleagues tend to feel “most immune to the impact of AI” when it comes to actually treating patients, he points to evidence that many patients are more comfortable revealing their inner thoughts to a ‘virtual’ human than to a real one.
Consequently, he says, it’s not surprising that “there are now many apps that deliver talking treatments through ‘virtual’ interactions with therapists rather than face-to-face interactions; a meta-analysis showed that depressive symptoms improved significantly through this medium.”
For all its potential, Basu recognizes that AI in mental health raises no shortage of issues, ethical ones included. “For example, who is accountable when calculating and considering suicide risk? How do we maintain confidentiality of sensitive data, and how do we handle someone disclosing potential risk that they pose to others or themselves?”
Those concerns aside, he remains convinced “that AI-assisted efforts stand a genuine chance of reducing a huge global burden of disease, so much of which currently goes totally unseen and untreated by health care professionals.”
Photo by Ryzhi/Getty Images