How useful can AI be to mental health providers?

Can the efficiency AI might offer mental health therapists offset the loss of the human touch?
Jeff Rowe

Can a chatbot be a good therapist?

That’s one question coming up as mental health providers and technology experts consider the potential for AI as a provider, or at least an assistant provider, of mental health services.

In a recent overview of a number of efforts currently underway, technology journalist Akshaya Asokan described several startups that have combined AI and virtual reality to create a virtual therapist that can interact with its patients in real-time.

One US-based startup, for example, “uses machine learning capabilities to identify patients with a mental health condition and provide a customized treatment plan based on their medical history and behavioral pattern. The platform partners with health plans and systems to facilitate access to personalized care and also enable virtual collaboration between patient and trained specialists.”

In another case, “Ellie, a virtual therapist, was created by University of Southern California’s Institute for Creative Technologies and was funded by the US government to treat veterans suffering from PTSD. The virtual assistant could analyze facial expressions, head gestures, eye gaze direction and voice quality to identify behavioral indicators related to depression and post-trauma stress.”

Yet another case has involved the development of an “AI-Powered Genetic Counsellor” to “help advise individuals and families at the risk of a genetic disorder by helping them understand the condition better and provide them with the much needed mental support. Increasingly, more people are seeking the help of these professionals to better understand their genetic composition to predict whether they are likely to develop conditions like anxiety, schizophrenia and bipolar disorder.”

There are, of course, potential disadvantages to virtual therapists. For example, “in conditions like PTSD, where a person can be overwhelmed by symptoms like flashbacks, nightmares and severe anxiety, the need for personal assistance and direct human intervention is more.”

On a broader level, AI systems aren’t always perfect, meaning there’s a chance of inaccurate disease detection and false recommendation of drugs. And while the cost reduction of using AI can be significant, there’s always the threat of hacking and data theft, through which crucial information like a patient’s medical history and other personally identifiable data can be grossly misused.

And perhaps the biggest disadvantage of all, Asokan suggests, is that while the promise of “these apps is the instant availability of someone to talk with you, the biggest drawback it faces is the lack of human touch.”

In the end, Asokan says, “even though machines and algorithms can mimic human emotions in speech and visual format, it is a long road ahead before completely relying on AI and ML capabilities in a field like psychological counseling, which require more human intervention than machine.”