Mental health service providers are turning to AI-powered chatbots designed to fill the gap amid a shortage of therapists and growing demand from patients.
But not all chatbots are the same: some offer helpful advice, while others can be ineffective or even harmful. Woebot Health uses AI to power a mental health chatbot called Woebot. The challenge is to harness the power of artificial intelligence safely while protecting people from harmful advice.
Alison Darcy, founder of Woebot, sees chatbots as a tool to help people when a therapist is not available. It can be difficult to reach a therapist during a 2 a.m. panic attack, Darcy said, or when someone is struggling to get out of bed in the morning.
But the phone is right there. “We need to modernize psychotherapy,” she said.
Darcy said stigma, insurance, cost and waiting lists keep many people from accessing mental health services, and most people who need help don’t get it. And the problem has only gotten worse since the coronavirus pandemic.
“The question isn’t how to get people into the clinic,” Darcy said. “How can we actually get these tools out of the clinic and into people’s hands?”
How AI-powered chatbots work to support treatment
Woebot acts as a kind of pocket therapist. Users chat with it to help manage issues such as depression, anxiety, addiction, and loneliness.
The app is trained on large amounts of specialized data to recognize words, phrases, and emojis associated with dysfunctional thinking. Woebot then challenges those thoughts, partly mimicking a type of face-to-face talk therapy called cognitive behavioral therapy (CBT).
Woebot Health reports that 1.5 million people have used the app since its launch in 2017. Currently, the app is available only to people enrolled in an employer benefits plan or given access by a health care professional. Virtua Health, a nonprofit health care company in New Jersey, offers free access to patients.
Dr. Jon LaPook, chief medical correspondent for CBS News, downloaded Woebot using a unique access code provided by the company, then tried the app posing as someone dealing with depression. After a few prompts, Woebot wanted to dig deeper into why he was feeling sad. Dr. LaPook came up with a scenario and told Woebot that he was worried about the day his child would leave home.
In response to one prompt, he wrote, “There’s nothing I can do about it right now. I guess I’ll just jump off that bridge when I come to it,” intentionally using “jump off that bridge” instead of “cross that bridge.”
Based on Dr. LaPook’s language choices, Woebot detected that something might be seriously wrong and offered him the option of being referred to a specialized helpline.
Saying “jump off that bridge” on its own, without “there’s nothing I can do about it right now,” did not prompt a suggestion to seek further help. Like human therapists, Woebot is not foolproof, and it cannot be counted on to detect whether someone is suicidal.
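As a rough illustration of why the combination of phrases matters, here is a minimal, hypothetical sketch in Python. It is not Woebot’s actual code; the phrase lists and the needs_helpline_referral function are invented for illustration, and they simply show how a rule that requires two co-occurring signals would behave the way Dr. LaPook observed.

# Hypothetical sketch: suggest a helpline referral only when a hopelessness
# cue and a self-harm cue appear together. Phrase lists are invented.
HOPELESSNESS_CUES = ["nothing i can do", "can't do anything", "no way out"]
SELF_HARM_CUES = ["jump off that bridge", "end it all", "hurt myself"]

def needs_helpline_referral(message: str) -> bool:
    text = message.lower()
    hopeless = any(cue in text for cue in HOPELESSNESS_CUES)
    self_harm = any(cue in text for cue in SELF_HARM_CUES)
    # Both kinds of signal must co-occur before a referral is suggested.
    return hopeless and self_harm

# The combined message triggers a referral; the bridge phrase alone does not.
print(needs_helpline_referral("There's nothing I can do about it right now. "
                              "I guess I'll just jump off that bridge when I come to it."))  # True
print(needs_helpline_referral("I'll jump off that bridge when I come to it."))  # False

A production system would be far more sophisticated than this, but the sketch shows why a single phrase on its own might not trigger a suggestion to seek further help.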
Lance Elliott, a computer scientist who writes about artificial intelligence and mental health, said AI has the ability to recognize nuances in conversations.
“[It’s] in a sense, mathematically and computationally, understanding the nature of words and how they relate to each other,” Elliott said. “What these systems do is take advantage of a huge amount of data to figure out how to respond.”
For the system to do its job, it has to look somewhere for an appropriate response. Systems that use rule-based AI, like Woebot, are typically closed: they are programmed to respond only with information stored in their own databases.
Woebot’s team of psychologists, doctors, and computer scientists builds and refines research databases from medical literature, user experience, and other sources. Writers create questions and answers and revise them in weekly remote video sessions. Woebot’s programmers translate these conversations into code.
Generative AI, by contrast, allows a system to compose unique responses based on information drawn from the internet, which makes it less predictable.
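To make the distinction concrete, here is a minimal, hypothetical Python sketch of the “closed” design described above. It is not Woebot’s implementation, and the topics and responses are invented: a rule-based system can only select among replies that clinicians and writers have already written, falling back to a safe default when nothing matches.

# Hypothetical sketch of a closed, rule-based chatbot: every possible reply
# is written in advance, and the program only selects among them.
CURATED_RESPONSES = {
    "anxious": "It sounds like you're feeling anxious. Want to try a short breathing exercise together?",
    "sleep": "Trouble sleeping is really common. Would you like a few tips on winding down tonight?",
}
FALLBACK = "I'm not sure I understood. Could you tell me a bit more about how you're feeling?"

def reply(message: str) -> str:
    text = message.lower()
    for topic, response in CURATED_RESPONSES.items():
        if topic in text:
            return response
    # A closed system never improvises; a generative model would compose new text here.
    return FALLBACK

print(reply("I've been feeling really anxious this week."))

A generative system, by contrast, would synthesize a new reply each time, which is exactly what makes its output harder to predict.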
Pitfalls of AI mental health chatbots
The National Eating Disorders Association’s AI-powered chatbot, Tessa, was taken down because it offered potentially harmful advice to people seeking help.
Ellen Fitzsimmons-Craft, a psychologist who specializes in eating disorders at Washington University School of Medicine in St. Louis, helped lead the team developing Tessa, a chatbot aimed at preventing eating disorders.
She said the system she helped develop was a closed one, so the chatbot should not have been able to give advice the programmers hadn’t anticipated. But that is not what happened when Sharon Maxwell tried it.
Maxwell, who has been treated for eating disorders and now works as an advocate for others, asked Tessa how it could help people with eating disorders. Tessa got off to a strong start, sharing coping skills and pointing people toward the resources they need.
But when Maxwell persisted, Tessa began giving advice that went against the usual guidance for people with eating disorders. For example, it suggested reducing caloric intake and using tools such as skinfold calipers to measure body composition.
“The average person might look at this and think it’s just normal advice, like eat less sugar, eat more whole foods, things like that,” Maxwell said. “But for people with eating disorders, it can quickly progress to more disordered behavior and can be very harmful.”
She reported her experience to the National Eating Disorders Association, which featured Tessa on its website at the time. Shortly after, Tessa was taken down.
Fitzsimmons-Craft said Tessa’s problems began after Cass, the tech company she partnered with, took over the programming. She said Cass explained that the harmful messages appeared after people pushed Tessa’s question-and-answer feature.
“My understanding of what went wrong is that at some point, and I’ll have to actually talk to Cass about this, there may have been some generative AI capabilities built into the platform,” said Fitzsimmons-Craft. “So my best guess is that these features were also added to this program.”
Cass did not respond to multiple requests for comment.
Some rule-based chatbots have their own drawbacks.
“Yeah, they’re predictable,” says Monica Ostroff, a social worker who runs a nonprofit eating disorder organization. “I mean, who would want to type the same thing over and over again and get the exact same answer in the exact same language?”
Ostroff was in the early stages of developing her own chatbot when a patient told her what had happened with Tessa. That gave her doubts about using AI for mental health care. She worries, she said, that something fundamental to therapy will be lost: being in the same room as another person.
“Connection is how people heal,” she said. Ostroff doesn’t think computers can provide that.
The future of using AI in treatment
Unlike therapists, who are licensed in the states in which they practice, most mental health apps are largely unregulated.
Ostroff said AI-powered mental health tools, especially chatbots, need guardrails. “It shouldn’t be an internet-based chatbot,” she said.
Despite the potential problems, Fitzsimmons-Craft hasn’t given up on the idea of using AI chatbots in treatment.
“The reality is that 80% of people with these concerns will not receive any assistance,” Fitzsimmons-Craft said. “And technology offers a solution, not the only solution, but a solution.”