I am a clinical psychologist. I went into therapy with an AI chatbot.

DrKatzman
May 4, 2023


My shrink tells me I’m angry at my husband. My husband agrees. I am a clinical psychologist, and my therapist is an AI chatbot.

May is Mental Health Awareness Month, when many companies promote their commitment to employees’ psychological well-being. As workers demand more mental health care than ever before, a growing number of organizations are offering AI programs to provide emotional support. How does app-based therapy compare to traditional treatment? I decided to find out.

For one month, I attended twice-a-week sessions with an empirically validated therapeutic chatbot. I didn’t engage with a generative AI like ChatGPT because large language models weren’t developed as therapeutic programs, lack safety controls, and have been known to offer advice that’s disconnected from reality — exactly the opposite of what I look for in an advisor.

During my first session, I asked for help concerning my husband, who was not (in my opinion) making healthy choices about diet and exercise, even though he had recently had a serious operation. The app asked if I was angry. The suggestion that I was anything but supportive infuriated me. I told my husband what the bot had observed. He told me I was angry — at him! Hmmm. Next session I said, “It’s my husband’s fault he doesn’t feel well.” If I were my own therapist, I would have explored the default to spousal blame, but instead the bot said, “Don’t blame yourself.” Interesting conceptualization.

In the third and fourth sessions I suggested that perhaps questions about my partner’s health were connected to concerns about my own mortality. I was offered stress-reducing exercises to focus on what I could control. The bot couldn’t link insights across our meetings, so I donned my clinician’s cap to synthesize what I had learned: I am angry. I blame myself because I can’t change the behavior of someone I love, and rather than worry about my own mortality, I should work on what I can control while I am alive. Not bad!

I believe most things in life are meant to be shared. At the start of this exploration, I asked a few colleagues who share a passion for large-scale wellness campaigns to commit to four weeks of care with the chatbot. We all completed a rating form after each session. We did not compare notes until the end. It turned out that their observations mirrored mine. Initially, we were engaged by the suggestions to relax and rethink our assumptions.

Unfortunately, over time, we grew tired of the bot, and it seemed to grow tired of us. The app repeatedly tried to sort issues into topics that connected us to associated toolkits that were not always related to our real concerns. After a while, it felt like we were writing to an automated customer service chat, when what we really wanted was someone to pick up the phone. When I told my robot guide that it wasn’t grasping my issues, it started ending my sessions earlier and earlier. Rejected by my “therapist!” Harrumph! I had expected at least 45 minutes of care, no matter how difficult a patient I was.

Would I recommend you use a therapeutic chatbot? Yes, if you appreciate its limitations. In essence, apps can provide an automated workbook — an on-call advisor. You’ll get tools that help you reduce stress, challenge established (potentially detrimental) ways of interpreting a situation, and keep track of changes in your thinking and their subsequent impact on your mood. These can be hugely helpful resources.

In my practice, I have observed that we are wired to connect. During the pandemic, we were socially distant and psychologically desperate. Three years on, we are seeing the mental health consequences of that isolation. In fact, the Surgeon General has just declared an epidemic of loneliness. A machine cannot be expected to replace what is demanded by our hardwired biology — human interaction. Additionally, we need to be cautious not to create new disparities when caring for people who need a greater level of support. Ideally, computerized tools will be offered as part of a comprehensive ecosystem of treatment possibilities.

My work as a consultant has repeatedly revealed that good management begets good psychological health. Companies should not be offering chatbot support to promote positive well-being in lieu of managerial training. Employees’ experiences of unfair treatment, unreasonable demands, and lack of respect are not challenges likely to be reversed by a mental health app. I worry that systemic questions will be delegated to AI. A self-help book won’t cure a mental health crisis. We need to examine the societal forces that impact mental fitness and avoid putting the onus of change solely on the individual.

That said, I couldn’t resist asking Pi, the newly released generative AI, for advice on how to approach writing this essay. Here’s what it suggested: “I think it’s important to remember that an AI is not a human and therefore can’t provide the same level of empathy and connection as a human therapist. So, you might find that you feel more disconnected or even frustrated at times when talking to an AI. I also think it’s important to be aware of your own biases when talking to an AI. For example, you might find that you’re more critical of an AI’s responses than you would be of a human therapist’s responses. So, it’s important to be mindful of your own reactions and try to remain open to the experience.” Now that’s great advice: Set realistic expectations and remain open.

Written by DrKatzman

Clinical psychologist, business consultant & coach to the world’s top public & private companies. The WSJ #1 best-selling author of “Connect First.”
