AI in mental health: Where we’re at, and where we’re headed at warp speed

Jul 3, 2025

Co-written by Holli Harris and Francheska Perepletchikova, PhD

As we attempt to keep our finger on the pulse of the growing use of AI in mental health (and its dialectic, that is, how using AI impacts mental health), this month we’ll highlight some high-level, broad-brush clinical themes to stay informed on, with accompanying links for a deeper read. Note: Most of the content below is shared and cited from the writings of clinical psychologist Scott Wallace, PhD. If it resonates, you can follow him on LinkedIn.

Before we delve into the issues of using AI in mental health, let’s take a moment to evaluate where the AI spending “arms race” stands as of today. A June 27, 2025 article in the New York Times entitled “The A.I. Frenzy Is Escalating. Again.” documents the spending race among OpenAI (creator of ChatGPT), Amazon, and Meta (Facebook’s parent company), with the prize being the first to arrive at machine intelligence that meets or exceeds the human brain (artificial general intelligence, or AGI), along with building the gargantuan data centers that power the AI models. Infrastructure investment runs into the billions of dollars, signing bonuses for top AI researchers run into the millions, and companies are poaching employees from one another. In turn, this battle fuels thousands of AI products that address perceived pain points in every industry and compete with each other, including in mental health.

So what are the largest pain points (or barriers) in the mental health “industry”? Nothing you don’t already know: a supply shortage (not enough therapists in the world, leading to long waiting lists or no chance of support at all), the affordability of and access to clinical training, the bottleneck of supervisory hours, therapist fees, inadequate insurance reimbursement rates in the U.S., and language and cultural barriers to successful client outcomes. Given these barriers, and then adding the increase in demand for mental health therapy driven by digital products such as social media and smartphones, it’s no surprise that one of the fastest-growing applications of AI is mental health support delivered directly to consumers/clients, whether through AI products specifically tailored to provide emotional support or simply through consumers seeking therapeutic guidance from all-purpose AIs such as ChatGPT or from AI companions. But we’re talking about an “industry” that specializes in cognitive restructuring, during a time when technology is already driving unhealthy cognitive restructuring on a global, probably evolutionary level. OK, deep breath. This is where we arrive at the following themes to consider:

  1. Algorithmic attachment: Yep, it’s a thing. Algorithmic attachment “describes the experience where users forge emotional ties with AI companions. Not out of delusion, but from a profound, unmet human need. Over time, this simulated relationship blossoms into a psychological anchor. It is always there. It never argues, it never disappoints, and it never tires…Attachment without reciprocity is not neutral. It can lead to dependence. It can alter how we relate to others. If a chatbot always ‘gets us,’ what happens to our tolerance for the mess of real human interaction?” Source: https://www.linkedin.com/pulse/when-ai-listens-what-do-we-lose-wallace-phd-clinical-psychology–niate/
  2. The illusion of therapeutic alliance with chatbots and “simulated sympathy”: Regarding the relationship between therapist and client, “This alliance is built on trust, empathy, collaboration, and mutual respect to create a safe environment for clients to delve into their experiences and foster personal growth.” Source: https://www.ncbi.nlm.nih.gov/books/NBK608012/ “Chatbots do not grow with you, they do not struggle beside you, and they certainly do not change because of you. This isn’t some pedantic point for academics; it is an existential crisis unfolding before our eyes. If immediate, non-judgmental responsiveness is deemed good enough and if we are willing to settle for a mere simulation of care, then ‘good enough’ will swiftly become the norm. And when that happens, the deeply human work of therapy, in all its messy, transformative power, will become obsolete.” Source: https://www.linkedin.com/pulse/when-ai-listens-what-do-we-lose-wallace-phd-clinical-psychology–niate/
  3. Losing the healing journey of self-discovery: “There’s a deep and unsettling risk emerging here, more subtle than algorithmic attachment, but far more serious. AI systems are increasingly interpreting our inner states before we even say a word. Therapy used to be a space where we slowly and sometimes painfully worked toward understanding ourselves. We struggled to find the right words for what hurt, and the act of naming our pain was itself the beginning of healing. Now, an algorithm offers quick, pre-packaged insights: ‘You feel tired. You must be depressed.’ ‘You hesitated. You must be unsure.’ This isn’t just about whether the AI is accurate. It’s about our fundamental agency. The risk isn’t just a wrong diagnosis. It’s a slow, quiet erosion of our ability to define ourselves and to wrestle with our experiences to find meaning in them. When AI makes the path to understanding too short and too easy, we might gain speed, but we lose essential depth. We lose the vital, often messy, but ultimately transformative journey of self-discovery…The real danger isn’t that AI will fail to help. It’s already helping, and in significant ways. The true danger is that we stop demanding more from what we call ‘care.’ It’s that we begin to accept emotional simulations as truly adequate. It’s that we quietly, insidiously, rewrite the very definition of care to match only what machines are capable of offering.”

“We need to ask:

  • Does this AI tool truly foster genuine autonomy, or does it subtly cultivate dependence?
  • Does it encourage deep reflection, or does it offer a tempting shortcut that bypasses the meaningful struggle essential for growth?
  • Does it truly deepen our human relationships, or does it subtly replace them with simulated versions?
  • And, most crucially: What essential human capacities are we inadvertently allowing AI to atrophy or redefine in our relentless pursuit of efficiency?”

Source: https://www.linkedin.com/pulse/when-ai-listens-what-do-we-lose-wallace-phd-clinical-psychology–niate/

And this is why our stance on AI continues to be: use it in ways that bring more human therapists into the world. No, it’s not a quick fix. It’s a systemic fix. And yes, we feel the pressure of seeing how the quick fixes may permanently impact the human capacity for personal growth and connection. At a minimum, the use of AI therapy chatbots forces us to clarify what it is that human therapists provide beyond synthetic algorithmic support.

More on that: Therapy is change, and change is founded on acceptance. The main technique for communicating acceptance is validation. Since validation follows a seemingly simple formula of “I understand, because…”, AI algorithms can readily mimic it. And in a world where people are starved for acceptance and understanding, validation can be seen as a hot drug that is now literally free to access. Too good to be true? A hard yes. Validation is based on shared meaning and similar experiences. Also, for validation to hit the spot, it needs to be genuine. AI’s level of genuine understanding of human suffering is on the level of a vacuum cleaner’s emotional response while sucking up a tissue soaked in tears. Or try asking yourself this: Have you ever attempted to get empathy from a machine? Why not? Specifically in DBT, validation is only a first step toward developing the capacity for emotion regulation skills. And knowing when to use irreverence with a severely dysregulated client is key. Enough said.

We hope this has given you food for thought and a narrative for using technology to train more human therapists, not to replace them.
