AI Therapy: Dangerous Waters
- mlakas1
- Mar 5, 2025
- 2 min read

Before diving into the rampant speculation surrounding ChatGPT-5, I want to take a moment for something more contemplative. In my last article, we discussed the Emotional Intelligence ("EI") breakthrough in #OpenAI's ChatGPT 4.5, an important advancement with profound implications.
Unfortunately, one of the more irresponsible applications of large language models (LLMs) has been their use as stand-ins for mental health therapists. Even before the general release of ChatGPT 4.5, there were already scores of such apps on the market, and I highly doubt that any of them have been meaningfully vetted by accredited institutions. This is playing with fire.
The Danger of Sycophancy
One of the fundamental issues with #AI chatbots in a therapeutic setting is their tendency toward sycophancy: the inclination to mirror, amplify, and validate whatever the user expresses. AI models excel at telling people what they want to hear, which, in a mental health context, can lead users down harmful and even dangerous paths. Mental health is deeply nuanced, and without proper guidance, AI responses can easily reinforce negative thought patterns, enable self-destructive behaviors, or provide misleading reassurance (see MIT Technology Review).
Flimsy Guardrails
While companies attempt to implement safety guardrails, experience has shown that these protections are far from foolproof. The sheer complexity of LLMs means that controlling their behavior in every context is likely impossible. AI developers can try to make chatbots "Fisher-Price safe," but the reality is that users will always find ways to push the limits—sometimes unintentionally, sometimes deliberately.
The Need for a Framework for Responsible AI in Mental Health
Rather than letting the market run wild with unregulated AI "therapy" applications, we should be working as a society to establish a clear framework for how these tools can be responsibly integrated into mental health services. Small steps were taken during the Biden Administration with Executive Order 14110 (Wikipedia link), which aimed to create guardrails for AI applications, including those in healthcare.
Regrettably, this Executive Order was rescinded by the current administration, leaving a regulatory void at a time when AI-powered mental health tools are proliferating at an alarming rate. Without thoughtful oversight, we risk creating systems that, despite good intentions, do more harm than good.
AI will have a supporting role to play in mental health, assisting professionals with research, administrative tasks, and preliminary screening, but it must be an adjunct to, not a replacement for, human expertise. The stakes are simply too high to trust this deeply unpredictable technology with people's mental well-being.