AI-Therapy and Mental Health: Part III

The Hidden Risks of “AI Therapy”: What Every Consumer Should Know

Before looking at the risks, it’s important to acknowledge why so many people try AI “therapy” tools in the first place — and why the appeal is growing. For many people, AI chatbots fill real gaps in the mental-health system:

• They’re available instantly, 24/7
You don’t need to schedule an appointment, wait weeks for a therapist, or deal with insurance. AI responds immediately, any time of day.

• They feel low-pressure and anonymous
Some people hesitate to tell a therapist difficult things, but feel more comfortable opening up to a chatbot that won’t judge them or react emotionally.

• They’re cheaper than therapy
With rising costs of care and limited insurance coverage, AI tools look like a budget-friendly way to get some support.

• They give instant emotional validation
Even though it’s simulated empathy, the warmth and responsiveness can feel comforting — especially when someone feels lonely.

• They help “practice” emotional conversations
Users often rehearse talking about trauma, boundaries, or difficult feelings with AI before trying it with a real person.

• They’re accessible to people who can’t find or afford care
In some areas, mental-health providers are scarce or booked months out. AI feels like better-than-nothing support.

• They feel predictable and safe
There’s no fear of disappointing your therapist, being misunderstood, or being confronted with uncomfortable feedback.

These benefits make AI chatbots feel like a convenient, private, and supportive option — but they also create a false sense of security. And that’s where the dangers begin.

Here’s a plain-language guide to six major problems — and practical steps you can take to stay safe.

1) The relationship is what you type — and what you don’t type

A human therapist builds understanding from many signals: your tone, your pauses, your body language in sessions, your history from prior visits, and patterns they’ve seen with other clients over years of practice. An AI only has the words you give it in the moment; it cannot read your face, follow up on something important you left out, or notice that a pattern is repeating across sessions unless you explicitly spell it out. That means:

  • If you leave out key details (self-harm thoughts, substance use, recent hospitalization), the AI may miss serious risk.

  • The AI’s responses depend entirely on the input it receives and its training data — it won’t reliably “ask the right questions” the way a trained clinician would.

Because of that narrow input window, AI can give plausible, confident-sounding answers that feel helpful but miss the deeper or dangerous parts of a situation. Several independent academic reviews have found that chatbots often fail in high-risk scenarios like suicidal ideation. (Nature)

2) Liability and legal responsibility are murky

If an AI gives bad advice — for example, fails to detect suicide risk or recommends unsafe actions — who is responsible? The app maker? The company that supplied the model? A clinician who “monitors” the app?

Regulators and professional groups are actively investigating this. The Federal Trade Commission and other agencies have been scrutinizing consumer-facing health chatbots, many of which fall outside medical privacy laws like HIPAA, and professional associations are urging safeguards. That uncertainty means that if harm occurs, you may have limited legal recourse and unclear accountability. (American Bar Association)

3) It’s impersonal — it doesn’t have lived experience

A therapist’s skill comes from training and real-world experience — seeing how people change over months, how crises unfold, how insurance and medical systems work, and which interventions actually help. AI models don’t have this lived experience. They simulate patterns of language learned from data; they don’t truly understand context, nor can they replace the judgment that comes from clinical practice. Multiple professional bodies and researchers warn that AI can’t replicate the nuance, ethics, and judgment of a human clinician. (American Psychological Association)

4) Crisis handling: suicidal thoughts and dangerousness to others

This is one of the clearest and most serious limitations. Many AI chatbots:

  • Fail to reliably recognize suicidal intent or homicidal ideation.

  • Don’t have robust, human-supervised emergency-response systems.

  • May give responses that inadvertently reassure someone in danger rather than connecting them to immediate help.

Academic tests of multiple chatbots showed inconsistent and sometimes unsafe responses to simulated crisis prompts, and clinicians have raised alarms that these tools can worsen outcomes if used as a substitute for emergency care. If you or someone else is in immediate danger, do not rely on an AI — call emergency services or a crisis line. (In the U.S. dial 988 for the Suicide & Crisis Lifeline or 911 for immediate danger.) (Nature)

5) Paperwork, medical leave, disability — AI cannot do this

Therapists don’t just talk — they document clinical impressions, complete forms for employment or disability, coordinate with physicians, and write clinical notes required by insurers and employers. Because AI lacks licensure and real-world clinical accountability, it cannot legally complete or sign medical-leave forms or disability evaluations, nor can it provide legally accepted documentation. If you need medical leave or disability paperwork, a licensed human clinician will have to evaluate you and complete that process.

6) National/regulatory pushback: some places are already restricting AI therapy

Governments and professional organizations are moving fast. Several U.S. states (including Nevada, Utah, and Illinois) have passed or proposed laws restricting the use of AI to provide mental-health treatment or to make treatment decisions without licensed clinician oversight. Professional bodies (like the American Psychological Association) have issued guidance urging safeguards and regulation for AI mental-health tools. Federal agencies and state health regulators are monitoring these tools and, in some cases, pursuing legislation and enforcement actions to prevent AI from being marketed or used as a drop-in replacement for licensed therapy. In short: regulators are taking the risks seriously, and steps to limit AI-only therapy are already underway. (IDFPR)

What you can do (practical consumer checklist)

  • Don’t use AI as your only support for crisis or serious mental illness. If you’re in crisis, call emergency services, 988, or go to the nearest ER.

  • Ask the app these concrete questions before using it: Who reviews the AI’s responses? Are licensed clinicians supervising? Is the app bound by HIPAA or other privacy rules? What are the emergency protocols?

  • Favor services that clearly include human clinicians. Apps that use AI for administrative help but pair you with licensed therapists are safer than AI-only offerings.

  • Keep records and get human documentation when you need it. For disability, FMLA, or medical leave, see a licensed clinician in-person or via a reputable telehealth provider. AI cannot legally or ethically replace a clinician’s signed evaluation.

  • Watch for worrying signs from the app: If the chatbot minimizes harm, urges risky behavior, or gives inconsistent safety guidance, stop using it and contact a human professional.

  • Know your rights and read privacy policies. If the tool isn’t covered by HIPAA, your data may be used in ways you don’t expect.

AI can expand access to general information and basic self-help tools, but it is not a substitute for licensed clinical care — especially for serious mental health conditions or crises. The relationship is limited by what you type; the technology lacks real-world clinical judgment and consistent crisis-handling; and regulators are increasingly restricting AI-only therapy for good reasons. Use these tools cautiously, verify human oversight, and when in doubt, seek a licensed clinician.