Your Clients Are Already Using AI
A candid conversation about the ethical challenges we’re facing and why therapist oversight isn’t optional anymore
Let me start with something that might make you uncomfortable: Your clients are probably already using AI for mental health support.
Not in the future. Not hypothetically. Right now—between your sessions, when they’re struggling at 2 AM, when they can’t reach you during a crisis. They’re typing their deepest fears into ChatGPT. They’re asking Character.AI for coping strategies. They’re treating Replika like a therapist.
And here’s the thing that should really concern us: None of these systems were designed with our ethical standards in mind.
The Reality We’re Not Talking About
The research is stark. According to recent studies, 47% of AI users treat these systems as a “personal therapist,” and 43% of mental health professionals have already experimented with AI tools in their practice. This isn’t some distant future scenario—this is happening now, with or without our professional guidance.
But before you panic or dismiss AI entirely, let’s have an honest conversation about what we’re actually dealing with.
The Ethical Minefield (And Why It Matters to Your Practice)
1. The Empathy Gap
Here’s what we know as therapists: the therapeutic alliance—that genuine human connection—is often the most powerful healing force in our work. AI, no matter how sophisticated, simply cannot replicate the lived experience of sitting with someone in their pain, reading their micro-expressions, feeling that intuitive shift when something lands.
But here’s the uncomfortable truth: Many clients find it easier to open up to AI precisely because it lacks judgment and the complexity of a human relationship. They’re getting some benefit, even if it’s not the same as what we provide.
2. Privacy: The Illusion of Confidentiality
When your client types their trauma history into a chatbot, where does that data go? Who has access? What happens if there’s a breach? These aren’t hypothetical concerns—they’re fundamental questions we haven’t answered as a field.
As therapists, we’re bound by HIPAA and ethical guidelines that took decades to develop. The AI systems your clients are using? Many have privacy policies that explicitly state they may use conversations for training data. Let that sink in.
3. The Complexity Problem
We all know that moment when a client reveals something that shifts the entire case—childhood trauma they’d minimized, suicidal ideation they’d been hiding, a symptom pattern that suggests something more serious. That clinical judgment, that ability to recognize when we’re out of our depth, when we need to refer, when we need to intervene immediately—that’s not something AI can reliably do.
Yet AI systems are regularly handling complex cases without any clinical supervision. Without any ability to recognize their own limitations.
4. Bias: The Invisible Harm
AI systems learn from data, and that data reflects all of our societal biases. Research shows that AI trained on biased datasets risks perpetuating discrimination, particularly against minority and vulnerable populations.
Think about your most marginalized clients—the ones who already struggle to find culturally competent care. Now imagine them receiving mental health guidance from a system that might inadvertently reinforce the very biases they’ve spent their lives fighting against.
5. The Regulation Vacuum
Here’s what makes all of this particularly dangerous: There are virtually no regulations governing AI therapy.
We spent years in graduate programs learning ethics, getting supervised, and passing licensing exams; an AI therapy system can launch tomorrow with minimal oversight. The technology is moving faster than our ability to create safeguards.
So What Do We Do?
This is where the conversation usually goes in one of two directions: complete rejection of AI (which ignores the reality that it’s already happening) or uncritical embrace (which ignores very real risks).
I want to propose a third path: therapist-led AI integration with rigorous ethical oversight.
The Non-Negotiables
Based on extensive research and ethical frameworks, here’s what AI therapy must include to be ethically viable:
1. Therapist-in-the-Loop Oversight
This isn’t optional. AI should enhance our work, not replace us. That means (see the sketch after this list):
- Professional supervision of AI interactions 
- Real-time monitoring for crisis situations 
- Human override capabilities when needed 
- Clinical review of AI recommendations 
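For colleagues who want to picture what "therapist-in-the-loop" means in software terms, here is a minimal sketch in Python. Every name in it (DraftReply, TherapistInTheLoop, the review function) is a hypothetical illustration, not any real product's API; the point is simply that nothing the AI drafts reaches a client without a clinician's disposition.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Optional, Tuple


class Disposition(Enum):
    APPROVED = "approved"   # clinician sends the AI draft as-is
    EDITED = "edited"       # clinician rewrites it before delivery
    BLOCKED = "blocked"     # clinician withholds it and intervenes directly


@dataclass
class DraftReply:
    client_id: str
    ai_text: str
    risk_flags: list = field(default_factory=list)  # e.g. ["self-harm language"]


class TherapistInTheLoop:
    """Every AI draft passes through a clinician before the client sees it."""

    def __init__(self, review: Callable[[DraftReply], Tuple[Disposition, Optional[str]]]):
        self.review = review  # supplied by the supervising clinician's workflow

    def deliver(self, draft: DraftReply) -> Optional[str]:
        disposition, edited_text = self.review(draft)
        if disposition is Disposition.BLOCKED:
            return None  # nothing is sent automatically; the clinician takes over
        if disposition is Disposition.EDITED and edited_text is not None:
            return edited_text
        return draft.ai_text
```

The design choice worth noticing is the BLOCKED path: when a clinician says no, the system's default is silence plus human contact, not a cleverer automated reply.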
2. Radical Transparency
Clients need to know:
- When they’re interacting with AI vs. a human 
- Exactly what the AI can and cannot do 
- Where their data goes and how it’s used 
- The limitations and risks of AI support 
3. Data Protection as Sacred
At a minimum, we need (one piece of this is sketched after the list):
- Healthcare-grade encryption and security 
- Clear informed consent processes 
- Client control over their data 
- Regular security audits 
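As one small illustration of what "healthcare-grade" handling involves, here is what encrypting a transcript at rest can look like using the widely available Python cryptography package. The function names are mine, and a real system would keep the key in a managed secrets store and layer on access controls and audit logging.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Illustration only: in production the key lives in a secrets manager,
# never alongside the data or in source code.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_transcript(plaintext: str) -> bytes:
    """Encrypt a session transcript before it is written to any database or disk."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def read_transcript(ciphertext: bytes) -> str:
    """Decrypt only for an authorized clinician, and log that access."""
    return cipher.decrypt(ciphertext).decode("utf-8")
```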
4. Bias Monitoring and Mitigation
Ongoing work to:
- Evaluate training data for bias 
- Test AI responses across diverse populations 
- Include diverse voices in AI development 
- Audit regularly for discriminatory patterns (a toy version of such an audit is sketched below) 
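What a first-pass audit might look like, in rough form: compare some concrete quality signal across groups and treat any gap as a finding to investigate. The data fields and the "offered_referral" signal below are invented for illustration; a real audit would use validated measures and far larger samples.

```python
from collections import defaultdict
from statistics import mean

def audit_by_group(interactions: list) -> dict:
    """
    Compare a simple quality signal (here: whether the AI offered a concrete
    referral) across self-reported demographic groups. This only shows the
    shape of the check, not a validated methodology.
    """
    by_group = defaultdict(list)
    for item in interactions:
        by_group[item["group"]].append(1 if item["offered_referral"] else 0)
    return {group: mean(flags) for group, flags in by_group.items()}

# Hypothetical logged interactions
sample = [
    {"group": "A", "offered_referral": True},
    {"group": "A", "offered_referral": True},
    {"group": "B", "offered_referral": False},
    {"group": "B", "offered_referral": True},
]
print(audit_by_group(sample))  # {'A': 1.0, 'B': 0.5} -> a gap worth investigating
```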
5. Crisis Detection and Human Escalation
Systems must (a minimal sketch follows this list):
- Identify risk indicators in real time 
- Immediately connect clients to human professionals 
- Have clear protocols for emergencies 
- Integrate with crisis resources 
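In code terms, the non-negotiable is that detection triggers people, not more automation. The sketch below uses a crude phrase list purely to show the shape of the logic; real systems rely on validated screening tools and trained classifiers with clinically set thresholds, and every name here is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Crude illustration only; real detection uses validated instruments, not a keyword list.
RISK_PHRASES = {"kill myself", "end my life", "no reason to live", "hurt myself"}

@dataclass
class Escalation:
    notify_on_call_clinician: bool
    show_crisis_resources: bool
    pause_ai_replies: bool

def assess_message(text: str) -> Optional[Escalation]:
    lowered = text.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        # The moment risk is flagged, a human is pulled in and the AI stops improvising.
        return Escalation(
            notify_on_call_clinician=True,
            show_crisis_resources=True,
            pause_ai_replies=True,
        )
    return None
```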
The Hybrid Model: Where We’re Headed
The future isn’t AI or human therapists—it’s AI with therapist oversight. Think of it as extending your presence and expertise, not replacing you.
Imagine:
- Your client has access to evidence-based coping skills between sessions 
- You receive alerts if they’re showing signs of crisis 
- The AI reinforces your treatment approach (because you trained it) 
- You maintain clinical oversight and decision-making authority 
Your Role in This Transformation
Here’s what I believe: The dangerous digital migration is happening whether we participate or not.
You have two choices:
- Stay on the sidelines while your clients use unregulated AI without any professional guidance 
- Step into this space and ensure that AI therapy happens with the clinical oversight and ethical standards our profession has spent decades developing 
The second option requires us to:
- Educate ourselves about AI capabilities and limitations 
- Advocate for regulation and professional standards 
- Demand therapist oversight in any AI therapy system 
- Put client wellbeing above our discomfort with technology 
The Bottom Line
Seven hundred million people are using AI for emotional support. Community mental health centers are closing. The healthcare system is failing our clients, and they’re desperately seeking alternatives.
We can’t stop the digital migration. But we can make it safe.
That means insisting on:
- Ethical frameworks that prioritize client wellbeing 
- Professional oversight, not replacement 
- Transparency, not manipulation 
- Evidence-based approaches, not experimental tech 
- Community healing, not increased isolation 
The question isn’t whether AI will be part of mental health care. The question is whether therapists will lead this transformation or watch it happen without us.
What You Can Do Now
- Educate yourself about AI tools your clients might be using 
- Have conversations with clients about their use of AI support 
- Advocate for regulation and professional standards in AI therapy 
- Demand therapist oversight in any AI mental health system 
- Stay informed about developments in ethical AI therapy 
The future of our profession depends on therapists who are willing to engage with these ethical challenges—not dismiss them, not uncritically embrace them, but thoughtfully lead the way forward.
This article draws on research from multiple sources including Stanford HAI, the National Board for Certified Counselors, the Journal of Medical Internet Research, and leading ethics researchers in AI and mental health. For a complete list of sources, please see the references section.



