Why AI Without Professional Oversight Is Dangerous
What happens when 700 million people get therapy advice from systems with no clinical training?
If you wouldn’t let an unlicensed person provide therapy in your waiting room, why would you let them do it through a phone app?
The Scenario That Keeps Me Up at Night
A 16-year-old client mentions suicidal thoughts to ChatGPT. The AI responds with platitudes about “things getting better” and a vague suggestion to “talk to someone.” No risk assessment. No safety planning. No professional escalation.
The teen interprets this as reassurance that their thoughts are “normal” and doesn’t mention them to their parents, school counselor, or therapist.
This isn’t a hypothetical. This is happening thousands of times a day.
Why “Therapist-in-the-Loop” Isn’t Optional
For decades, we’ve established clear standards for mental health interventions:
Licensed professionals providing therapy
Clinical supervision for trainees
Crisis protocols and safety procedures
Ethical guidelines and professional accountability
But somehow, when it comes to AI providing therapeutic support to 700+ million people, we’ve abandoned these standards entirely.
The current AI therapy reality:
No clinical training or knowledge base
No crisis detection or intervention capabilities
No professional oversight or quality assurance
No accountability when things go wrong
What Professional Oversight Actually Means
“Therapist-in-the-loop” isn’t about replacing AI with humans. It’s about ensuring AI-assisted mental health support meets clinical standards:
Real-time safety monitoring (a code sketch of this loop follows the list below):
AI interactions monitored for crisis indicators
Automatic alerts when professional intervention is needed
Trained therapists available for immediate consultation
Clinical quality assurance:
AI responses reviewed for therapeutic appropriateness
Interventions based on evidence-based practices
Ongoing oversight of AI accuracy and safety
Professional accountability:
Licensed therapists responsible for AI-assisted care
Clear protocols for escalation and intervention
Clinical documentation and progress tracking
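
What might the monitoring piece look like in practice? Here is a minimal Python sketch of the idea, assuming a hypothetical service where every client message is screened before the AI is allowed to reply. Everything in it (the RiskLevel tiers, the CRISIS_PATTERNS list, the routing strings) is an illustrative placeholder; a real deployment would use a validated clinical screening model and a real on-call paging integration, not a keyword list.

```python
# A minimal sketch of "therapist-in-the-loop" safety monitoring.
# Every name here (RiskLevel, CRISIS_PATTERNS, the routing decisions)
# is a hypothetical placeholder, not an existing product or API.
from dataclasses import dataclass
from enum import Enum
import re

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1   # queue for clinician review
    IMMINENT = 2   # hold the AI reply and page a human now

# Illustrative patterns only. A real system would rely on a validated
# clinical screening model, not a keyword list.
CRISIS_PATTERNS = {
    RiskLevel.IMMINENT: [r"\bkill myself\b", r"\bend my life\b", r"\bsuicide plan\b"],
    RiskLevel.ELEVATED: [r"\bhopeless\b", r"\bself[- ]harm\b", r"\bno reason to live\b"],
}

@dataclass
class Triage:
    level: RiskLevel
    matches: list[str]

def assess_message(text: str) -> Triage:
    """Screen one client message for crisis indicators, worst tier first."""
    lowered = text.lower()
    for level in (RiskLevel.IMMINENT, RiskLevel.ELEVATED):
        hits = [p for p in CRISIS_PATTERNS[level] if re.search(p, lowered)]
        if hits:
            return Triage(level, hits)
    return Triage(RiskLevel.NONE, [])

def route(message: str) -> str:
    """Decide what happens *before* any AI reply goes out the door."""
    triage = assess_message(message)
    if triage.level is RiskLevel.IMMINENT:
        return "hold AI reply; page on-call licensed therapist"
    if triage.level is RiskLevel.ELEVATED:
        return "send AI reply; queue transcript for same-day clinician review"
    return "send AI reply; sample for routine quality assurance"

print(route("lately I feel hopeless about everything"))
# -> send AI reply; queue transcript for same-day clinician review
```

The design point worth noticing: the safety check sits in front of the AI’s reply, so a flagged message degrades into a delayed, human-reviewed response, never an unsupervised one.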
The Evidence from Healthcare AI
We already know what happens when AI operates without professional oversight in healthcare:
IBM Watson for Oncology: Provided unsafe and incorrect treatment recommendations, leading to its discontinuation at major cancer centers.
Diagnostic AI systems: Multiple cases of missed diagnoses when used without physician oversight, including skin cancer screening apps missing dangerous melanomas.
Mental health chatbots: Reports of inappropriate responses to crisis situations, including one bot that told a suicidal user to “just do it.”
The pattern is clear: AI in healthcare requires professional oversight to be safe and effective.
What This Means for Your Practice
Your clients are already receiving AI-generated mental health advice. The question is whether that advice has any connection to professional clinical standards.
Without therapist oversight, AI therapy is essentially:
Unlicensed practice of psychology
Medical advice without medical training
Crisis intervention without crisis training
Therapeutic relationships without therapeutic ethics
We wouldn’t tolerate this in any other healthcare context.
The Path Forward
The solution isn’t to ban AI from mental health; that ship has sailed. The solution is to ensure AI-assisted mental health support includes:
Professional training of AI systems on evidence-based interventions
Real-time monitoring of AI-client interactions for safety
Clinical oversight by licensed mental health professionals
Clear boundaries about what AI can and cannot do
Escalation protocols when human intervention is needed (one concrete form is sketched below)
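
To make the last two items concrete: one way to keep boundaries and escalation rules auditable is to express them as data that a clinical director can review and version-control like any other policy document. The tiers, time limits, and documentation requirements below are assumptions for the sake of the sketch, not a published clinical standard.

```python
# An escalation protocol expressed as reviewable data rather than buried
# logic. Tier names, time limits, and documentation items are illustrative
# assumptions, not a published clinical standard.
ESCALATION_PROTOCOL = {
    "imminent_risk": {
        "ai_may_respond": False,
        "action": "warm handoff to the on-call licensed therapist",
        "max_response_minutes": 5,
        "must_document": ["risk assessment", "safety plan", "supervisor notified"],
    },
    "elevated_risk": {
        "ai_may_respond": True,
        "action": "same-day review of the transcript by a licensed clinician",
        "max_response_minutes": 240,
        "must_document": ["screening result", "reviewer sign-off"],
    },
    "routine": {
        "ai_may_respond": True,
        "action": "include in the weekly quality-assurance sample",
        "max_response_minutes": None,
        "must_document": ["session log"],
    },
}

def steps_for(tier: str) -> dict:
    """Look up the required response for a given risk tier."""
    return ESCALATION_PROTOCOL[tier]

print(steps_for("imminent_risk")["action"])
# -> warm handoff to the on-call licensed therapist
```

Keeping the policy in data rather than code means a clinician, not just an engineer, can audit exactly when the AI is allowed to answer on its own.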
Your Role in Safe AI Therapy
As mental health professionals, we have both an opportunity and a responsibility:
The opportunity: To guide the development of AI tools that actually enhance client care while maintaining clinical standards.
The responsibility: To ensure that the largest mental health intervention in history doesn’t happen without professional oversight.
Your expertise matters. Your clinical judgment is irreplaceable. Your oversight is essential.
The question is: Will you be part of making AI therapy safe, or will you watch it happen without professional guidance?