In 2025, artificial intelligence is no longer a fringe experiment in therapy rooms. It’s seated at the digital desk – recording, transcribing, and even summarizing the most intimate moments of therapeutic care.
AI scribe tools like Augmedix, Suki, DeepScribe, Nabla, and Abridge have become increasingly embedded in mental healthcare workflows, pitched as a cure-all for documentation burden and burnout.
Some are integrated into electronic health record (EHR) platforms, while others are embedded in the infrastructure of tech-first mental health companies.
The appeal is easy to grasp: these tools promise to save mental health professionals hours of note-taking time, reduce administrative overload, and provide consistency in treatment summaries.
Major players like Headspace and Grow Therapy are already piloting AI-assisted documentation at scale, while systems like Kaiser Permanente have partnered with vendors to test AI scribes in their behavioral health departments.
The business model is robust.
AI documentation companies have attracted massive investments: Abridge closed a $150M Series C round in 2024, while Suki, Augmedix, and Nabla all boast multimillion-dollar VC backing. Industry estimates project the AI scribe market in healthcare to exceed $5 billion by 2030, with a compound annual growth rate (CAGR) between 15% and 20%.
Mental/behavioral health, long considered the “underdog of medical innovation,” is now a targeted growth sector for these tools.
But who is actually benefiting?
It’s not mental health professionals. Despite the promise of more time and less burnout, clinicians largely aren’t seeing increases in pay, caseload flexibility, or improved autonomy.
The economic surplus generated by these tools flows to investors, founders, payers, and health tech platforms.
Mental health clinicians, meanwhile, are being asked to onboard new tools, adopt new workflows, and absorb new liabilities.
Nor is it necessarily clients who are benefiting. While some may be unbothered or unaware that AI is assisting their care provider, others express discomfort.
The therapeutic frame – traditionally considered a sacred space of confidentiality and attunement – now often comes with a silent third party.
Questions abound: Is the conversation still private? What happens to the data? How do you know for sure? Was this disclosed clearly enough?
For clients already hesitant to trust, the presence of AI can feel less like support and more like surveillance – a step backwards rather than forward.
The truth is this: the AI scribe industry should not exist.
This entire market is built on the back of an overburdened, convoluted system of documentation requirements.
Therapists and counselors must write detailed notes to satisfy the moving targets of insurers, justify medical necessity, and mitigate legal risk, even when the content of those notes adds little clinical value to the work.
Rather than interrogate whether these standards are necessary, ethical or sustainable, the industry has built a workaround: automation. In doing so, we’ve created a second system – and a profitable one – to support the inefficiencies of the first.
This is capitalism in its purest form: don’t fix the root problem. Monetize the adaptation. Sell the solution to the dysfunction we collectively refuse to address.
Therapists, already burned out, might accept these tools as a lifeline. And understandably so. The point is not to criticize individual attempts at survival. But make no mistake—this is not liberation. And systems that require people to use invasive tools to survive are not systems built for healing.
What’s more alarming is how quietly this is being normalized. In many large systems, clients will soon have no real choice about whether their therapist uses AI. Informed consent will become a check box in a digital intake form. Terms of service will bury the truth in legalese. Mental health care will slip further into automated surveillance, branded as innovation.
Therapists who resist this shift are labeled anti-tech, outdated, anti-progress, unintelligent, or prickly.
But what AI tech doesn’t want to contend with is that our resistance isn’t about fearing innovation – we just don’t see the point in decorating a bicycle with streamers and horns when the tires are always flat.
No amount of shiny add-ons can make a broken system ethical, just, or truly effective.
Many clinicians have seen variations of this playbook too many times to count.
So yes, AI scribes may ease a therapist’s day.
But zooming out, they represent a deeper pattern: an industry that refuses to simplify, that demands ever more from its workforce, and that extracts value from every corner of the clinical hour – while returning none of the profit to the clinicians it’s extracting from.
This isn’t progress. It’s just more exploitation dressed as efficiency.
And until we ask different questions – what purpose the documentation really serves, what is actually needed, who gets to define that purpose, and why care must always be narrowly quantified to be seen as real – we will continue to build whole industries on top of mental healthcare delivery whose foundations are explicitly built on harm.
In this way, AI tools and scribes are not the future of therapy. Not a future I will be part of as a clinician, anyway.
They’re a symptom of a system too sick to pause, reflect, pivot or reimagine.
And mental health professionals – those who still dare to ask better questions – are not anti-innovation; they just have plenty of valid reasons to be anti-tech.
They are, in fact, the only ones still imagining that healing should feel like something more than optimized labor.
If we truly care about innovating this field, we must stop solving for scale and start solving for trust.