
What Are We Really Building? AI, Capitalism, and the Future of Mental Healthcare

When we talk about AI in mental healthcare, we need more than innovation; we need honesty. This piece explores how systemic forces, not just technological ones, are shaping the future of care delivery in mental health.
Visual Representation of Human Connection with A Human Therapist in Colorful Illustrative Style | By Lu Creates

Part One: The Questions We’re Not Asking

I’ve been in mental health long enough to know that change is constant and that buzzwords like “innovation” and “efficiency” often signal something else entirely.

So when people talk about AI in mental healthcare, my first instinct isn’t panic. It’s curiosity. What are folks really building? Who is it for? And who is not benefiting, or is even being harmed, in the process?


I’m not here to shame individual clinicians who are consulting with tech firms, partnering on platforms, or using AI to streamline parts of their practice.

Many of us are doing what we can to survive broken systems. In fact, I trust that most licensed professionals, especially those with lived experience and deep relational ethics, are guided by intuition, integrity, and a strong harm-reduction lens.

Customized, individual applications by licensed professionals are not my biggest concern.

My concern is with the larger machine that continues to cause harm: the intersection of venture capital, the healthcare industrial complex, and AI development that is accelerating largely without sufficient ethical scrutiny, and certainly without any attempt at clinician consensus.

It’s telling that in the midst of a global mental health workforce shortage, we’re not investing in fair pay initiatives, scholarships to invigorate a skilled mental health workforce, or insurance panel access for graduate-level or associate-level clinicians.

We’re bypassing skilled human relational work entirely—shifting attention to apps, chatbots, health coaches, and psychoeducation platforms.

This isn’t a dig at those models, but it is a reflection of how labor gets devalued when profit is the driving force and cheapening labor costs is the strategy, both core tenets of capitalism.

Here’s the uncomfortable truth: highly skilled therapeutic labor doesn’t scale well.

Therapy doesn’t scale well across populations, and it barely scales at the individual level.

Most clinicians working in practices, agencies, or large healthcare systems are forced to see six, seven, sometimes eight or more clients per day just to meet productivity standards.

As you might imagine, this isn’t sustainable. It’s exploitative. And it’s also evidence of how even human-centered care has been funneled into assembly-line methodologies of labor and warped by the market logic of the industrial healthcare machine.

So when the tech industry says, “AI shouldn’t replace therapists,” I can’t help but hear it as a deflection.

Lots of things in healthcare “shouldn’t” happen. But they do because they serve the logic of speed, scale, and reduced labor costs.

If we’re honest, the value proposition of AI isn’t that it’s “better” than what human relational care can provide.

It’s that it’s available 24/7 and doesn’t require a salary.

It doesn’t get tired.

It doesn’t hesitate about ethical quandaries.

It doesn’t ask for better supervision.

It doesn’t ask for pay increases.

And it definitely doesn’t unionize.

So, let’s not pretend this is purely about access or innovation.

Let’s name it for what it is: a capital-driven move to bypass labor costs under the guise of solving a care shortage.

Part Two: The “It’s Better Than Nothing” Trope

How do we push back on the “it’s better than nothing” narrative without dismissing the ways AI is already helping people?

If we are serious about integrating technology into mental health in an ethical way, we need to start by respecting the work itself.

If your tool is gaining traction, profit, or value simply because it’s the only thing available in an intentionally barren and destitute market like mental healthcare, then you have a responsibility toward restorative justice.

Do builders see it?

Do they fold it into business plans, roadmaps, and forecasting?

Respecting the work also means not calling AI tools “therapists” or “counselors” or any other licensed professional title.

These titles are earned through years of education, training, supervision, and state-regulated oversight.

They are not interchangeable with an LLM, no matter how advanced.

To conflate the two is not just a marketing issue; it’s a boundary violation.

It flattens the complex, deeply relational nature of care into a set of probabilistic outputs.

It doesn’t hurt the AI—probabilistic outputs may still offer a level of assistance that is valid and useful to some end users.

But calling these assistive tools “therapy” or “counseling” hurts the people doing human-to-human work and prevents us from creating space for the healing that can only emerge through relationships.

And it undermines the very professions the tech world claims to “augment.”

We need clinicians, not just those at the top of the academic ladder, but especially Master’s-level clinicians, clinicians of color, and those from historically excluded backgrounds, to help shape these conversations.

We need global, culturally aware voices to weigh in on how healing is actually resourced and delivered in the real world, not just what gets labeled “evidence-based” in Western paradigms.

We also need non-clinicians, ethicists, patients, and community members to help us reimagine how technology can serve human care without co-opting it.

But most importantly, we need to hold these conversations in full view of the systems we’re inside.

Because AI in mental health isn’t just about technology.

It’s about capitalism.
It’s about labor.
It’s about which kinds of care get invested in
and which get neglected.

So the question is not whether AI belongs in mental healthcare.

The question is: can we build it in ways that restore what our systems have broken? And if we can’t do that, should we be building it at all?

Because the systems we build today decide who gets to heal tomorrow.

Author

  • Brittainy Lindsey (She/Her) | foorum Insider

    Brittainy Lindsey, LMHC, is a writer, licensed mental health counselor, and advocate for systemic change in healthcare. With over 15 years of experience in mental health—from direct practice in community clinics, rural school districts, and private practice to roles in major payers, clinical operations, consulting, quality improvement, and disability evaluation—she creates space for clinicians to unlearn systemically-driven burnout culture and reconnect with their own pain, humanity and healing. She writes at Healing from Healthcare, has an active LinkedIn community and facilitates peer support sessions for providers navigating the harms of the system. Her work has been featured in publications such as ProPublica and MindSite News.



