
The Uneasy Truth About AI in Mental Healthcare

Despite the lack of meaningful evidence, these products keep being sold. Promised. Funded. Because the real goal is not healing or liberation. It’s productivity.
Opinion | Brittainy Lindsey, LMHC

It makes perfect sense to feel uneasy about AI, especially in mental health.

There’s a growing push, loud and well-funded, to replace not just tasks but entire professions with artificial intelligence.

High-profile voices like Bill Gates are already saying it out loud: clinicians and teachers will be replaced in less than 10 years.


This prediction is being marketed to us as progress. As innovation. As inevitable. But this isn’t progress. It’s erasure.

Not just of jobs, but of people: those grounded in anti-oppressive practice, cultural humility, and real relationships rooted in wellness and community care.

People whose presence, attunement, and humanity are the work.

To replace that with algorithms isn’t just short-sighted, it’s a fundamental misunderstanding of what it means to help, to heal, or to hold space for one another.

Healing work is not the exchange or download of data.

It’s not about inputting symptoms and outputting a score, a diagnosis, or a treatment plan. Yes, data exchange can be helpful.

For some, especially in moments of overwhelm or disconnection, tracking emotions or accessing structured tools may offer comfort, insight, or a sense of control.

That’s valid. But we have to name the risk and the elephant in the room: swinging open the door to AI-based “care” for marginal-at-best benefits, while simultaneously continuing to defund or devalue human relational care, is not just misguided, it’s dangerous.

Yet we’re being conditioned to accept that this deeply human work can now be done by a chatbot or other tools.

That a screen can hold what only another living person can. That reducing the labor force to machines is an acceptable cost in the name of profit and scale.

That somehow it’s “better” for us.

But the truth is: in the past five to eight years of rapid tech development in mental health, there is no evidence of broad, sustained, systemic improvement in well-being for any population using digital mental health tools alone.

The research consistently shows that:

  • Any improvements are short-term and superficial.
  • Positive outcomes depend heavily on the presence of human support (Baumel et al., 2019).
  • These tools are far less effective for people with high distress or complex trauma (Lattie et al., 2019).
  • The people who could benefit most, those facing structural oppression, poverty, or crisis, are the least likely to have access (Graham et al., 2020).

Despite the lack of meaningful evidence, these products keep being sold. Promised. Funded. Venture capitalists love them. Why?

Because the real goal is not healing or liberation. It’s productivity.

That’s been the core agenda of Western, colonized mental healthcare from the very beginning.

It’s just taking its latest form now: sleek, scalable, data-driven, and quietly extractive.

These tools are designed, in large part, to reduce absenteeism, lower healthcare costs, increase employee engagement, and manage emotional labor more “efficiently.”

This is not healthcare. It’s workforce management.

Mental health is being reframed as a productivity issue rather than a structural, existential, or societal one.

We are being given tools to cope with toxic systems, not tools to transform them.

The message is clear: you don’t need rest or justice or community, you need a mood tracker and a guided meditation so you can get back to work.

All of this is being driven by an ideology that values optimization over presence, convenience over connection, and profit over people.

It repackages “wellness” in ways that serve systems, not individuals.

And it creates a dangerous illusion: that what we need is a better app, when what we actually need is to invest our money, intelligence, and imagination into building a more humane world.

One that serves, uplifts, and liberates even our most disenfranchised people and communities.

So if you’re feeling unsettled, skeptical, even angry about the rise of AI in spaces like mental health, good.

You should.

That means you’re paying attention.

Because here’s the deeper truth that keeps me up at night: we are approaching a future where human-to-human therapies, and genuinely holistic, liberatory innovation, increasingly either become a luxury product for the privileged or disappear altogether, replaced by cheaper labor and scalable technologies.

More and more clinicians are leaving traditional practice settings.

They’re beyond burned out. Underpaid. Morally compromised by the endless demands of managed care.

Those who can afford to open private-pay practices show up on social media and other spaces to reclaim their autonomy, preserve their ethical integrity, and practice on their own terms.

But the result is a system where the only people consistently accessing relational care are those with the financial privilege to pay for it, often already positioned by race, class, or social capital to navigate wellness spaces more easily.

And the clinicians providing that care often share in that same privilege, sometimes without fully seeing how these shifts may unintentionally reinforce a wider divide.

Many are doing what they need to do to survive, to heal, to build lives of meaning and sustainability, and that matters.

But we can hold that truth alongside another: that even well-intentioned decisions, when shaped by individualistic ideals and a scarcity-driven system, can deepen inequities we didn’t create but are still part of.

AI is positioned to further deepen those inequities.

AI in mental health is already helping some people.

It may increase access to certain tools, offer support in moments of disconnection, or fill short-term gaps where human care is unavailable.

That’s not nothing. But we have to zoom out and see the forest, not just the trees.

The larger trend isn’t one of expanded equity, it’s one of deepening divide.

Mental healthcare has never been a highly profitable space.

It has historically depended on community, state, and federal resources to support its work, and those supports have been neglected, underfunded, or deliberately dismantled for decades.

The pivot to AI doesn’t repair this abandonment. It simply builds a shinier structure on top of it.

And while the tech itself may not claim to solve everything, the money, attention, and institutional support it attracts, often from oligarchs, venture capitalists, and corporate healthcare players, tell us everything about where the true investment is going.

Not into community-rooted healing. Not into liberation-focused care. Not into true systems transformation.

But into the next scalable product that promises efficiency over justice, and optimization over humanity.

Insurance networks are shrinking. Reimbursements are stagnant or declining.

Fewer therapists are willing, or able, to stay in the system. Frankly, it’s hard even to be on the periphery of it all.

And this is exactly the opportunity mental health tech platforms, insurer conglomerates, and others with deep pockets have been waiting for.

It’s not hard to envision what happens next:

  • Health plans begin to cover only AI-driven or digital mental health services.
  • Human therapists become out-of-network exceptions.
  • Therapy becomes something you save for, like a luxury spa treatment or a coaching package.
  • Entire populations, especially BIPOC, disabled, and working-class people, are routed into chatbot care and behavior-tracking apps.
  • Wellness hubs replace community mental health centers, offering branded kiosks with guided meditations and AI check-ins instead of human care.
  • Therapists are increasingly employed behind the scenes to train AI and monitor user data, while losing opportunities to practice relationally.
  • Insurance companies and employers begin requiring mental wellness compliance through biometric data, journaling metrics, or stress dashboards to access care or maintain employment.
  • Predictive mental health algorithms are used to flag, surveil, or institutionalize individuals deemed too risky, particularly in marginalized communities.
  • Public trust shifts toward therapy influencers and AI tools that align with marketable, scalable language, while community-rooted clinicians are rendered invisible.

We are building a two-tiered system in real time: relational care for the few, scalable digital containment for the many.

And once this shift becomes the norm, undoing it will be incredibly difficult.

The decisions being made right now, by investors, insurers, and policymakers, are laying the foundation for a future where mental healthcare is increasingly defined by data points, not relationships.

A future where the richest individuals and institutions co-opt mental healthcare as a series of measured outcomes, a checklist of indications that we are successfully coping with a broken world, society, and systems, to better serve the smooth functioning of capitalism.

A future where true connection is reframed as inefficient, sentimental, or unnecessary, a luxury rather than a basic human need.

We don’t just lose jobs in this future. We lose the very spaces where healing, reflection, and transformation happen.

We lose the sacred act of being seen, heard, and held by another human being.

We don’t need more tech to help us tolerate systems that are harming us.

We need new systems entirely, built on justice, interconnection, and care. And if we can leverage AI to get us there?

Great. So, who is doing that?

Because most of what we are seeing is not liberatory. It is extractive.

It is not about freedom. It is about control.

And it is not about care. It is about profit.

So yes, be skeptical. Be uneasy. Ask better questions. Refuse to accept scalable solutions that leave the most vulnerable behind.

Because what we need is not a faster way to optimize the human mind. We need a deeper commitment to the dignity of what it means to be human in the real world.

If you’re in a position to shape policy, invest in solutions, or advocate within your company, start by asking: Where is the human in this system?

References

Baumel, A., Muench, F., Edan, S., & Kane, J. M. (2019). Objective user engagement with mental health apps: Systematic search and panel-based usage analysis. Journal of Medical Internet Research, 21(9), e14567.

Lattie, E. G., Adkins, E. C., Winquist, N., Stiles-Shields, C., Wafford, Q. E., & Graham, A. K. (2019). Digital mental health interventions for depression, anxiety, and enhancement of psychological well-being among college students: Systematic review. Journal of Medical Internet Research, 21(7), e12869.

Graham, S., Depp, C., Lee, E. E., Nebeker, C., Tu, X., Kim, H. C., & Jeste, D. V. (2020). Artificial intelligence for mental health and mental illnesses: An overview. Current Psychiatry Reports, 22, Article 44.

Author

  • Brittainy Lindsey (She/Her) | foorum Insider

    Brittainy Lindsey, LMHC, is a writer, licensed mental health counselor, and advocate for systemic change in healthcare. With over 15 years of experience in mental health, spanning direct practice in community clinics, rural school districts, and private practice as well as roles in major payers, clinical operations, consulting, quality improvement, and disability evaluation, she creates space for clinicians to unlearn systemically driven burnout culture and reconnect with their own pain, humanity, and healing. She writes at Healing from Healthcare, has an active LinkedIn community, and facilitates peer support sessions for providers navigating the harms of the system. Her work has been featured in publications such as ProPublica and MindSite News.


Comments
  1. I really like this article and agree that big money and corporate control over AI within the human wellness/counseling space is a bad idea and can lead to suboptimal results for the people involved. I believe there should be a distinction between AI as a whole and the type of corporate control and development of AI in this space. There are pathways for AI to be extremely helpful in the counseling space without it being the type of AI controlled by the big money. It doesn’t have to be all or nothing.


