As mental health needs continue to rise across the country, a new study published in NEJM AI claims that artificial intelligence could play a big role in helping people with depression.
The study showed that people who used an AI chatbot experienced fewer symptoms of depression after eight weeks compared to people who were placed on a waitlist for treatment.
At first glance, it might seem like good news.
But experts are raising serious concerns about how the study was done, what the results really mean, and whether AI therapy could end up doing more harm than good.
What the Study Did
The study looked at 181 adults diagnosed with moderate to severe depression.
Half the group used a mental health chatbot trained to offer advice using techniques like cognitive behavioral therapy (CBT).
The other half received no treatment at all; they were placed on a waitlist.
After eight weeks, people who used the chatbot reported lower levels of depression.
The researchers say this points to a way AI could help ease the shortage of therapists and make care more widely available.
The Real Questions
The study compared AI therapy to no therapy at all. That’s not a fair comparison.
A real test of how helpful AI is would compare it to human therapists, not to people receiving no help at all.
Experts also questioned whether it’s ethical to withhold treatment from people suffering from depression just to test a new technology.
People in pain shouldn’t be part of an experiment that offers no real support. That goes against everything we believe in when it comes to healing.
A Quick Fix or Another Dependency?
There’s also a fear that AI could become another quick fix, one that doesn’t address the deeper problems behind the mental health crisis.
In the past, new technologies like chat rooms and social media were supposed to help people feel more connected. Instead, research has shown that more people than ever feel isolated, anxious, and alone.
We’re repeating the same mistake: trying to replace human connection with machines, when what people actually need is real, supportive relationships.
Bigger System Problems
This study also opens up big questions about fairness. In the U.S., it can take years to become a licensed therapist, with high costs for training and education.
But AI programs aren’t held to the same rules, even though they’re trying to do the same job.
If a robot can offer mental health advice without a license, why can’t a human? Are we creating a double standard?
There’s also concern that the use of AI in therapy is more about saving money than helping people thrive.
We need to ask: are we just trying to make people well enough to work and pay bills? Or do we actually want to help people grow, heal, and feel whole?
What Should Come Next?
Many agree that technology has a place in mental health, especially when there aren’t enough providers.
But AI should be used carefully, with clear rules, strong evidence, and patient safety at the center.
Instead of replacing humans, AI could support them by offering reminders, notes, or follow-up messages that make care better, not colder.
We don’t just need faster care; we need care that listens. That sees us. That grows with us.
You can’t code that into a machine.