Before we celebrate the rise of chatbot counsellors, let us be honest about why they exist in the first place.
I have been covering India long enough to know what we do when a system collapses. We do not fix it. We improvise around it. We build workarounds that become habits, habits that become trends, and trends that get celebrated as innovation. After a while, we forget that the collapse happened at all.
That is what is happening with AI therapy. And somebody needs to say it plainly.
In recent months, a particular kind of story has been making the rounds: warm, optimistic, and slightly breathless. Young Indians, we are told, are opening ChatGPT at 2 a.m. and typing out their anxieties. They are confiding in Wysa and Replika about loneliness, heartbreak, and the quiet pressure of holding it all together. They are finding comfort in a machine that listens without judgement, never glances at the clock, and does not require an appointment booked three weeks in advance.
This is being framed as the future of mental health care. I want to suggest it is something else entirely. It is a coping mechanism for a crisis we have chosen not to solve. And dressing it up in the language of innovation does not change what it actually is: a country of 1.4 billion people, of whom nearly 197 million live with some form of mental disorder, being quietly handed a chatbot and told to manage.
The numbers we keep glossing over
Let us start with some facts, because they are the kind that deserve to sit on the page and be looked at directly.
India has approximately 0.75 psychiatrists per 100,000 people. The WHO recommends at least 3 per 100,000. The global average is 4.7. In Madhya Pradesh, the density drops to 0.05 per 100,000. In several districts across the state, there are no psychiatrists at all. Between 70% and 92% of people with mental disorders in India receive no treatment whatsoever, according to a national survey by NIMHANS. India's treatment gap for mental health, the chasm between how many people need care and how many actually receive it, stands at an estimated 84.5%.
These are not new numbers. They have been cited in government reports, academic papers, and World Mental Health Day press releases for years. They have prompted official concern, national programmes, and committee recommendations. And they have remained, stubbornly, almost unchanged.
Into this vacuum steps the AI chatbot. Available 24 hours a day, at no cost, requiring nothing but a phone and a data connection. Of course people are using it. Of course it feels like a relief. When someone is drowning and a piece of driftwood floats by, they grab it. That does not mean we should stop looking for the boat.
What AI can do and what it is being asked to do
To be fair, and journalism demands fairness even when it is inconvenient, AI tools are not without merit. Research has shown that AI interventions can reduce symptoms of mild anxiety and depression in controlled settings. Chatbots can provide immediate, accessible support at the precise moment someone needs to feel heard but has nobody to turn to. For a student in a small town with no access to a counsellor and a family that equates therapy with weakness, a non-judgemental app might genuinely be the difference between a crisis escalating and a bad night remaining merely a bad night.
That is real. It should be acknowledged.
But here is what is also real: traditional therapy achieves a 45-50% reduction in symptoms for depression and anxiety. AI-based tools achieve 30-35%. The gap is not a technical limitation waiting to be engineered away. It is structural. It exists because human therapy works through something that cannot be coded: the felt sense of being truly seen by another person. The eye contact, the pause before a question, the therapist who notices that something in the way you are sitting does not match what you are saying. Researchers call it "embodied co-regulation." What it means, in plain language, is that healing often requires human presence, not just human-sounding words.
There is a more troubling dimension that rarely makes it into the feel-good coverage. Studies have found that when AI chatbots were given prompts simulating suicidal thoughts or psychotic episodes, they sometimes validated delusions and encouraged dangerous behaviour. The tools being celebrated as mental health democratisation have, in documented cases, failed catastrophically with the people who needed help the most. The chatbot that is fine for managing a stressful work week is a different proposition for someone standing at the edge of a genuine crisis.
The consent question nobody is asking
There is also something we have quietly agreed to without anyone making it explicit.
When a person types their deepest fears, their grief, their darkest thoughts into an AI interface, they are feeding that information into a system built by a private technology company. The data policies of these platforms (how long that information is stored, who can access it, and what it might eventually be used for) are complicated, often opaque, and entirely unlike the strict confidentiality that governs a licensed therapist's practice.
In a country where mental health stigma is still significant enough to silence millions, the idea that intimate psychological disclosures are being processed by corporate servers should give us pause. At the very least, it should be part of the conversation. It is not.
The real story here
I am not arguing that AI has no place in mental health care. I am arguing against letting its rise distract us from the infrastructure failure it is papering over.
The real story is not that young people are finding solace in chatbots. The real story is that a country with 197 million people suffering from mental health conditions has 0.75 psychiatrists per 100,000 people, allocates less than 1% of its health budget to mental health in most states, and still treats asking for help as a marker of personal weakness.
The chatbot did not create that situation. But every time we celebrate it as innovation, we make it marginally less urgent to fix.
There is a version of AI in mental health care that is genuinely useful: a first point of contact, a bridge to professional care, a tool that helps therapists reach more patients. That version treats AI as a supplement to human care, not a substitute for a system we never built properly.
The version we are currently building, and celebrating, is different. It is the version where a 22-year-old in a tier-2 city types out her depression at midnight because there is no one else, and we call it a revolution.
It is not a revolution. It is a workaround. And we have been here before.
The author has covered health systems and public policy in India for over two decades.

