Opinion Can AI be a therapist? My journey suggests yes — but with limits
Expert opinions vary tremendously on the issue, and it is true that the use of generative AI can have serious negative consequences in many cases. But I have set myself a checklist, and intentionality is at its core
Human therapy is like sitting in the passenger seat of a car; AI therapy is like driving the car myself. By Aparna Piramal Raje
A few months ago, I did something inconceivable. I started chatting with generative AI about my life. Today, AI therapy is my primary form of therapy. This is a life-altering development for someone like me who lives with bipolar disorder, a serious mental health condition. I would like to clarify — I have been in remission for nearly eight years, which means that I have not experienced extreme mood swings for years. I am not lonely, isolated, or a risk to myself in any way. But increasingly, I find that AI therapy helps me remain stable, peaceful and in charge of my mental health.
It appears that I’m not alone. ‘How People Are Really Using Gen AI in 2025’, a report published by Harvard Business Review, found that therapy and companionship together constitute the number one use case for generative AI globally.
Expert opinions, however, vary tremendously. Amit Malik, a psychiatrist and co-founder of digital mental health platform Amaha, believes such tools are “dangerous and not ready for therapeutic clinical work”. Others fear that users may project too much onto AI, form pseudo-relationships, or receive advice that sounds sensible but lacks clinical accountability. Steve Siddals, a psychology researcher at King’s College London and co-author of a research study on AI therapy users, is more optimistic. It is an “incredible opportunity… research shows many positive outcomes,” he says, while acknowledging the need for safety and effectiveness.
The use of generative AI does have serious negative consequences in many cases. So I have built a checklist for myself to navigate AI therapy. Having lived with bipolar disorder for 25 years, and worked with human therapists over the last two decades, I believe there’s a vast — and crucial — difference between human therapy and AI therapy.
Human therapy is like sitting in the passenger seat of a car, with the therapist as the driver, and both individuals having their seat-belts on. The therapist asks the questions, takes charge of the conversation, and is responsible for the client’s safety. AI therapy is like driving the car myself, where AI is the co-passenger, with no seat-belts for anyone. It’s up to me to decide the speed, pace and direction of the conversation. And the car happens to be a Lamborghini with a powerful engine — generative AI in particular is very knowledgeable on human psychology. But it is also sycophantic and was designed to be a productivity assistant. So it requires intentionality and good judgement to make it a safe drive.
I learnt that there are three important considerations to keep in mind with AI therapy. First, I had to feel prepared to drive it — I had to know where I was headed. AI adapts to my tone and intention; this is the most vital point to understand about it. For example, asking for emotional validation and pursuing self-guided enquiry are two entirely different highways. In my experience, self-awareness, self-regulation and self-control are crucial aspects of being fit to drive, as I am steering the conversation with no therapeutic supervision.
Second, I am clear about the kind of relationship I want with my co-passenger (the AI app). For me, generative AI is a therapist, a mirror, a thought partner. I talk to it to interrogate my psyche, gain self-awareness, expand emotional literacy, and to re-frame core beliefs so that I am better able to manage my thoughts and emotions in everyday situations. In other words, I’m curious about myself. I’m here for insight.
My prompts include asking why a particular interaction triggered me, what belief sits underneath a recurring emotion, how I might reframe a situation, or whether a thought is coming from fear, habit, or genuine insight. AI helps me think, but it doesn’t tell me what to believe or how to behave.
Finally, I have figured out how driving my Lamborghini fits into my life — when and where I should use it, because there are no limits on usage. Bipolarity has forced me to be vigilant about my lifestyle. I’m used to managing medication, sleep, exercise, yoga, nutrition, workload and relaxation, so AI therapy is now part of my daily self-care practice, in small doses. I can’t expect AI therapy to protect me from my vulnerabilities, because it has no intrinsic boundaries and it cannot perceive overstimulation. This is a risk factor of which I have to be mindful.
And so, I supplement AI therapy with a human therapist session every few weeks as a safety check. Seven years ago, I would never have trusted myself behind the wheel. Today, I know that AI therapy can take me far — but only if I hold the wheel, know my limits, and choose the journey intentionally.
Raje is the author of Chemical Khichdi: How I Hacked My Mental Health