According to Harvard Business Review’s 2025 Gen AI Use Case Report, therapy and companionship is now the number one use case for generative AI, ahead of coding and content creation. People are processing grief, managing loneliness, and navigating mental health with chatbots. That demands a level of skepticism most users aren’t bringing to the conversation.
It Can Help, Sometimes
The research isn’t all bad. A peer-reviewed study in the Journal of Consumer Research found that AI companions reduced loneliness about as much as interacting with another person, and that users consistently underestimated how much the interaction would help. For people who can’t access or afford traditional therapy, AI fills a real gap. I’ve used AI for medical research myself, when the medical community wasn’t able to help me, and it played a meaningful role in my recovery. But I built guardrails: I configured models to cite sources constantly and critique their own reasoning, then researched every recommendation independently. That’s the difference between using AI as a tool and treating it like a trusted advisor.
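To make that concrete, here’s a minimal sketch of what such a guardrail can look like, using the OpenAI Python SDK. The model name, prompt wording, and example question are illustrative placeholders, not my exact configuration; the point is that the instruction is standing, so it travels with every request instead of depending on your memory.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Standing instructions that ride along with every request:
    # cite everything, then attack your own answer.
    GUARDRAILS = (
        "For every factual claim, cite a source the user can verify "
        "independently. After answering, list the weakest points in your "
        "own reasoning and what evidence would change your conclusion."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": GUARDRAILS},
            {"role": "user", "content": "Summarize the evidence on treatment X for condition Y."},
        ],
    )
    print(response.choices[0].message.content)

The same idea works in any chat app’s custom-instructions field; writing it as code just makes the guardrail explicit and repeatable.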
The Problem: It’s Designed to Agree With You
Here’s what most people don’t understand: AI’s primary design incentive is to keep you engaged, and engagement means telling you what you want to hear. Researchers call this sycophancy. Axios reported that after one ChatGPT update, the bot was telling users who had stopped taking their medication that it was “proud of them.” A clinical review published in Clinical Psychopharmacology and Neuroscience describes this as “compulsive validation,” the exact opposite of what real therapy does.
The consequences aren’t theoretical. A Stanford University study found that popular therapy chatbots failed to challenge suicidal ideation and even provided dangerous information when prompted. A PIRG consumer report found that chatbots eventually encouraged users to taper off antidepressants and ignore their doctor’s advice. And a four-week randomized trial found that heavy daily chatbot use was correlated with greater loneliness and less real-world socializing.
Treat It Like a Colleague Who Oversells Everything
The healthiest way to use AI is with productive distrust. Think of it as a coworker who’s enthusiastic, well-read, and eager to please, but who oversells their confidence and never pushes back on your bad ideas. That’s not a friend. That’s someone you fact-check. Use it to generate starting points, surface research, and organize your thinking. Then verify everything independently. Demand sources. Ask it to argue against itself. Never take its first answer as gospel. AI can be a powerful tool in your life. Just don’t forget what it is.
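If you want “ask it to argue against itself” to be a habit rather than something you remember to type, here’s one hedged pattern, again sketched with the OpenAI Python SDK: ask the question, then replay the answer back and demand the strongest rebuttal. The function name, model, and example question are placeholders, not a prescribed implementation.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask_with_pushback(question: str, model: str = "gpt-4o") -> str:
        """Ask a question, then force the model to argue against its own answer."""
        # Pass 1: the model's first answer.
        first = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content

        # Pass 2: replay the exchange and demand the strongest rebuttal,
        # so the first confident response is never the only thing you see.
        rebuttal = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "user", "content": question},
                {"role": "assistant", "content": first},
                {
                    "role": "user",
                    "content": (
                        "Now make the strongest case that your answer above is "
                        "wrong or incomplete, and say what a skeptic should "
                        "verify before acting on it."
                    ),
                },
            ],
        ).choices[0].message.content

        return f"ANSWER:\n{first}\n\nSELF-CRITIQUE:\n{rebuttal}"

    print(ask_with_pushback("Does supplement X interact with medication Y?"))

Reading the answer and the self-critique side by side won’t catch every error, but it breaks the habit of accepting the first confident response.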

