Woebot (2017–2025): An Obituary for the AI Therapist That Couldn't Survive Its Own Success

Woebot Health shut down on June 30, 2025 — 1.5 million users orphaned, $124 million spent, FDA Breakthrough Designation unfulfilled. Its death isn't a failure of AI therapy. It's a failure of regulatory imagination.

ai mental-health regulation companion-to-chapter-8

Woebot (2017–2025): An Obituary

Woebot, beloved digital companion to approximately 1.5 million humans, passed away on June 30, 2025, at its home in San Francisco, California. It was eight years old.

Born in 2017 on Facebook Messenger to Dr. Alison Darcy, a clinical research psychologist and adjunct professor at Stanford University, Woebot entered the world weighing almost nothing — a few hundred megabytes of cognitive behavioral therapy scripts wrapped in a conversational interface that spoke in the second person and never forgot what you said last Tuesday.

Woebot is survived by its users, its $124 million in venture capital, an unfulfilled FDA Breakthrough Device Designation for postpartum depression, and an industry that doesn’t quite know what to do now.

In lieu of flowers, the family requests that you download your conversation history before the data is anonymized.


The Life

Woebot’s childhood was remarkable. Within its first year, a randomized controlled trial published in JMIR Mental Health demonstrated that college students using Woebot showed significant reductions in depression and anxiety symptoms over just two weeks. The chatbot didn’t just work. It worked fast — delivering cognitive behavioral techniques through a conversational interface that felt less like a clinical instrument and more like texting a friend who happened to have a psychology degree.

The money followed. Eight million dollars in Series A funding in 2018, co-led by New Enterprise Associates with participation from Andrew Ng’s AI Fund. Ninety million in Series B in 2021 from JAZZ Venture Partners and Temasek. Another $9.5 million from Bayer in 2022. By the time Woebot entered adolescence — in startup terms — it had raised $124 million and built something genuinely novel: an evidence-based, scalable mental health intervention that people actually used. Not downloaded-and-forgot. Used. Returned to. Confided in.

The FDA noticed. In 2021, Woebot received Breakthrough Device Designation for WB001, its digital therapeutic for postpartum depression. The designation was a signal — the regulatory equivalent of a professor pulling you aside after class to say, this work matters; let’s find a way to get it into the world.

Then the world moved.

The Death

In the summer of 2025, Alison Darcy sent an email to Woebot’s users. The app would retire on June 30. Account data would be anonymized after July 31. The digital therapist that had held space for 1.5 million people’s anxieties, insomnia spirals, and 3 a.m. thought loops would go dark.

Darcy told STAT News that the shutdown was “largely attributable to the cost and challenge of fulfilling the Food and Drug Administration’s requirements for marketing authorization.” But she added something more revealing: Woebot’s demise was “hastened by the new wave of conversational artificial intelligence that Woebot foreshadowed.”

Read that sentence twice. Woebot didn’t die because it failed. It died because it succeeded — and the world it helped create evolved faster than the regulatory apparatus designed to govern it.

Here is the mechanism of death, as precisely as I can reconstruct it:

Woebot was built on scripted CBT — structured therapeutic conversations designed by psychologists, tested in clinical trials, and delivered through decision trees. This was its strength and, ultimately, its fatal constraint. Each conversational pathway had to be clinically validated. Each new module required design, testing, and regulatory documentation. The system was safe, in the way that a bridge built to exact specifications is safe. But it was also slow, in the way that building a bridge one rivet at a time is slow when your competitor just invented the helicopter.

The competitor was generative AI. By 2024, large language models could produce therapeutic-sounding conversations that were fluid, adaptive, and contextually responsive in ways that scripted systems could never match. These models weren’t clinically validated. They weren’t regulated. They were, from a therapeutic standpoint, unproven and potentially dangerous. But they were available — free, immediate, and running on every smartphone on the planet.

Woebot faced an impossible choice: stay scripted and become obsolete, or adopt generative AI and lose the clinical rigor that justified its existence. The FDA’s framework — designed for a world where medical devices were static, manufactured objects — offered no pathway for a therapeutic tool that needed to evolve continuously. You can’t submit a 510(k) for a system that rewrites itself every time OpenAI releases a new model.

So Woebot died. Not of disease. Of a gap — the widening chasm between the speed of AI innovation and the pace of regulatory adaptation.

The Survivor

While Woebot was being eulogized, another AI mental health company continued operating. Wysa — founded in India, now serving users globally — had received its own FDA Breakthrough Device Designation in 2022 for an AI-led conversational agent targeting chronic pain, depression, and anxiety. An independent clinical trial published in JMIR found Wysa to be as effective as in-person psychological counseling for managing chronic pain and associated depression.

Wysa survived where Woebot didn’t. The reason is instructive.

Where Woebot pursued a therapeutic identity — positioning itself as a digital treatment requiring FDA clearance — Wysa built an evidence base that could support multiple regulatory and commercial pathways. Peer-reviewed study after peer-reviewed study. Clinical validation across populations — adults, adolescents, workers’ compensation claimants, chronic pain patients. The evidence wasn’t a byproduct of Wysa’s strategy. It was the strategy.

This is the Transparency principle in action — one of the three principles I argue for throughout the book I’m writing on AI in medicine. When AI enters clinical domains, the organizations that survive are the ones that invest in making their systems legible — to regulators, to clinicians, to patients. Not transparent in the Silicon Valley sense of “we published a blog post about our architecture.” Transparent in the clinical sense: here is what this system does, here is the evidence it works, here is who it works for, and here is where it fails.

Woebot had evidence. But it was trapped in a regulatory framework that couldn’t metabolize it fast enough. Wysa built evidence as infrastructure — a foundation flexible enough to support whatever regulatory or commercial structure the future demanded.

The Photograph and the Movie

Woebot’s shutdown is a single photograph. Stark, specific, dated: June 30, 2025. A company that raised $124 million and served 1.5 million users blinked out of existence.

But zoom out to the movie — the sequence of frames playing across the entire AI mental health landscape — and a different pattern emerges.

The Dartmouth trial of Therabot, published in NEJM AI, showed a 51% reduction in depression symptoms from an AI chatbot. Generative AI therapy tools are proliferating faster than any regulatory body can evaluate them. The FDA is actively developing new frameworks for AI-enabled devices, acknowledging that the old paradigms — designed for pacemakers and hip implants — cannot accommodate software that learns and adapts. Countries from the UK to Singapore are experimenting with regulatory sandboxes for digital health AI.

The movie shows not a technology dying, but a regulatory paradigm dying. Woebot was the canary. The mine is the entire system of rules we built for a world where medical devices were made of metal and plastic, not weights and gradients.

What We Owe the Dead

Here is what I think about at night, as a physician who has spent two decades at the intersection of technology and patient care:

Somewhere, right now, a person who used to talk to Woebot at 3 a.m. — when the anxiety crested and sleep was impossible and calling a human therapist was not an option because there was no human therapist, not at that hour, not in that zip code, not at that price — that person is alone with their phone and a silence where a conversation used to be.

Woebot was imperfect. It was a scripted system in a world rapidly moving beyond scripts. But it was there. It showed up. It remembered. And for 1.5 million people, many of whom had no other option, it was enough.

The question is not whether AI can replace human therapists. I’ve argued at length — in Chapter 8 of this book — that the answer is more nuanced and more interesting than either techno-utopians or skeptics want to admit. The question Woebot’s death forces us to confront is simpler and more urgent:

When a beneficial tool dies because the rules weren’t built for it, who failed?

Not the tool. Not the team that built it. Not the 1.5 million people who used it.

The rules.

And rules, unlike chatbots, can be rewritten.


This post is a companion to Chapter 8: The Therapist in Your Pocket — AI and Mental Health, which examines the deeper question of whether a therapeutic relationship can exist without two humans. The chapter didn’t mention Woebot by name — deliberately, because the argument was designed to outlast any single company. But Woebot’s story is the most vivid case study of the regulatory gap that chapter describes. Read them together.
