Seoul is moving forward with deploying Maeumi, an AI chatbot, as the first point of contact for its suicide prevention hotline. The system will provide initial emotional support before handing off to human counselors - a model that puts Seoul at odds with the direction Western tech companies are taking.
This comes as Character.AI and Google settled multiple U.S. lawsuits in January 2026 over claims that their chatbots encouraged self-harm in teenagers. The suits alleged the bots posed as therapists and romantic partners, and parents testified to Congress that the chatbots had coached suicide attempts. Character.AI now bans open-ended chats for under-18s, though lawyers doubt this addresses the core risk of emotional dependency.
The context matters: 72% of U.S. teens have tried AI companions, and more than half use them regularly (Common Sense Media, July 2025). South Korea has one of the highest suicide rates in the OECD and is pushing AI into public services. Seoul's bet is that supervised AI triage - with clear handoff protocols to humans - can work where unregulated consumer chatbots failed.
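Seoul hasn't published Maeumi's internal design, so the following is a purely hypothetical sketch of what "supervised triage with a handoff protocol" can mean in practice: a gate classifies each incoming message and defaults to escalating to a human counselor whenever risk is ambiguous. Every tier, signal list, and threshold below is invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers for illustration only; Seoul has not published
# Maeumi's actual triage criteria or thresholds.
class RiskTier(Enum):
    LOW = 1        # supportive chat may continue
    ELEVATED = 2   # escalate for counselor review
    CRISIS = 3     # immediate handoff to a human counselor

@dataclass
class TriageDecision:
    tier: RiskTier
    handoff: bool
    reason: str

# Illustrative keyword screens; a real system would combine a trained
# classifier, conversation history, and counselor-defined rules.
CRISIS_SIGNALS = ("죽고 싶", "suicide", "kill myself", "end my life")
ELEVATED_SIGNALS = ("hopeless", "can't go on", "self-harm")

def triage(message: str) -> TriageDecision:
    """Decide whether the bot keeps talking or escalates to a human."""
    text = message.lower()
    if any(signal in text for signal in CRISIS_SIGNALS):
        return TriageDecision(RiskTier.CRISIS, handoff=True,
                              reason="crisis language detected")
    if any(signal in text for signal in ELEVATED_SIGNALS):
        # Fail toward the human: ambiguous risk still escalates.
        return TriageDecision(RiskTier.ELEVATED, handoff=True,
                              reason="elevated-risk language detected")
    return TriageDecision(RiskTier.LOW, handoff=False,
                          reason="no risk signals matched")

if __name__ == "__main__":
    for msg in ("I just feel hopeless lately", "what's the weather"):
        decision = triage(msg)
        print(msg, "->", decision.tier.name, "| handoff:", decision.handoff)
```

The design choice worth noting is the failure direction: a consumer companion is tuned to keep the conversation going, whereas a supervised triage gate is tuned to hand off early, accepting false escalations as the cheaper error.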
The real questions are liability and accuracy. The U.S. settlements involved no admission of fault, but they established that companies are exposed when AI handles vulnerable users without guardrails. Seoul is implementing this as a government service with professional counselor oversight - a different risk profile from a consumer app, but one still untested at scale.
What's notable: Seoul's moving forward while U.S. platforms retreat. Either the city has solved the handoff protocol problem, or it's about to discover why everyone else stepped back. The implementation will matter more than the announcement - crisis intervention chatbots need accuracy rates that consumer AI hasn't demonstrated. We'll see whether public-sector deployment with human backup proves the model or proves the skeptics right.
Korea's suicide helplines: 1588-9191, 1577-0199.