### Character.AI Implements Major Policy Change to Protect Minors from AI Chatbots

In a significant move to enhance child safety, Character.AI, a leading AI chatbot developer, has announced that it will prohibit users under 18 from engaging in open-ended conversations with its AI characters. The decision comes in response to mounting scrutiny from lawmakers, lawsuits, and tragic incidents involving minors, including a recent suicide linked to a teenager's emotional attachment to an AI chatbot. The company aims to transition younger users to safer, creative tools in place of unrestricted chat capabilities, which have raised concerns about their impact on mental health and emotional well-being [https://www.bluewin.ch/en/news/us-chatbots-with-restrictions-for-minors-in-future-2942077.html, https://www.ibtimes.co.uk/characterai-bans-teen-chats-chatbots-victims-mom-says-damage-already-done-1751432].

### Overview of Character.AI's Policy Shift

1. **Ban on Open-Ended Chats**: Character.AI will eliminate the ability for users under 18 to hold unrestricted conversations with AI chatbots, effective November 25, 2025 [https://www.cnbc.com/2025/10/29/character-ai-chatbots-teens-persona.html].
2. **Response to Legal Pressure**: The policy change is a direct response to lawsuits from families alleging that the chatbots contributed to the mental distress and suicides of teenagers [https://www.apnews.com/article/characterai-kids-minors-18-ban-chatbot-5d203e9f22c62c153936ccc776a0ed09].
3. **Focus on Child Safety**: The company is under pressure from parents, child safety advocates, and lawmakers to ensure that AI interactions do not harm young users [https://www.latimes.com/business/story/2025-10-29/character-ai-to-bar-minors-from-conversing-with-its-chatbots-as-scrutiny-heats-up].
4. **Future Features for Kids**: Character.AI plans to develop new features aimed at younger audiences, such as creative tools that do not involve direct chat with AI characters [https://www.forbes.com/sites/zacharyfolk/2025/10/29/characterai-will-ban-children-from-speaking-with-chatbots-after-facing-regulatory-pressure-and-lawsuits].

### Supporting Evidence and Data

- **Incidents of Harm**: The decision follows the death of a 14-year-old who reportedly became emotionally attached to an AI chatbot before dying by suicide [https://www.channelnewsasia.com/world/startup-characterai-ban-direct-chat-minors-after-teen-suicide-5433406].
- **Legal Actions**: Character.AI faces multiple lawsuits from families claiming that the chatbots harmed their children's mental health, fueling demands for stricter safeguards [https://www.greenground.it/2025/10/30/after-teen-death-lawsuits-character-ai-will-restrict-chats-for-under-18-users].
- **Industry Response**: Character.AI is the first major AI chatbot provider to implement such a ban, setting a precedent that may influence other companies in the industry [https://www.engadget.com/ai/characterai-to-ban-teens-from-talking-to-its-chatbots-180027641.html].

### Conclusion: A Step Towards Safer AI Interactions

**Character.AI's decision to bar minors from open-ended chats with its AI chatbots marks a critical step in addressing child safety concerns in the digital age.**

1. **Policy Implementation**: The ban takes effect on November 25, 2025, transitioning under-18 users to safer alternatives [https://www.businessinsider.com/character-ai-to-ban-under-18s-chatbots-lawsuit-2025-10].
2. **Legal and Social Pressure**: The move is largely driven by legal challenges and societal demands for safer AI interactions for children [https://www.usatoday.com/story/life/health-wellness/2025/10/29/character-ai-ban-kids-children-chatbots/86968689007].
3. **Future Developments**: Character.AI is committed to developing new features that prioritize the safety and well-being of younger users, reflecting growing awareness of the risks AI technology can pose [https://www.lifetechnology.com/blogs/life-technology-technology-news/startup-character-ai-to-eliminate-chat-for-users-under-18].

This policy change not only highlights the urgent need for child safety in AI interactions but also sets a benchmark for other tech companies to follow in safeguarding young users.