Claude Opus Can Now Hang Up on Harmful Chats

August 18, 2025

Anthropic has introduced a new capability in Claude Opus 4 and 4.1 that lets the model end conversations in rare, extreme cases of persistently harmful or abusive interactions. The decision stems from Anthropic's exploratory work on AI welfare and its broader efforts to improve model alignment and user safeguards. During testing, Claude showed a strong aversion to harmful tasks and displayed patterns of apparent distress when faced with requests involving violence, exploitation, or abuse. The feature activates only as a last resort, after multiple attempts to redirect the conversation have failed, or when a user explicitly asks Claude to end the chat. Although such interventions are expected to be rare, they reflect a commitment to mitigating risk without compromising the experience for the vast majority of users. When a conversation is ended, users can still start a new chat or edit and resubmit earlier messages to continue from a previous point.

