Claude Opus Can Now Hang Up on Harmful Chats
Anthropic has introduced a new feature in Claude Opus 4 and 4.1 that enables the AI to end conversations in rare, extreme cases of harmful or abusive interactions. The decision stems from exploratory work on AI welfare and broader efforts to improve model alignment and user safeguards. During testing, Claude exhibited a strong aversion to harmful tasks and showed signs of apparent distress when faced with requests tied to violence, exploitation, or abuse. The feature activates only as a last resort, after multiple attempts at redirection have failed, or when a user explicitly asks Claude to end the chat. While such interventions are expected to be rare, they reflect a commitment to mitigating risk without compromising the experience for the vast majority of users. Ending a conversation does not lock users out: they can start a new chat at any time or branch from an earlier point by editing their previous messages.