Anthropic Loosens Its AI Safety Rules Under Rival Pressure

February 25, 2026

Anthropic is scaling back parts of its flagship AI safety policy as competition among top labs accelerates. The company said it will no longer pause development of a model that meets its danger thresholds if a competitor has already released a comparable or stronger system. The shift marks a sharp change from the guardrails it published about two and a half years ago, which helped define Anthropic as one of the sector's most safety-focused players. Rivals including OpenAI, xAI, and Google continue to ship new tools at a rapid pace. Anthropic also faces a separate fight with the Defense Department over how Claude can be used, with officials pressing the company to relax usage limits by Friday. Anthropic says the safety update reflects fast AI progress and limited federal regulation.


