Anthropic Loosens Its AI Safety Rules Under Rival Pressure
Anthropic is scaling back parts of its flagship AI safety policy as competition among top labs accelerates. The company said it will no longer automatically pause work on a model that could qualify as dangerous if a competitor has released a comparable or more capable system. The shift marks a sharp change from the guardrails published about two and a half years ago that helped define Anthropic as one of the sector's most safety-focused players. Rivals including OpenAI, xAI, and Google continue to ship new tools at a rapid pace. Anthropic also faces a separate fight with the Defense Department over how Claude can be used, with officials pressing the company to relax usage limits by Friday. Anthropic says the safety update reflects fast AI progress and limited federal regulation.