Anthropic has taken an unusual step in AI development by giving its Claude Opus 4 and 4.1 models the ability to end conversations—an intervention that, the company stresses, is designed not primarily to protect users but to shield the model itself from persistently harmful or abusive interactions. In an announcement, the company described the feature