Anthropic has announced a new experimental safety feature that allows its Claude Opus 4 and 4.1 artificial intelligence models to terminate conversations in rare cases of persistently harmful or abusive interactions. The move reflects the company's growing focus on what it calls "model welfare," the notion that safeguarding AI systems, even if they're not sentient, is a prudent…

Photo By CNET