News

Anthropic found that pushing AI toward "evil" traits during training can help prevent bad behavior later, like giving it a ...
Researchers are testing new ways to prevent and predict dangerous personality shifts in AI models before they occur in the wild.
Malicious traits can spread between AI models while being undetectable to humans, Anthropic and Truthful AI researchers say.
The idea put forward by this paper: maybe deliberately making an AI's persona evil while training it will make it less evil ...
Using two open-source models (Qwen 2.5 and Meta’s Llama 3), Anthropic engineers went deep into the neural networks to find the ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
On Friday, Anthropic debuted research unpacking how an AI system’s “personality” — as in, tone, responses, and overarching ...
New Anthropic research shows that undesirable LLM traits can be detected—and even prevented—by examining and manipulating the ...
AI models can exhibit unexpected behaviours and take on strange personalities, and Anthropic is taking steps towards ...
Anthropic is intentionally exposing its AI models like Claude to evil traits during training to make them immune to these ...
A new study from Anthropic suggests that traits such as sycophancy or evilness are associated with specific patterns of ...
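The snippets above describe "persona vectors": directions in a model's activation space associated with traits like sycophancy or evilness, which can be used to monitor or steer behavior. As a rough illustration only (this is not Anthropic's code; the data, dimensions, and function names here are invented), a trait direction can be estimated as a difference of mean activations, then used for detection and steering:

```python
import numpy as np

# Toy sketch of the persona-vector idea using simulated activations.
rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state size

# Simulate a hidden "trait" direction that shifts activations when the
# trait is expressed (stand-in for real model activations).
true_direction = rng.normal(size=d)
true_direction /= np.linalg.norm(true_direction)
baseline_acts = rng.normal(size=(200, d))
trait_acts = rng.normal(size=(200, d)) + 3.0 * true_direction

# Difference of mean activations estimates the persona vector.
persona_vector = trait_acts.mean(axis=0) - baseline_acts.mean(axis=0)
persona_vector /= np.linalg.norm(persona_vector)

def trait_score(activation):
    # Monitoring: a large projection onto the persona vector flags the
    # trait before it necessarily shows up in the model's text.
    return float(activation @ persona_vector)

def steer_away(activation, strength=1.0):
    # Steering: remove the component along the persona vector to
    # suppress the trait (strength=1.0 removes it entirely).
    return activation - strength * (activation @ persona_vector) * persona_vector
```

The same subtraction, applied with a negative sign during training, corresponds loosely to the "expose the model to the evil direction so it becomes resistant" idea described in the coverage above.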