This article is about our consumer products (e.g. Claude Free, Claude Pro, and Max), including when using Claude Code with those accounts. For our commercial products (e.g. Claude for Work, Anthropic API), see here.| privacy.anthropic.com
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.| www.anthropic.com
We're updating the policies that protect our users and ensure our products and services are used responsibly.| www.anthropic.com
ChatGPT and Claude won’t, but Gemini will.| minimaxir.com
Alongside other leading AI companies, we’re committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies.| www.anthropic.com
Today we are publishing a significant update to our Responsible Scaling Policy (RSP), the risk governance framework we use to mitigate potential catastrophic risks from frontier AI systems.| www.anthropic.com
A blog post describing Anthropic’s new system, Clio, for analyzing how people use AI while maintaining their privacy.| www.anthropic.com
About model training| privacy.anthropic.com
We examine an LLM jailbreaking technique called "Deceptive Delight," which mixes harmful topics with benign ones to trick AIs, with a high success rate.| Unit 42
The Internet privacy company that empowers you to seamlessly take control of your personal information online, without any tradeoffs.| DuckDuckGo
In this post, we’ll discuss some of the specific steps we’ve taken to help us detect and mitigate potential misuse of our AI tools in political contexts.| www.anthropic.com