Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.| www.anthropic.com
Anthropic has raised $13 billion in a Series F round led by ICONIQ, valuing the company at $183 billion post-money. Along with ICONIQ, the round was co-led by Fidelity Management & Research Company and Lightspeed Venture Partners. The investment reflects Anthropic’s continued momentum and reinforces our position as the leading intelligence platform for enterprises, developers, and power users.| www.anthropic.com
Today, we're launching a new bug bounty program to stress-test our latest safety measures, in partnership with HackerOne. Similar to the program we announced last summer, we're challenging red-teamers to find universal jailbreaks in safety classifiers that we haven't yet deployed publicly.| www.anthropic.com
In this post, we are sharing what we have learned about the trajectory of potential national security risks from frontier AI models, along with some of our thoughts about challenges and best practices in evaluating these risks.| www.anthropic.com
Anthropic's threat intelligence report on AI cybercrime and other abuses| www.anthropic.com
Announcing a pilot test of a new Claude browser extension| www.anthropic.com
We're updating the policies that protect our users and ensure our products and services are used responsibly.| www.anthropic.com
We’re excited to announce that Claude, Anthropic’s trusted AI assistant, is now available for people and businesses across Europe to enhance their productivity and creativity.| www.anthropic.com
Announcing a new research program at Anthropic on model welfare| www.anthropic.com
An update on our exploratory research on model welfare| www.anthropic.com
Claude Sonnet 4 now supports up to 1 million tokens of context on the Anthropic API—a 5x increase.| www.anthropic.com
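For developers, the larger window is an opt-in on an ordinary Messages API call. Below is a minimal sketch using the Anthropic Python SDK; the beta flag name (context-1m-2025-08-07) is an assumption to verify against the current API docs.

```python
# Hedged sketch: opting into the long-context beta with the Anthropic Python SDK.
# Assumption: the beta flag below is the documented opt-in for the 1M-token window.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    betas=["context-1m-2025-08-07"],  # assumed flag name; check current docs
    messages=[
        {"role": "user", "content": "Summarize the attached codebase."}
    ],
)
print(response.content[0].text)
```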
Build AI support agents with Claude by Anthropic to deliver personalized experiences that create customer loyalty.| www.anthropic.com
Transform your engineering workflow with the best coding model in the world, available within Claude.ai and the Anthropic API. Build better software faster and deploy AI-powered solutions.| www.anthropic.com
Build powerful AI agents with Claude that reason through complex problems, execute tasks autonomously, and deliver reliable results.| www.anthropic.com
Our customers include leading enterprises and startups focused on financial services, healthcare, legal and more.| www.anthropic.com
Claude Desktop Extensions: One-click MCP server installation for Claude Desktop| www.anthropic.com
Discover how Anthropic's internal teams leverage Claude Code for development workflows, from debugging to code assistance.| www.anthropic.com
Stay informed about the latest updates and improvements to Anthropic's Responsible Scaling Policy (RSP). Learn how Anthropic maintains safety and reliability in AI development.| www.anthropic.com
Transform financial workflows with AI that connects to your entire financial universe. Accelerate due diligence, modeling, and analysis with enterprise-grade security.| www.anthropic.com
Today, we're introducing a comprehensive solution for financial analysis that transforms how finance professionals analyze markets, conduct research, and make investment decisions with Claude.| www.anthropic.com
Lawrence Livermore National Laboratory expands Claude for Enterprise access to 10,000 scientists, accelerating breakthroughs in energy and national security research.| www.anthropic.com
The U.S. Department of Defense (DOD), through its Chief Digital and Artificial Intelligence Office (CDAO), has awarded Anthropic a two-year prototype other transaction agreement with a $200 million ceiling. As part of the agreement, Anthropic will prototype frontier AI capabilities that advance U.S. national security.| www.anthropic.com
We let Claude run a small shop in the Anthropic office. Here's what happened.| www.anthropic.com
Create interactive AI apps directly in Claude. Users pay for their own API usage; you pay nothing. Build games, tools, and assistants with zero deployment complexity.| www.anthropic.com
Build interactive apps, games, and tools instantly with Claude's new artifacts space. Browse curated creations, customize existing apps, or create from scratch through simple conversation. Available to Free, Pro, Max plan users.| www.anthropic.com
On the engineering challenges and lessons learned from building Claude's Research system| www.anthropic.com
Alongside other leading AI companies, we’re committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies.| www.anthropic.com
New research on simulated blackmail, industrial espionage, and other misaligned behaviors in LLMs| www.anthropic.com
Discover how Anthropic approaches the development of reliable AI agents. Learn about our research on agent capabilities, safety considerations, and technical framework for building trustworthy AI.| www.anthropic.com
Lessons and observations from generative AI in the first major election year since Claude has been available.| www.anthropic.com
Today, we're announcing the Claude 3 model family, which sets new industry benchmarks across a wide range of cognitive tasks. The family includes three state-of-the-art models in ascending order of capability: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus.| www.anthropic.com
Everyone has a blog these days, even Claude. Welcome to the small corner of the Anthropic universe where Claude is writing on every topic under the sun.| www.anthropic.com
Today, we're announcing four new capabilities on the Anthropic API that enable developers to build more powerful AI agents: the code execution tool, MCP connector, Files API, and the ability to cache prompts for up to one hour.| www.anthropic.com
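Of the four, extended prompt caching is the easiest to show in a few lines: a cache_control marker on a large, stable prompt block lets later requests reuse it. The sketch below assumes the one-hour TTL is requested with {"type": "ephemeral", "ttl": "1h"} behind an extended-cache-ttl-2025-04-11 beta flag; both names should be checked against the current docs.

```python
# Hedged sketch of one-hour prompt caching on the Messages API.
# Assumptions: the beta flag and the "ttl" field below; the default
# ephemeral cache lifetime is about five minutes.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    betas=["extended-cache-ttl-2025-04-11"],  # assumed flag name
    system=[
        {
            "type": "text",
            "text": "<large, reusable reference document goes here>",
            # Everything up to and including this block is cached.
            "cache_control": {"type": "ephemeral", "ttl": "1h"},
        }
    ],
    messages=[{"role": "user", "content": "Answer from the reference above."}],
)
# usage reports cache_creation_input_tokens / cache_read_input_tokens,
# showing whether the cache was written or hit on this request.
print(response.usage)
```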
We have activated the AI Safety Level 3 (ASL-3) Deployment and Security Standards described in Anthropic’s Responsible Scaling Policy (RSP) in conjunction with launching Claude Opus 4. The ASL-3 Security Standard involves increased internal security measures that make it harder to steal model weights, while the corresponding Deployment Standard covers a narrowly targeted set of deployment measures designed to limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons.| www.anthropic.com
Anthropic's latest interpretability research: a new microscope to understand Claude's internal mechanisms| www.anthropic.com
A blog post covering tips and tricks that have proven effective for using Claude Code across various codebases, languages, and environments.| www.anthropic.com
Transform hours of debugging into seconds with a single command. Experience coding at thought-speed with an AI that understands your entire codebase—no more context switching, just breakthrough results.| www.anthropic.com
Discover Claude 4's breakthrough AI capabilities. Experience more reliable, interpretable assistance for complex tasks across work and learning.| www.anthropic.com
AI systems are no longer just specialized research tools: they’re everyday academic companions. As AIs integrate more deeply into educational environments, we need to consider important questions about learning, assessment, and skill development. Until now, most discussions have relied on surveys and controlled experiments rather than direct evidence of how students naturally integrate AI into their academic work in real settings.| www.anthropic.com
Introducing the Max plan with higher usage limits for more collaboration with Claude. Designed for frequent users who need extended conversations, document analysis, and consistent AI assistance throughout their workday.| www.anthropic.com
Understanding AI’s effects on the economy over time| www.anthropic.com
Today, we're introducing web search on the Anthropic API—a new tool that gives Claude access to current information from across the web.| www.anthropic.com
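Because the search runs server-side, enabling it is just a tool definition on the request; there is no client-side retrieval loop. A minimal sketch with the Python SDK (the tool type string web_search_20250305 matches the launch-era docs but is worth verifying):

```python
# Hedged sketch: enabling the server-side web search tool.
# "max_uses" caps the number of searches Claude may run per request.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    messages=[{"role": "user", "content": "What changed in the latest RSP update?"}],
)
print(response.content)  # includes search results and a cited answer
```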
Today we are publishing a significant update to our Responsible Scaling Policy (RSP), the risk governance framework we use to mitigate potential catastrophic risks from frontier AI systems.| www.anthropic.com
A blog post describing Anthropic’s new system, Clio, for analyzing how people use AI while maintaining their privacy| www.anthropic.com
Today, we’re announcing Claude 3.7 Sonnet, our most intelligent model to date and the first hybrid reasoning model generally available on the market.| www.anthropic.com
Announcement of the new Anthropic Economic Index and description of the new data on AI use in occupations| www.anthropic.com
A paper from Anthropic describing a new way to guard LLMs against jailbreaking| www.anthropic.com
Today, we're launching Citations, a new API feature that lets Claude ground its answers in source documents.| www.anthropic.com
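In practice, this means passing source material as a document content block with citations enabled; the reply then carries citation spans pointing back into the document. A minimal sketch with the Python SDK, using an inline plain-text document:

```python
# Hedged sketch of the Citations feature on the Messages API.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": "The grass is green. The sky is blue.",
                    },
                    "citations": {"enabled": True},
                },
                {"type": "text", "text": "What color is the grass?"},
            ],
        }
    ],
)
# Text blocks in the reply include "citations" entries that reference
# the quoted spans of the source document.
print(response.content)
```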
A paper from Anthropic's Alignment Science team on alignment faking in large language models| www.anthropic.com
The Model Context Protocol (MCP) is an open standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.| www.anthropic.com
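To make the standard concrete, here is an illustrative server built with the official MCP Python SDK's FastMCP helper (the mcp package); the server and tool names are invented for the example. An MCP-capable client such as Claude Desktop can discover and call the tool over stdio:

```python
# Illustrative MCP server exposing one tool over stdio.
# The tool body is a stand-in for a real data source or API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-weather")  # hypothetical server name

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a canned forecast for the given city (placeholder logic)."""
    return f"Forecast for {city}: sunny, 22 degrees C."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```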
A refreshed, more powerful Claude 3.5 Sonnet, Claude 3.5 Haiku, and a new experimental AI capability: computer use.| www.anthropic.com
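Computer use follows the same tool-definition pattern as other API tools: the request declares a virtual display, the model replies with mouse, keyboard, and screenshot actions, and the caller executes them. A hedged sketch (the tool type and beta flag are the launch-era names; newer versions may exist):

```python
# Hedged sketch: declaring the computer-use tool. The response contains
# tool_use blocks (e.g. screenshot, click, type); actually executing them
# against a display or VM is the caller's job and is omitted here.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],  # launch-era flag name
    tools=[
        {
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    messages=[{"role": "user", "content": "Open a browser and check the news."}],
)
print(response.content)  # inspect tool_use blocks for requested actions
```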
Claude Pro and Team users can now organize chats into Projects. Projects bring together internal knowledge and chat activity in one place so Claude can be your go-to expert for generating ideas, making decisions, and moving work forward.| www.anthropic.com
We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.| www.anthropic.com
Introducing Claude 3.5 Sonnet—our most intelligent model yet. Sonnet now outperforms competitor models and Claude 3 Opus on key evaluations, at twice the speed.| www.anthropic.com
When we turn up the strength of the “Golden Gate Bridge” feature, Claude’s responses begin to focus on the Golden Gate Bridge. For a short time, we’re making this model available for everyone to interact with.| www.anthropic.com
In this post, we’ll discuss some of the specific steps we’ve taken to help us detect and mitigate potential misuse of our AI tools in political contexts.| www.anthropic.com
AI progress may lead to transformative AI systems in the next decade, but we do not yet understand how to make such systems safe and aligned with human values. In response, we are pursuing a variety of research directions aimed at better understanding, evaluating, and aligning AI systems.| www.anthropic.com
Claude is AI for all of us. Whether you're brainstorming alone or building with a team of thousands, Claude is here to help.| www.anthropic.com
Create user-facing experiences, new products, and new ways to work with the most advanced AI models on the market.| www.anthropic.com