Two months ago we wrote about Deep Agents, a term we coined for agents that are able to do complex, open-ended tasks over longer time horizons. We hypothesized that there were four key elements to those agents: a planning tool, access to a filesystem, subagents, and detailed prompts.
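To make those four elements concrete, here is a minimal sketch of wiring them up with the deepagents package. The search tool and prompt are illustrative placeholders, and parameter names may differ between deepagents versions, so treat this as an assumption-laden sketch rather than the post's exact code.

```python
# A hedged sketch, assuming the deepagents package's create_deep_agent helper,
# which bundles a planning tool, a virtual filesystem, and subagent support
# around the tools and instructions you pass in.
from deepagents import create_deep_agent


def internet_search(query: str) -> str:
    """Illustrative stand-in for a real search tool."""
    return f"(search results for: {query})"


agent = create_deep_agent(
    tools=[internet_search],
    instructions="You are a careful researcher. Plan your work before acting.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Research what LangGraph is."}]}
)
```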
There are a few different open source packages we maintain: LangChain and LangGraph are the biggest ones, and DeepAgents is an increasingly popular one. I’ve started using different terms to describe them: LangChain is an agent framework, LangGraph is an agent runtime, DeepAgents is an agent harness. Other folks…
LangSmith's new Insights Agent and Multi-turn Evals help you understand what your agents are doing in production and whether they're accomplishing user goals.
By Sydney Runkle and the LangChain OSS team. We're releasing LangChain 1.0 and LangGraph 1.0 — our first major versions of our open source frameworks! After years of feedback, we've updated langchain to focus on the core agent loop, provide flexibility with a new…
We raised $125M at a $1.25B valuation to build the platform for agent engineering.
By Harrison Chase. Almost exactly 3 years ago, I pushed the first lines of code to langchain as an open source package. There was no company at the time, and no grand plan for what the project would become. A month later, ChatGPT launched, and everything for langchain changed. It…
Agents can take action, which makes proper authentication and authorization critical. Read on for how to implement and evolve agent auth.
TL;DR:
* The hard part of building reliable agentic systems is making sure the LLM has the appropriate context at each step. This includes both controlling the exact content that goes into the LLM and running the appropriate steps to generate relevant content.
* Agentic systems consist of both…
By Harrison Chase. One of the most common requests we’ve gotten from day zero of LangChain has been a visual workflow builder. We never pursued it and instead let others (LangFlow, Flowise, n8n) build on top of us. With OpenAI launching a workflow builder at Dev Day yesterday, I…
“What we’ve got here is failure to communicate” - Cool Hand Luke (1967). Communication is the hardest part of life. It’s also the hardest part of building LLM applications. New hires always require a lot of communication when first joining a company, no matter how smart they may…
Authored by: Aliyan Ishfaq. Coding agents are great at writing code that uses popular libraries LLMs have been heavily trained on. But point them to a custom library, a new version of a library, an internal API, or a niche framework – and they’re not so…
See how Monte Carlo built its AI Troubleshooting Agent on LangGraph and debugged it with LangSmith to help data teams resolve issues faster.
LangChain has had agent abstractions for nearly three years. There are now probably hundreds of agent frameworks with the same core abstraction. They all suffer from the same downsides as the original LangChain agents: they do not give the developer enough control over context engineering when needed, leading…
In this blog piece, you’ll learn why and how we built LangGraph for production agents—focusing on control, durability, and the core features needed to scale.
TL;DR: We’ve introduced a new view of message content that standardizes reasoning, citations, server-side tool calls, and other modern LLM features across providers. This makes it easier to build applications that are agnostic of the inference provider, while taking advantage of the latest features of each. This feature…
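As a rough illustration of that standardized view, the sketch below reads provider-agnostic content blocks off a response. It assumes a langchain 1.0-style `content_blocks` property on AI messages and an illustrative model name; treat the details as assumptions rather than the exact API from the post.

```python
# A minimal sketch, assuming langchain >= 1.0 exposes a provider-agnostic
# `content_blocks` list on AI messages; the model name is illustrative.
from langchain.chat_models import init_chat_model

llm = init_chat_model("anthropic:claude-sonnet-4-5")
response = llm.invoke("In one sentence, why is the sky blue?")

# Each block is a typed dict ("reasoning", "text", "citation", ...), shaped the
# same way regardless of which provider produced it.
for block in response.content_blocks:
    print(block["type"], block)
```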
Today we are announcing alpha releases of v1.0 for langgraph and langchain, in both Python and JS. LangGraph is a low-level agent orchestration framework, giving developers durable execution and fine-grained control to run complex agentic systems in production. LangChain helps developers ship AI features fast with standardized model abstractions…
The use of AI in software engineering has evolved over the past two years. It started as autocomplete, then became a copilot in the IDE, and in the past few months has grown into a long-running, more end-to-end agent that runs asynchronously in the cloud. We believe…
Align Evals is a new feature in LangSmith that helps you calibrate your evaluators to better match human preferences.
See how one of the world’s biggest media companies leveraged LangGraph from its earliest days to build a multi-agent system that empowers creativity and deploy it to production.
Learn why agent infrastructure is essential to handling stateful, long-running tasks — and how LangGraph Platform provides the runtime support needed to build and scale reliable agents.
Using an LLM to call tools in a loop is the simplest form of an agent. This architecture, however, can yield agents that are “shallow” and fail to plan and act over longer, more complex tasks. Applications like “Deep Research”, “Manus”, and “Claude Code” have gotten around this limitation by…
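For reference, here is a minimal sketch of that simplest form: an LLM calling tools in a loop until it stops requesting them. It assumes LangChain's chat model and tool helpers; the model name and the weather tool are illustrative, not taken from the post.

```python
# A minimal sketch of the "LLM calling tools in a loop" pattern.
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    return f"It is sunny in {city}."


llm = init_chat_model("openai:gpt-4o-mini").bind_tools([get_weather])
messages = [HumanMessage("What's the weather in Paris?")]

while True:
    response = llm.invoke(messages)
    messages.append(response)
    if not response.tool_calls:  # no tool requested: the loop is done
        break
    for tool_call in response.tool_calls:
        # Execute each requested tool and feed the resulting ToolMessage back.
        messages.append(get_weather.invoke(tool_call))

print(response.content)
```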
LangSmith and LangGraph Platform (self-hosted deployments) are now available in AWS Marketplace.
TL;DR: Agents need context to perform tasks. Context engineering is the art and science of filling the context window with just the right information at each step of an agent’s trajectory. In this post, we break down some common strategies — write, select, compress, and isolate — for context engineering…
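To ground one of those strategies, the sketch below shows a simple form of "compress": trimming older messages so the context window keeps only the most recent turns. It uses langchain_core's trim_messages helper; the message history and the token budget are illustrative.

```python
# A minimal sketch of the "compress" strategy, assuming langchain_core's
# trim_messages helper; the history and budget below are illustrative.
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

history = [
    SystemMessage("You are a helpful planning assistant."),
    HumanMessage("Summarize our Q1 plan."),
    AIMessage("Here is the Q1 summary..."),
    HumanMessage("Now draft the Q2 plan."),
]

# Keep the system message plus the most recent messages that fit the budget.
# token_counter=len counts each message as one "token", purely for illustration.
trimmed = trim_messages(
    history,
    max_tokens=3,
    strategy="last",
    token_counter=len,
    include_system=True,
)
print([type(m).__name__ for m in trimmed])
```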
Header image from Dex Horthy on Twitter. Context engineering is building dynamic systems to provide the right information and tools in the right format such that the LLM can plausibly accomplish the task. Most of the time, when an agent is not performing reliably, the underlying cause is that the…