LangChain has had agent abstractions for nearly three years. There are now probably hundreds of agent frameworks built around the same core abstraction. They all suffer from the same downsides as the original LangChain agents: they do not give the developer enough control over context engineering when needed, leading…
In this blog post, you’ll learn why and how we built LangGraph for production agents, focusing on control, durability, and the core features needed to scale.
TL;DR: We’ve introduced a new view of message content that standardizes reasoning, citations, server-side tool calls, and other modern LLM features across providers. This makes it easier to build applications that are agnostic of the inference provider, while taking advantage of the latest features of each. This feature…
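As a rough illustration of what a standardized content view enables, here is a minimal Python sketch. It assumes the new view is exposed as a `content_blocks` list of typed blocks on the response message; the model name and block fields are illustrative, not a definitive API reference.

```python
from langchain.chat_models import init_chat_model

# Illustrative sketch: model name and exact block fields are assumptions.
model = init_chat_model("claude-sonnet-4-5", model_provider="anthropic")
response = model.invoke("Briefly explain what durable execution means.")

# The standardized view exposes each piece of content (text, reasoning,
# citations, server-side tool calls) as a typed block, regardless of provider.
for block in response.content_blocks:
    if block["type"] == "reasoning":
        print("reasoning:", block.get("reasoning"))
    elif block["type"] == "text":
        print("text:", block.get("text"))
    else:
        print(block["type"], block)
```

The same loop would work unchanged against another provider, which is the point of standardizing the content view.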
Today we are announcing alpha releases of v1.0 for langgraph and langchain, in both Python and JS. LangGraph is a low-level agent orchestration framework, giving developers durable execution and fine-grained control to run complex agentic systems in production. LangChain helps developers ship AI features fast with standardized model abstractions.
The use of AI in software engineering has evolved over the past two years. It started as autocomplete, then became a copilot in an IDE, and in the past few months has evolved into long-running, more end-to-end agents that run asynchronously in the cloud. We believe…
Align Evals is a new feature in LangSmith that helps you calibrate your evaluators to better match human preferences.
See how one of the world’s biggest media companies leveraged LangGraph from its earliest days to build and deploy to production a multi-agent system that empowers creativity.
Learn why agent infrastructure is essential to handling stateful, long-running tasks — and how LangGraph Platform provides the runtime support needed to build and scale reliable agents.
Using an LLM to call tools in a loop is the simplest form of an agent. This architecture, however, can yield agents that are “shallow” and fail to plan and act over longer, more complex tasks. Applications like “Deep Research”, “Manus”, and “Claude Code” have gotten around this limitation by…
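To make the tool-calling loop concrete, here is a minimal sketch using LangChain chat models. The weather tool and model name are made up for illustration; the point is the loop itself: call the model, execute any requested tools, append the results, and repeat until the model stops asking for tools.

```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a (fake) weather report for a city."""
    return f"It is sunny in {city}."

# The simplest agent: an LLM calling tools in a loop.
model = init_chat_model("gpt-4o-mini", model_provider="openai").bind_tools([get_weather])
messages = [HumanMessage("What's the weather in Paris?")]

while True:
    ai_msg = model.invoke(messages)
    messages.append(ai_msg)
    if not ai_msg.tool_calls:
        break  # the model answered directly, so the agent is done
    for call in ai_msg.tool_calls:
        result = get_weather.invoke(call["args"])
        messages.append(ToolMessage(content=result, tool_call_id=call["id"]))

print(messages[-1].content)
```

Everything the agent knows lives in `messages`, which is exactly why this architecture stays shallow: nothing plans beyond the next model call.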
LangSmith and LangGraph Platform (self-hosted deployments) are now available in AWS Marketplace.
TL;DR Deep research has broken out as one of the most popular agent applications. OpenAI, Anthropic, Perplexity, and Google all have deep research products that produce comprehensive reports using various sources of context. There are also many open source implementations. We've built an open deep researcher that is simple…
Learn how to build an agent: from choosing realistic task examples, to building the MVP, to testing quality and safety, to deploying in production.
TL;DR Agents need context to perform tasks. Context engineering is the art and science of filling the context window with just the right information at each step of an agent’s trajectory. In this post, we break down some common strategies — write, select, compress, and isolate — for context engineering.
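As one small, hedged example of the “compress” strategy, the sketch below trims a conversation so that only the system prompt and the most recent turns stay in the context window. It uses LangChain's trim_messages helper with a message-count counter; a real application would count tokens against the model's limit, and the conversation here is invented.

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages

# An invented conversation history that has grown past what the next step needs.
history = [
    SystemMessage("You are a helpful research assistant."),
    HumanMessage("Find papers on durable execution."),
    AIMessage("Here are three papers..."),
    HumanMessage("Summarize the second one."),
    AIMessage("The second paper argues..."),
    HumanMessage("Now compare it with the first."),
]

trimmed = trim_messages(
    history,
    strategy="last",      # keep the most recent messages
    token_counter=len,    # count messages instead of tokens for the sketch
    max_tokens=4,         # keep at most 4 messages
    include_system=True,  # always keep the system prompt
    start_on="human",     # the kept window should start on a human turn
)
print([m.content for m in trimmed])
```

The other strategies follow the same spirit: write context to scratchpads or memory, select only what is relevant, and isolate work across sub-agents so no single context window has to hold everything.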
Dive into LangSmith product usage patterns that show how the AI ecosystem and the way people are building LLM apps is evolving.
See how Exa used LangGraph and LangSmith to build a multi-agent web research system that processes research queries.
See how Captide is using LangGraph Platform and LangSmith for their investment research and equity modeling agents.
See how cybersecurity company Trellix used LangGraph Studio to visualize and debug agent interactions, plus LangSmith for agent evaluations.
Header image from Dex Horthy on Twitter. Context engineering is building dynamic systems to provide the right information and tools in the right format such that the LLM can plausibly accomplish the task. Most of the time when an agent is not performing reliably, the underlying cause is that the…
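To ground the definition, here is a hypothetical sketch of “providing the right information in the right format”: a plain function selects only the facts relevant to the current task and formats them into the prompt. The knowledge base, keyword matching, and prompt wording are all invented for illustration.

```python
from langchain_core.prompts import ChatPromptTemplate

# Invented mini knowledge base; a real system might pull from retrieval,
# memory, or tool results instead.
KNOWLEDGE = {
    "billing": "Refunds are processed within 5 business days.",
    "shipping": "Orders ship within 24 hours from the EU warehouse.",
}

def build_context(task: str) -> str:
    # Select only the facts relevant to this task (keyword match for the sketch).
    relevant = [fact for topic, fact in KNOWLEDGE.items() if topic in task.lower()]
    return "\n".join(relevant) or "No background facts selected."

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the provided context.\n\nContext:\n{context}"),
    ("human", "{task}"),
])

task = "How long do shipping times usually take?"
print(prompt.invoke({"context": build_context(task), "task": task}))
```

The “dynamic system” is the selection and formatting step, not the model call: given a different task, the same code hands the LLM a different, task-appropriate slice of context.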