Tools encapsulate a callable function and its input schema. These can be passed to compatible chat models, allowing the model to decide whether to invoke a tool and determine the appropriate arguments. You can define your own tools or use prebuilt tools.
To execute tools in custom workflows, use the prebuilt ToolNode or implement your own node.

ToolNode is a specialized node for executing tools in a workflow. It provides the following features:
Supports both synchronous and asynchronous tools.
Executes multiple tools concurrently.
Handles errors during tool execution (handleToolErrors: true, enabled by default). See handling tool errors for more details.
Input: MessagesZodState, where the last message is an AIMessage containing the tool_calls parameter.
Output: MessagesZodState updated with the resulting ToolMessage from executed tools.
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
// highlight-next-line
import { ToolNode } from "@langchain/langgraph/prebuilt";

const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

const getCoolestCities = tool(
  () => {
    return "nyc, sf";
  },
  {
    name: "get_coolest_cities",
    description: "Get a list of coolest cities",
    schema: z.object({
      noOp: z.string().optional().describe("No-op parameter."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather, getCoolestCities]);

await toolNode.invoke({ messages: [...] });
```
Single tool call
```typescript
import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Define tools
const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather]);

const messageWithSingleToolCall = new AIMessage({
  content: "",
  tool_calls: [
    {
      name: "get_weather",
      args: { location: "sf" },
      id: "tool_call_id",
      type: "tool_call",
    },
  ],
});

await toolNode.invoke({ messages: [messageWithSingleToolCall] });
```
This is an example of creating a tool-calling agent from scratch using ToolNode. You can also use LangGraph’s prebuilt agent.
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { StateGraph, MessagesZodState, START, END } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { isAIMessage } from "@langchain/core/messages";

const getWeather = tool(
  (input) => {
    if (["sf", "san francisco"].includes(input.location.toLowerCase())) {
      return "It's 60 degrees and foggy.";
    } else {
      return "It's 90 degrees and sunny.";
    }
  },
  {
    name: "get_weather",
    description: "Call to get the current weather.",
    schema: z.object({
      location: z.string().describe("Location to get the weather for."),
    }),
  }
);

// highlight-next-line
const toolNode = new ToolNode([getWeather]);

const model = new ChatOpenAI({ model: "gpt-4o" });
// highlight-next-line
const modelWithTools = model.bindTools([getWeather]);

const shouldContinue = (state: z.infer<typeof MessagesZodState>) => {
  const messages = state.messages;
  const lastMessage = messages.at(-1);
  if (lastMessage && isAIMessage(lastMessage) && lastMessage.tool_calls?.length) {
    return "tools";
  }
  return END;
};

const callModel = async (state: z.infer<typeof MessagesZodState>) => {
  const messages = state.messages;
  const response = await modelWithTools.invoke(messages);
  return { messages: [response] };
};

const builder = new StateGraph(MessagesZodState)
  // Define the two nodes we will cycle between
  .addNode("agent", callModel)
  // highlight-next-line
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue, ["tools", END])
  .addEdge("tools", "agent");

const graph = builder.compile();

await graph.invoke({
  messages: [{ role: "user", content: "what's the weather in sf?" }],
});
```
```
{
  messages: [
    HumanMessage { content: "what's the weather in sf?" },
    AIMessage {
      content: [
        {
          text: "I'll help you check the weather in San Francisco right now.",
          type: "text",
        },
        {
          id: "toolu_01A4vwUEgBKxfFVc5H3v1CNs",
          input: { location: "San Francisco" },
          name: "get_weather",
          type: "tool_use",
        },
      ],
      tool_calls: [
        {
          name: "get_weather",
          args: { location: "San Francisco" },
          id: "toolu_01A4vwUEgBKxfFVc5H3v1CNs",
          type: "tool_call",
        },
      ],
    },
    ToolMessage { content: "It's 60 degrees and foggy." },
    AIMessage {
      content: "The current weather in San Francisco is 60 degrees and foggy. Typical San Francisco weather with its famous marine layer!",
    },
  ],
}
```
Tools within LangGraph sometimes require context data, such as runtime-only arguments (e.g., user IDs or session details), that should not be controlled by the model. LangGraph provides three methods for managing such context:

- Configuration: immutable runtime data, such as user identifiers, passed at invocation.
- Short-term memory: dynamic state that changes during a single run.
- Long-term memory: data persisted across conversations in a store.
Use configuration when you have immutable runtime data that tools require, such as user identifiers. You pass these arguments via LangGraphRunnableConfig at invocation and access them in the tool:
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserInfo = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    const userId = config?.configurable?.user_id;
    return userId === "user_123" ? "User is John Smith" : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Retrieve user information based on user ID.",
    schema: z.object({}),
  }
);

// Invocation example with an agent
await agent.invoke(
  { messages: [{ role: "user", content: "look up user info" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } }
);
```
Extended example: Access config in tools
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";

const getUserInfo = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    // highlight-next-line
    const userId = config?.configurable?.user_id;
    return userId === "user_123" ? "User is John Smith" : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [getUserInfo],
});

await agent.invoke(
  { messages: [{ role: "user", content: "look up user information" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } }
);
```
Short-term memory maintains dynamic state that changes during a single execution. To access (read) the graph state inside tools, you can use the getContextVariable function:
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { getContextVariable } from "@langchain/core/context";
import { MessagesZodState } from "@langchain/langgraph";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserName = tool(
  // highlight-next-line
  async (_, config: LangGraphRunnableConfig) => {
    // highlight-next-line
    const currentState = getContextVariable("currentState") as z.infer<
      typeof MessagesZodState
    > & { userName?: string };
    return currentState?.userName || "Unknown user";
  },
  {
    name: "get_user_name",
    description: "Retrieve the current user name from state.",
    schema: z.object({}),
  }
);
```
To update short-term memory, a tool can return a Command that writes to the graph state:
```typescript
import { Command } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const updateUserName = tool(
  async (input) => {
    // highlight-next-line
    return new Command({
      // highlight-next-line
      update: {
        // highlight-next-line
        userName: input.newName,
        // highlight-next-line
        messages: [
          // highlight-next-line
          {
            // highlight-next-line
            role: "assistant",
            // highlight-next-line
            content: `Updated user name to ${input.newName}`,
            // highlight-next-line
          },
          // highlight-next-line
        ],
        // highlight-next-line
      },
      // highlight-next-line
    });
  },
  {
    name: "update_user_name",
    description: "Update user name in short-term memory.",
    schema: z.object({
      newName: z.string().describe("The new user name"),
    }),
  }
);
```
If you want to use tools that return a Command to update graph state, you can either use the prebuilt createReactAgent / ToolNode components, or implement your own tool-executing node that collects the Command objects returned by the tools and returns a list of them, e.g.:
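Below is a minimal sketch of such a node, assuming the `updateUserName` tool defined above; the `toolsByName` registry and the error check are illustrative, not a fixed API:

```typescript
import { isAIMessage } from "@langchain/core/messages";
import { MessagesZodState } from "@langchain/langgraph";
import { z } from "zod";

// Hypothetical registry of the tools defined above, keyed by tool name.
const toolsByName: Record<string, typeof updateUserName> = {
  update_user_name: updateUserName,
};

const callTools = async (state: z.infer<typeof MessagesZodState>) => {
  const lastMessage = state.messages.at(-1);
  if (!lastMessage || !isAIMessage(lastMessage)) {
    throw new Error("Expected the last message to be an AIMessage with tool calls.");
  }
  // Invoke every requested tool; each call here returns a Command
  // describing a state update rather than a plain ToolMessage.
  const commands = await Promise.all(
    (lastMessage.tool_calls ?? []).map((toolCall) =>
      toolsByName[toolCall.name].invoke(toolCall)
    )
  );
  // Returning the list of Commands lets LangGraph apply each update.
  return commands;
};
```

You would then register `callTools` as the tool-executing node in your graph in place of ToolNode.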
Use long-term memory to store user-specific or application-specific data across conversations. This is useful for applications like chatbots, where you want to remember user preferences or other information.

To use long-term memory, you need to:

- Provide a store when compiling the graph (or creating the agent).
- Access the store from within tools via the config object, as shown below:
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const getUserInfo = tool(
  async (_, config: LangGraphRunnableConfig) => {
    // Same as that provided to `builder.compile({ store })`
    // or `createReactAgent`
    // highlight-next-line
    const store = config.store;
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    const userInfo = await store.get(["users"], userId);
    return userInfo?.value ? JSON.stringify(userInfo.value) : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);
```
Extended example: Access long-term memory
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { InMemoryStore } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

// highlight-next-line
const store = new InMemoryStore(); // (1)!

// highlight-next-line
await store.put( // (2)!
  ["users"], // (3)!
  "user_123", // (4)!
  {
    name: "John Smith",
    language: "English",
  } // (5)!
);

const getUserInfo = tool(
  async (_, config: LangGraphRunnableConfig) => {
    // Same as that provided to `createReactAgent`
    // highlight-next-line
    const store = config.store; // (6)!
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    const userInfo = await store.get(["users"], userId); // (7)!
    return userInfo?.value ? JSON.stringify(userInfo.value) : "Unknown user";
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({}),
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [getUserInfo],
  // highlight-next-line
  store: store, // (8)!
});

// Run the agent
await agent.invoke(
  { messages: [{ role: "user", content: "look up user information" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } }
);
```
1. The InMemoryStore stores data in memory. In production, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform provides a production-ready store for you.
2. For this example, we write some sample data to the store using the put method. Please see the BaseStore.put API reference for more details.
3. The first argument is the namespace, which is used to group related data together. Here, we use the users namespace to group user data.
4. A key within the namespace. This example uses a user ID for the key.
5. The data that we want to store for the given user.
6. The store is accessible from the config object that is passed to the tool, which enables the tool to read from the store when running.
7. The get method retrieves data from the store. The first argument is the namespace and the second is the key. It returns a StoreValue object, which contains the value and metadata about the value.
8. The store is passed to the agent, which enables the agent to access the store when running tools.
To update information in the store:
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const saveUserInfo = tool(
  async (input, config: LangGraphRunnableConfig) => {
    // Same as that provided to `builder.compile({ store })`
    // or `createReactAgent`
    // highlight-next-line
    const store = config.store;
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    await store.put(["users"], userId, input.userInfo);
    return "Successfully saved user info.";
  },
  {
    name: "save_user_info",
    description: "Save user info.",
    schema: z.object({
      userInfo: z.string().describe("User information to save"),
    }),
  }
);
```
Extended example: Update long-term memory
```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { InMemoryStore } from "@langchain/langgraph";
import { ChatAnthropic } from "@langchain/anthropic";
import type { LangGraphRunnableConfig } from "@langchain/langgraph";

const store = new InMemoryStore(); // (1)!

const UserInfoSchema = z.object({ // (2)!
  name: z.string(),
});

const saveUserInfo = tool(
  async (input, config: LangGraphRunnableConfig) => { // (3)!
    // Same as that provided to `createReactAgent`
    // highlight-next-line
    const store = config.store; // (4)!
    if (!store) throw new Error("Store not provided");

    const userId = config?.configurable?.user_id;
    // highlight-next-line
    await store.put(["users"], userId, input); // (5)!
    return "Successfully saved user info.";
  },
  {
    name: "save_user_info",
    description: "Save user info.",
    schema: UserInfoSchema,
  }
);

const agent = createReactAgent({
  llm: new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" }),
  tools: [saveUserInfo],
  // highlight-next-line
  store: store,
});

// Run the agent
await agent.invoke(
  { messages: [{ role: "user", content: "My name is John Smith" }] },
  // highlight-next-line
  { configurable: { user_id: "user_123" } } // (6)!
);

// You can access the store directly to get the value
const userInfo = await store.get(["users"], "user_123");
console.log(userInfo?.value);
```
1. The InMemoryStore stores data in memory. In production, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform provides a production-ready store for you.
2. The UserInfoSchema is a Zod schema that defines the structure of the user information. The LLM uses it to format its response according to the schema.
3. The saveUserInfo function is a tool that allows an agent to update user information. This could be useful for a chat application where the user wants to update their profile information.
4. The store is accessible from the config object that is passed to the tool, which enables the tool to write to the store when running.
5. The put method stores data in the store. The first argument is the namespace, the second is the key, and the third is the value to store. Here, it stores the user information under the user's ID.
6. The user_id is passed in the config. This is used to identify the user whose information is being updated.
Use returnDirect: true to immediately return a tool's result without executing additional logic. This is useful for tools that should not trigger further processing or tool calls, allowing you to return results directly to the user.
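As a minimal sketch, you opt in per tool when defining it (the `add` tool here is illustrative):

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const add = tool(
  (input) => input.a + input.b,
  {
    name: "add",
    description: "Add two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    // highlight-next-line
    returnDirect: true, // end the run with this tool's output
  }
);
```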
Using without prebuilt components: If you are building a custom workflow and are not relying on createReactAgent or ToolNode, you will also need to implement the control flow that handles returnDirect: true, as sketched below.
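One way to sketch that control flow, assuming you track which tool names were defined with returnDirect: true (the `returnDirectTools` set is an assumption for illustration):

```typescript
import { ToolMessage } from "@langchain/core/messages";
import { END, MessagesZodState } from "@langchain/langgraph";
import { z } from "zod";

// Names of the tools defined with `returnDirect: true` (assumed set).
const returnDirectTools = new Set(["add"]);

// Conditional edge taken after the tools node: finish the run when the
// tool that just executed is marked return-direct, otherwise loop back.
const routeAfterTools = (state: z.infer<typeof MessagesZodState>) => {
  const lastMessage = state.messages.at(-1);
  if (
    lastMessage instanceof ToolMessage &&
    lastMessage.name &&
    returnDirectTools.has(lastMessage.name)
  ) {
    return END;
  }
  return "agent";
};
```

You would then wire it in with `.addConditionalEdges("tools", routeAfterTools, ["agent", END])` in place of a fixed tools-to-agent edge.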
If you need to force a specific tool to be used, you will need to configure this at the model level using the tool_choice parameter of the bindTools method. Force specific tool usage via tool_choice:
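A sketch with an OpenAI chat model (the `greet` tool is illustrative, and accepted tool_choice values vary by provider):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const greet = tool(
  (input) => `Hello ${input.userName}!`,
  {
    name: "greet",
    description: "Greet the user.",
    schema: z.object({
      userName: z.string(),
    }),
  }
);

const model = new ChatOpenAI({ model: "gpt-4o" });
// Force the model to call `greet` on every turn.
// highlight-next-line
const modelWithForcedTool = model.bindTools([greet], { tool_choice: "greet" });
```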
Forcing a tool call can cause the model to invoke it on every turn; set recursionLimit to restrict the number of execution steps, for example:
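A sketch at invocation time, assuming `graph` is the compiled graph from the earlier example:

```typescript
await graph.invoke(
  { messages: [{ role: "user", content: "what's the weather in sf?" }] },
  // Abort the run after at most 5 supersteps instead of the default 25.
  // highlight-next-line
  { recursionLimit: 5 }
);
```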
Tool choice configuration
The tool_choice parameter is used to configure which tool should be used by the model when it decides to call a tool. This is useful when you want to ensure that a specific tool is always called for a particular task, or when you want to override the model's default behavior of choosing a tool based on its internal logic. Note that not all models support this feature, and the exact configuration may vary depending on the model you are using.
LangGraph provides built-in error handling for tool execution through the prebuilt ToolNode component, which is used both independently and within prebuilt agents. By default, ToolNode catches exceptions raised during tool execution and returns them as ToolMessage objects with a status indicating an error.
```typescript
import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const multiply = tool(
  (input) => {
    if (input.a === 42) {
      throw new Error("The ultimate error");
    }
    return input.a * input.b;
  },
  {
    name: "multiply",
    description: "Multiply two numbers",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

// Default error handling (enabled by default)
const toolNode = new ToolNode([multiply]);

const message = new AIMessage({
  content: "",
  tool_calls: [
    {
      name: "multiply",
      args: { a: 42, b: 7 },
      id: "tool_call_id",
      type: "tool_call",
    },
  ],
});

const result = await toolNode.invoke({ messages: [message] });
```
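To let exceptions propagate to the caller instead, you can turn the built-in handling off via the handleToolErrors option noted above:

```typescript
// Disable error handling so the tool's exception propagates to the caller.
// highlight-next-line
const toolNodeNoHandling = new ToolNode([multiply], { handleToolErrors: false });

// This invocation now throws "The ultimate error" instead of returning
// a ToolMessage with an error status.
await toolNodeNoHandling.invoke({ messages: [message] });
```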
As the number of available tools grows, you may want to limit the scope of the LLM's selection, to decrease token consumption and to help manage sources of error in LLM reasoning. To address this, you can dynamically adjust the tools available to a model by retrieving relevant tools at runtime using semantic search. See the langgraph-bigtool prebuilt library for a ready-to-use implementation.
You can use prebuilt tools from model providers by passing an object with the tool spec to the tools parameter of createReactAgent. For example, to use the web_search_preview tool from OpenAI:
```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
  tools: [{ type: "web_search_preview" }],
});

const response = await agent.invoke({
  messages: [
    { role: "user", content: "What was a positive news story from today?" },
  ],
});
```
Please consult the documentation for the specific model you are using to see which tools are available and how to use them.
Additionally, LangChain supports a wide range of prebuilt tool integrations for interacting with APIs, databases, file systems, web data, and more. These tools extend the functionality of agents and enable rapid development. You can browse the full list of available integrations in the LangChain integrations directory. Some commonly used tool categories include:
Search: Tavily, SerpAPI
Code interpreters: Web browsers, calculators
Databases: SQL, vector databases
Web data: Web scraping and browsing
APIs: Various API integrations
These integrations can be configured and added to your agents using the same tools parameter shown in the examples above.