Define a tool
Define a basic tool with the @tool decorator:
Run a tool
Tools conform to the Runnable interface, which means you can run a tool using the invoke method:
If the tool is invoked with a ToolCall (a dict with type="tool_call"), it will return a ToolMessage:
Use in an agent
To create a tool-calling agent, you can use the prebuilt create_react_agent:
Dynamically select tools
Configure tool availability at runtime based on context (new in langgraph>=0.6):
Use in a workflow
If you are writing a custom workflow, you will need to:
- register the tools with the chat model
- call the tool if the model decides to use it
Use model.bind_tools() to register the tools with the model.
Extended example: attach tools to a chat model
ToolNode
To execute tools in custom workflows, use the prebuilt ToolNode or implement your own custom node.
ToolNode is a specialized node for executing tools in a workflow. It provides the following features:
- Supports both synchronous and asynchronous tools.
- Executes multiple tools concurrently.
- Handles errors during tool execution (handle_tool_errors=True, enabled by default). See handling tool errors for more details.
ToolNode operates on MessagesState:
- Input: MessagesState, where the last message is an AIMessage containing the tool_calls parameter.
- Output: MessagesState updated with the resulting ToolMessage from executed tools.
Single tool call
Multiple tool calls
ToolNode will execute both tools in parallel.
Use with a chat model
- Use .bind_tools() to attach the tool schema to the chat model.
Use in a tool-calling agent
This is an example of creating a tool-calling agent from scratch using ToolNode.
You can also use LangGraph's prebuilt agent.
Tool customization
For more control over tool behavior, use the @tool decorator.
Parameter descriptions
Auto-generate descriptions from docstrings:
Explicit input schema
Define schemas using args_schema:
Tool name
Override the default tool name using the first argument or name property:
Context management
Tools within LangGraph sometimes require context data, such as runtime-only arguments (e.g., user IDs or session details), that should not be controlled by the model. LangGraph provides three methods for managing such context:

Type | Usage Scenario | Mutable | Lifetime |
---|---|---|---|
Configuration | Static, immutable runtime data | ❌ | Single invocation |
Short-term memory | Dynamic, changing data during invocation | ✅ | Single invocation |
Long-term memory | Persistent, cross-session data | ✅ | Across multiple sessions |
Configuration
Use configuration when you have immutable runtime data that tools require, such as user identifiers. You pass these arguments via RunnableConfig at invocation and access them in the tool:
Extended example: Access config in tools
Short-term memory
Short-term memory maintains dynamic state that changes during a single execution. To access (read) the graph state inside the tools, you can use a special parameter annotation, InjectedState:
To update short-term memory, a tool can return a Command, for example to update user_name and append a confirmation message:
If you want to use tools that return Command and update graph state, you can either use the prebuilt create_react_agent / ToolNode components, or implement your own tool-executing node that collects the Command objects returned by the tools and returns a list of them, e.g.:
Long-term memory
Use long-term memory to store user-specific or application-specific data across conversations. This is useful for applications like chatbots, where you want to remember user preferences or other information. To use long-term memory, you need to:
- Configure a store to persist data across invocations.
- Access the store from within tools.
Access long-term memory
1. The InMemoryStore is a store that keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform will provide a production-ready store for you.
2. For this example, we write some sample data to the store using the put method. Please see the BaseStore.put API reference for more details.
3. The first argument is the namespace. This is used to group related data together. In this case, we are using the users namespace to group user data.
4. A key within the namespace. This example uses a user ID for the key.
5. The data that we want to store for the given user.
6. The get_store function is used to access the store. You can call it from anywhere in your code, including tools and prompts. This function returns the store that was passed to the agent when it was created.
7. The get method is used to retrieve data from the store. The first argument is the namespace, and the second argument is the key. This will return a StoreValue object, which contains the value and metadata about the value.
8. The store is passed to the agent. This enables the agent to access the store when running tools. You can also use the get_store function to access the store from anywhere in your code.
Update long-term memory
1. The InMemoryStore is a store that keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform will provide a production-ready store for you.
2. The UserInfo class is a TypedDict that defines the structure of the user information. The LLM will use this to format the response according to the schema.
3. The save_user_info function is a tool that allows an agent to update user information. This could be useful for a chat application where the user wants to update their profile information.
4. The get_store function is used to access the store. You can call it from anywhere in your code, including tools and prompts. This function returns the store that was passed to the agent when it was created.
5. The put method is used to store data in the store. The first argument is the namespace, and the second argument is the key. This will store the user information in the store.
6. The user_id is passed in the config. This is used to identify the user whose information is being updated.
Advanced tool features
Immediate return
Use return_direct=True to immediately return a tool's result without executing additional logic.
This is useful for tools that should not trigger further processing or tool calls, allowing you to return results directly to the user.
Extended example: Using return_direct in a prebuilt agent
Using without prebuilt components
If you are building a custom workflow and are not relying on create_react_agent or ToolNode, you will also need to implement the control flow to handle return_direct=True.
Force tool use
If you need to force a specific tool to be used, you will need to configure this at the model level using the tool_choice parameter in the bind_tools method.
Force specific tool usage via tool_choice:
Extended example: Force tool usage in an agent
To force the agent to use specific tools, you can set the tool_choice option in model.bind_tools():
Avoid infinite loops
Forcing tool usage without stopping conditions can create infinite loops. Use one of the following safeguards:
- Mark the tool with return_direct=True to end the loop after execution.
- Set recursion_limit to restrict the number of execution steps.
Tool choice configuration
The tool_choice parameter configures which tool the model should call. This is useful when you want to ensure that a specific tool is always called for a particular task, or when you want to override the model's default behavior of choosing a tool based on its internal logic. Note that not all models support this feature, and the exact configuration may vary depending on the model you are using.
Disable parallel calls
For supported providers, you can disable parallel tool calling by setting parallel_tool_calls=False via the model.bind_tools() method:
Extended example: disable parallel tool calls in a prebuilt agent
Handle errors
LangGraph provides built-in error handling for tool execution through the prebuilt ToolNode component, used both independently and in prebuilt agents. By default, ToolNode catches exceptions raised during tool execution and returns them as ToolMessage objects with a status indicating an error.
Disable error handling
To propagate exceptions directly, disable error handling:
Custom error messages
Provide a custom error message by setting the error handling parameter to a string:
Error handling in agents
Error handling in prebuilt agents (create_react_agent) leverages ToolNode:
Handle large numbers of tools
As the number of available tools grows, you may want to limit the scope of the LLM's selection, to decrease token consumption and to help manage sources of error in LLM reasoning. To address this, you can dynamically adjust the tools available to a model by retrieving relevant tools at runtime using semantic search. See the langgraph-bigtool prebuilt library for a ready-to-use implementation.
Prebuilt tools
LLM provider tools
You can use prebuilt tools from model providers by passing a dictionary with tool specs to the tools parameter of create_react_agent. For example, to use the web_search_preview tool from OpenAI:
LangChain tools
Additionally, LangChain supports a wide range of prebuilt tool integrations for interacting with APIs, databases, file systems, web data, and more. These tools extend the functionality of agents and enable rapid development. You can browse the full list of available integrations in the LangChain integrations directory. Some commonly used tool categories include:
- Search: Bing, SerpAPI, Tavily
- Code interpreters: Python REPL, Node.js REPL
- Databases: SQL, MongoDB, Redis
- Web data: Web scraping and browsing
- APIs: OpenWeatherMap, NewsAPI, and others
These integrations can be passed to an agent via the tools parameter shown in the examples above.