Implementation: LangChain Tool Decorator
| Knowledge Sources | |
|---|---|
| Domains | NLP, Tool_Use |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
A decorator and base class provided by langchain-core for defining LLM-callable tools from Python functions.
Description
The @tool decorator converts a Python function into a BaseTool instance. It automatically extracts the tool name (from the function name), description (from the docstring), and args schema (from type annotations). For complex argument validation, users can supply a Pydantic model via args_schema. BaseTool is the base class for all tools and, as a Runnable, exposes invoke(), ainvoke(), and batch() methods.
Usage
Use @tool for simple functions. Subclass BaseTool when you need custom validation, async execution, or complex input handling.
Code Reference
Source Location
- Repository: langchain
- File: libs/core/langchain_core/tools/convert.py (decorator), libs/core/langchain_core/tools/base.py (BaseTool class)
- Lines: convert.py L17-27 (@tool); base.py L405-1100+ (BaseTool)
Signature
# @tool decorator (overloaded)
@overload
def tool(
    *,
    description: str | None = None,
    return_direct: bool = False,
    args_schema: ArgsSchema | None = None,
    infer_schema: bool = True,
    response_format: Literal["content", "content_and_artifact"] = "content",
    parse_docstring: bool = False,
    error_on_invalid_docstring: bool = True,
    extras: dict[str, Any] | None = None,
) -> Callable[[Callable | Runnable], BaseTool]: ...

@overload
def tool(
    name_or_callable: Callable,
    *,
    description: str | None = None,
    return_direct: bool = False,
    args_schema: ArgsSchema | None = None,
    infer_schema: bool = True,
    response_format: Literal["content", "content_and_artifact"] = "content",
    parse_docstring: bool = False,
    error_on_invalid_docstring: bool = True,
    extras: dict[str, Any] | None = None,
) -> BaseTool: ...

# BaseTool class
class BaseTool(RunnableSerializable[str | dict | ToolCall, Any]):
    name: str
    description: str
    args_schema: ArgsSchema | None = None
    return_direct: bool = False
    verbose: bool = False
    handle_tool_error: bool | str | Callable[[ToolException], str] | None = False
Import
from langchain_core.tools import tool, BaseTool
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| name_or_callable | Callable | Positional; omitted in the keyword-only overload | Python function to wrap as a tool |
| description | str or None | No | Override tool description (default: docstring) |
| args_schema | type[BaseModel] or None | No | Pydantic model for argument validation |
| return_direct | bool | No (default: False) | Return tool result directly to user |
| parse_docstring | bool | No (default: False) | Extract parameter docs from docstring |
Outputs
| Name | Type | Description |
|---|---|---|
| return | BaseTool | Tool instance with name, description, and JSON schema |
Usage Examples
Simple @tool Decorator
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b

# Tool metadata is auto-extracted
print(multiply.name)         # "multiply"
print(multiply.description)  # "Multiply two integers together."
print(multiply.args_schema.model_json_schema())
# {"properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
#  "required": ["a", "b"], "type": "object"}  (abridged; keys such as "title" omitted)
BaseTool Subclass
from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="Search query string")
    max_results: int = Field(default=5, description="Maximum results to return")

class SearchTool(BaseTool):
    name: str = "web_search"
    description: str = "Search the web for information"
    args_schema: type[BaseModel] = SearchInput

    def _run(self, query: str, max_results: int = 5) -> str:
        # Implementation here
        return f"Results for: {query}"