
Workflow: LangChain Tool Calling and Structured Output

From Leeroopedia
Knowledge Sources
Domains LLMs, Tool_Calling, Structured_Output
Last Updated 2026-02-11 15:00 GMT

Overview

End-to-end process for binding tools to a chat model and extracting structured output through function calling or JSON schema enforcement.

Description

This workflow covers two closely related capabilities in LangChain: tool calling (giving the model access to callable functions it can invoke) and structured output (forcing the model to return data conforming to a specific schema). Both capabilities build on top of the chat model invocation flow and use bind_tools() and with_structured_output() respectively. Tool calling allows the model to request external actions (database lookups, API calls, calculations), while structured output ensures responses match a Pydantic model or JSON schema without requiring the model to actually call any function.

Usage

Execute this workflow when you need a chat model to either invoke external tools during a conversation (agent-style interactions) or return responses in a specific structured format (data extraction, classification, form filling). Tool calling is appropriate for agentic workflows where the model decides which actions to take. Structured output is appropriate when you need guaranteed schema compliance in the response.

Execution Steps

Step 1: Define Tools or Output Schema

Create the tool definitions or output schema that the model will use. For tool calling, define tools as Python functions with type annotations, Pydantic models, or raw JSON schema dictionaries. LangChain's BaseTool class and the @tool decorator provide structured ways to define tools with names, descriptions, and argument schemas. For structured output, define a Pydantic model or JSON schema that describes the desired response format.

Key considerations:

  • Tool descriptions are critical for the model to understand when and how to use each tool
  • Pydantic models automatically generate JSON schemas for both tools and structured output
  • Type annotations on function arguments become part of the tool's parameter schema

Step 2: Bind Tools to the Chat Model

Attach tool definitions to the chat model using bind_tools(). This method converts tool definitions into the provider's native format (OpenAI function schema, Anthropic tool schema, etc.) using convert_to_openai_tool() or equivalent converters. The method returns a new Runnable that includes the tools in every subsequent API call. Optional parameters control tool choice behavior: "auto" lets the model decide, "any" or "required" forces tool use, or a specific tool name forces that particular tool.

Key considerations:

  • bind_tools() returns a new runnable; it does not modify the original model
  • Tool choice "auto" is the default, letting the model decide whether to use tools
  • Strict mode (supported by some providers, e.g. OpenAI) constrains generated tool arguments to conform exactly to the declared schema
  • Parallel tool calls can be enabled or disabled per provider

Step 3: Invoke the Model with Tools

Send a prompt to the tool-bound model. The model processes the prompt along with the tool definitions and decides whether to respond with text, request one or more tool calls, or both. The response AIMessage contains a tool_calls attribute: a list of ToolCall objects, each with the tool name, a dictionary of arguments, and a unique call ID.

Key considerations:

  • The model may return zero, one, or multiple tool calls in a single response
  • Tool calls include parsed arguments (not raw JSON strings)
  • The model may also include regular text content alongside tool calls

Step 4: Execute Tool Calls and Return Results

For each tool call in the response, execute the corresponding tool function with the provided arguments. Wrap the result in a ToolMessage with the tool call ID, and append both the AI's tool call message and the tool result messages back to the conversation history. This creates the feedback loop the model needs to incorporate tool results.

Key considerations:

  • Tool messages must reference the original tool call ID for proper pairing
  • Multiple tool calls should be executed and their results returned together
  • Error handling should produce a ToolMessage with error information rather than crashing

Step 5: Continue the Conversation Loop

Send the updated conversation (including tool results) back to the model. The model processes the tool results and either produces a final text response, requests additional tool calls, or combines both. This loop continues until the model produces a response without tool calls, indicating the task is complete.

Key considerations:

  • Agent frameworks (like LangGraph) automate this loop
  • Set a maximum iteration limit to prevent infinite tool-calling loops
  • The complete message history (user, AI tool calls, tool results) must be maintained

Step 6: Extract Structured Output (Alternative Path)

For structured output without tool execution, use with_structured_output() instead of bind_tools(). This method accepts a Pydantic model or JSON schema and returns a runnable that automatically parses the model's response into the specified format. Three methods are available: "function_calling" (default, uses tool calling internally), "json_mode" (instructs the model to output JSON), and "json_schema" (uses the provider's native JSON schema enforcement). The output is a Pydantic instance or dictionary matching the schema.

Key considerations:

  • "function_calling" method works across most providers
  • "json_schema" method provides stronger guarantees but is provider-specific
  • include_raw=True returns both the raw AIMessage and the parsed output
  • The model never actually calls a function; the tool mechanism is used purely for schema enforcement
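A minimal sketch of the structured-output path; the `Person` schema is illustrative, and the commented calls assume an initialized chat model `llm` with provider API access:

```python
from pydantic import BaseModel, Field

class Person(BaseModel):
    """Target schema the model's reply must conform to."""
    name: str = Field(description="Full name")
    age: int = Field(description="Age in years")

# With a real chat model (requires API access):
# structured_llm = llm.with_structured_output(Person, include_raw=True)
# out = structured_llm.invoke("Extract: Ada Lovelace, 36 years old.")
# out["parsed"] is a Person instance; out["raw"] is the underlying AIMessage.
```

Without `include_raw=True`, the runnable returns the parsed `Person` instance (or a dict, if a raw JSON schema was supplied) directly.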

Execution Diagram

GitHub URL

Workflow Repository