
Heuristic: LangChain Error Context Preservation

From Leeroopedia
Knowledge Sources
Domains Error_Handling, Debugging
Last Updated 2026-02-11 14:00 GMT

Overview

When LLM API calls fail, extract and preserve maximum error context (response body, headers, status code, request ID) using defensive `hasattr()` checks.

Description

LangChain's error handling in `_generate_response_from_error()` demonstrates a pattern for extracting structured error information from provider API exceptions. Rather than letting errors propagate with minimal context, the function attempts to extract the response body (JSON or text), headers, HTTP status code, and request ID from the error object. Each extraction is wrapped in its own try-except to prevent cascading failures.

Usage

Apply this heuristic when implementing error handling for external API calls, particularly in custom chat model integrations or tool implementations. This pattern ensures debugging information is preserved even when the error object has an unexpected structure.

The Insight (Rule of Thumb)

  • Action: Wrap each field extraction in its own try-except block. Use `hasattr()` before accessing attributes.
  • Value: Capture `response.json()`, `response.text`, `response.headers`, `response.status_code`, and `error.request_id`.
  • Trade-off: Slightly more verbose code, but drastically improves debuggability of production failures.
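The rule of thumb above can be sketched as a standalone helper. This is a minimal illustration of the heuristic, not LangChain's actual API; the function name and the fake error classes are hypothetical.

```python
def extract_error_metadata(error: BaseException) -> dict:
    """Defensively pull debugging context off a provider error object."""
    metadata: dict = {}
    response = getattr(error, "response", None)
    if response is not None:
        # Body: prefer parsed JSON, fall back to raw text, then None.
        if hasattr(response, "json"):
            try:
                metadata["body"] = response.json()
            except Exception:
                metadata["body"] = getattr(response, "text", None)
        # Headers: converting to dict can fail on exotic header objects.
        if hasattr(response, "headers"):
            try:
                metadata["headers"] = dict(response.headers)
            except Exception:
                metadata["headers"] = None
        if hasattr(response, "status_code"):
            metadata["status_code"] = response.status_code
    if hasattr(error, "request_id"):
        metadata["request_id"] = error.request_id
    return metadata


# Hypothetical stand-ins for an SDK error, used only to exercise the helper.
class FakeResponse:
    status_code = 429
    headers = {"retry-after": "2"}

    def json(self):
        return {"error": "rate_limited"}


class FakeError(Exception):
    response = FakeResponse()
    request_id = "req_123"


meta = extract_error_metadata(FakeError())
```

Because every access goes through `hasattr()`/`getattr()` with its own fallback, the helper works unchanged across SDKs whose error objects expose only a subset of these fields.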

Reasoning

LLM provider errors come from different SDKs (OpenAI, Anthropic, etc.) with varying error object structures. A single try-except around all extractions would lose all context if any one field fails to extract. By isolating each extraction, you get partial context even when the error object is incomplete or malformed.
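To see why isolation matters, consider a hypothetical malformed response where both the JSON parse and the header conversion fail. With per-field try-excepts, the status code and the raw text body still survive; a single outer try-except would have discarded everything.

```python
# Hypothetical malformed response: json() raises, headers is not
# convertible to a dict, but status_code and text are intact.
class MalformedResponse:
    status_code = 502
    text = "<html>Bad Gateway</html>"
    headers = object()  # dict(...) on this raises TypeError

    def json(self):
        raise ValueError("body is not JSON")


metadata: dict = {}
resp = MalformedResponse()

if hasattr(resp, "json"):
    try:
        metadata["body"] = resp.json()
    except Exception:
        # JSON parse failed; keep the raw text instead of losing the body.
        metadata["body"] = getattr(resp, "text", None)
if hasattr(resp, "headers"):
    try:
        metadata["headers"] = dict(resp.headers)
    except Exception:
        # Header conversion failed, but nothing else is affected.
        metadata["headers"] = None
if hasattr(resp, "status_code"):
    metadata["status_code"] = resp.status_code
```

The result retains two of the three fields, which is usually enough to identify the failing request class in production logs.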

The extracted metadata is attached to a `ChatGeneration` response, making it accessible through LangChain's standard output pipeline and LangSmith tracing.

Code evidence from `libs/core/langchain_core/language_models/chat_models.py:84-111`:

def _generate_response_from_error(error: BaseException) -> list[ChatGeneration]:
    if hasattr(error, "response"):
        response = error.response
        metadata: dict = {}
        if hasattr(response, "json"):
            try:
                metadata["body"] = response.json()
            except Exception:
                try:
                    metadata["body"] = getattr(response, "text", None)
                except Exception:
                    metadata["body"] = None
        if hasattr(response, "headers"):
            try:
                metadata["headers"] = dict(response.headers)
            except Exception:
                metadata["headers"] = None
        if hasattr(response, "status_code"):
            metadata["status_code"] = response.status_code
        if hasattr(error, "request_id"):
            metadata["request_id"] = error.request_id
