Implementation:BerriAI Litellm Litellm Logging
| Knowledge Sources | Domains | Last Updated |
|---|---|---|
| [[1]] | Observability, Payload Construction | 2026-02-15 |
Overview
A concrete mechanism, provided by the Logging class and the StandardLoggingPayload type in LiteLLM, for constructing standardized logging payloads from LLM API call metadata.
Description
The Logging class (in litellm_logging.py) is the central orchestrator for all observability events in LiteLLM. It is instantiated for every LLM API call and tracks the full lifecycle from pre-call through success or failure. It accumulates metadata (model, messages, stream mode, call type, timestamps, litellm parameters, dynamic callbacks) and, on completion, constructs a StandardLoggingPayload -- a TypedDict with over 30 fields that normalizes data across all supported LLM providers.
The StandardLoggingPayload (defined in litellm/types/utils.py) provides the canonical schema that every downstream integration handler consumes. Fields include identity (id, trace_id), performance metrics (startTime, endTime, completionStartTime, response_time), cost data (response_cost, cost_breakdown), token counts, model information, request/response content, status, and rich metadata.
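Because every provider's call is normalized into this one schema, a downstream consumer can be written once against its fields. A minimal, illustrative sketch of such a consumer (the export_payload function and the JSONL path are examples, not part of LiteLLM):

```python
import json

from litellm.types.utils import StandardLoggingPayload


def export_payload(payload: StandardLoggingPayload, path: str = "llm_calls.jsonl") -> None:
    """Append a provider-agnostic subset of the normalized payload to a JSONL file."""
    record = {
        "id": payload["id"],
        "trace_id": payload["trace_id"],
        "model": payload["model"],
        "provider": payload.get("custom_llm_provider"),
        "status": payload["status"],
        "total_tokens": payload["total_tokens"],
        "response_cost": payload["response_cost"],
        "response_time": payload["response_time"],
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```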
Usage
The Logging class is used internally by LiteLLM's completion(), acompletion(), embedding(), and other API entry points. Users do not typically instantiate it directly but interact with it through:
- Setting the `litellm.success_callback` and `litellm.failure_callback` lists (see the sketch after this list).
- Passing dynamic callbacks via keyword arguments.
- Accessing logged data through registered callback handlers.
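For the common case, a plain function can be placed in `litellm.success_callback`; LiteLLM then invokes it after each successful call with its logging kwargs. A minimal sketch (track_cost is a placeholder name, and the `standard_logging_object` key is the same one consumed in the callback example later on this page):

```python
import litellm


def track_cost(kwargs, completion_response, start_time, end_time):
    # LiteLLM passes its full logging kwargs to custom callbacks; the
    # standardized payload is available under "standard_logging_object".
    payload = kwargs.get("standard_logging_object")
    if payload is not None:
        print(f"{payload['model']}: ${payload['response_cost']:.6f}")


litellm.success_callback = [track_cost]

response = litellm.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
```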
Code Reference
Source Location
- `litellm/litellm_core_utils/litellm_logging.py` (lines 1-5432) -- `Logging` class
- `litellm/types/utils.py` (lines 2679-2720) -- `StandardLoggingPayload` TypedDict
Signature
class Logging(LiteLLMLoggingBaseClass):
    def __init__(
        self,
        model: str,
        messages,
        stream,
        call_type,
        start_time,
        litellm_call_id: str,
        function_id: str,
        litellm_trace_id: Optional[str] = None,
        dynamic_input_callbacks: Optional[List[Union[str, Callable, CustomLogger]]] = None,
        dynamic_success_callbacks: Optional[List[Union[str, Callable, CustomLogger]]] = None,
        dynamic_async_success_callbacks: Optional[List[Union[str, Callable, CustomLogger]]] = None,
        dynamic_failure_callbacks: Optional[List[Union[str, Callable, CustomLogger]]] = None,
        dynamic_async_failure_callbacks: Optional[List[Union[str, Callable, CustomLogger]]] = None,
        applied_guardrails: Optional[List[str]] = None,
        kwargs: Optional[Dict] = None,
        log_raw_request_response: bool = False,
    ): ...

    def success_handler(
        self, result=None, start_time=None, end_time=None, cache_hit=None, **kwargs
    ) -> None: ...

    def failure_handler(
        self, exception, traceback_exception, start_time=None, end_time=None
    ) -> None: ...
class StandardLoggingPayload(TypedDict):
    id: str
    trace_id: str
    call_type: str
    stream: Optional[bool]
    response_cost: float
    cost_breakdown: Optional[CostBreakdown]
    response_cost_failure_debug_info: Optional[StandardLoggingModelCostFailureDebugInformation]
    status: StandardLoggingPayloadStatus
    status_fields: StandardLoggingPayloadStatusFields
    custom_llm_provider: Optional[str]
    total_tokens: int
    prompt_tokens: int
    completion_tokens: int
    startTime: float
    endTime: float
    completionStartTime: float
    response_time: float
    model_map_information: StandardLoggingModelInformation
    model: str
    model_id: Optional[str]
    model_group: Optional[str]
    api_base: str
    metadata: StandardLoggingMetadata
    cache_hit: Optional[bool]
    cache_key: Optional[str]
    saved_cache_cost: float
    request_tags: list
    end_user: Optional[str]
    requester_ip_address: Optional[str]
    messages: Optional[Union[str, list, dict]]
    response: Optional[Union[str, list, dict]]
    error_str: Optional[str]
    error_information: Optional[StandardLoggingPayloadErrorInformation]
    model_parameters: dict
    hidden_params: StandardLoggingHiddenParams
    guardrail_information: Optional[List[StandardLoggingGuardrailInformation]]
    standard_built_in_tools_params: Optional[StandardBuiltInToolsParams]
Import
from litellm.litellm_core_utils.litellm_logging import Logging
from litellm.types.utils import StandardLoggingPayload
I/O Contract
Inputs (Logging.__init__)
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | `str` | Yes | The model name being called (e.g., `"gpt-4"`) |
| `messages` | `Any` | Yes | The input messages or prompt. Strings are auto-converted to chat format. |
| `stream` | `bool` | Yes | Whether the call uses streaming mode |
| `call_type` | `str` | Yes | The type of API call (e.g., `"completion"`, `"embedding"`) |
| `start_time` | `datetime` | Yes | When the call was initiated |
| `litellm_call_id` | `str` | Yes | Unique identifier for this specific call |
| `function_id` | `str` | Yes | Identifier for the calling function |
| `litellm_trace_id` | `Optional[str]` | No | Trace ID grouping related calls; auto-generated if not provided |
| `dynamic_*_callbacks` | `Optional[List]` | No | Per-request callback overrides for each lifecycle point |
| `kwargs` | `Optional[Dict]` | No | Full keyword arguments from the original API call |
Outputs (StandardLoggingPayload)
| Field | Type | Description |
|---|---|---|
| `id` | `str` | Unique call identifier |
| `trace_id` | `str` | Groups retries/fallbacks for the same logical request |
| `response_cost` | `float` | Calculated cost in USD |
| `status` | `"success"` or `"failure"` | Outcome of the call |
| `total_tokens` | `int` | Sum of prompt and completion tokens |
| `startTime` / `endTime` | `float` | Epoch timestamps |
| `response_time` | `float` | Total latency in seconds |
| `messages` | `Optional[Union[str, list, dict]]` | Input content (may be redacted) |
| `response` | `Optional[Union[str, list, dict]]` | Output content (may be redacted) |
| `metadata` | `StandardLoggingMetadata` | Rich metadata including user info, tags, and key info |
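As a brief illustration of how these output fields might be consumed, the sketch below (summarize is a hypothetical helper, not part of LiteLLM) branches on `status` and falls back to `error_str` for failed calls:

```python
from litellm.types.utils import StandardLoggingPayload


def summarize(payload: StandardLoggingPayload) -> str:
    """Render a one-line summary from the normalized payload fields."""
    if payload["status"] == "success":
        return (
            f"{payload['model']}: {payload['total_tokens']} tokens, "
            f"${payload['response_cost']:.6f}, {payload['response_time']:.3f}s"
        )
    # On failure, error_str carries the stringified exception; error_information
    # (when populated) holds more structured detail.
    return f"{payload['model']} failed: {payload.get('error_str') or 'unknown error'}"
```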
Usage Examples
How Logging Is Created Internally
# Inside litellm.completion() -- simplified
from datetime import datetime

from litellm.litellm_core_utils.litellm_logging import Logging

logging_obj = Logging(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    stream=False,
    call_type="completion",
    start_time=datetime.now(),
    litellm_call_id="call-abc123",
    function_id="completion",
    kwargs=kwargs,  # the full keyword arguments of the original completion() call
)

# After a successful response (response, start, end come from the surrounding call):
logging_obj.success_handler(result=response, start_time=start, end_time=end)

# After a failure (exc and tb come from the surrounding exception handler):
logging_obj.failure_handler(exception=exc, traceback_exception=tb)
Accessing StandardLoggingPayload in a Callback
from litellm.integrations.custom_logger import CustomLogger
from litellm.types.utils import StandardLoggingPayload


class MyObserver(CustomLogger):
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # LiteLLM attaches the fully built payload to the callback kwargs.
        standard_logging_payload: StandardLoggingPayload = kwargs.get(
            "standard_logging_object", {}
        )
        print(f"Model: {standard_logging_payload['model']}")
        print(f"Cost: ${standard_logging_payload['response_cost']:.6f}")
        print(f"Tokens: {standard_logging_payload['total_tokens']}")
        print(f"Latency: {standard_logging_payload['response_time']:.3f}s")