
Environment:Truera Trulens OpenAI Provider Environment

From Leeroopedia
Knowledge Sources
Domains Infrastructure, LLM_Evaluation
Last Updated 2026-02-14 08:00 GMT

Overview

Python environment with OpenAI SDK >= 1.52.1 and API key credentials for LLM-as-a-Judge feedback evaluation.

Description

This environment extends the core TruLens environment with the OpenAI Python SDK for using OpenAI models (GPT-4, GPT-3.5, etc.) as feedback evaluation judges. The provider wraps the OpenAI chat completions API and includes structured output support, rate limiting, and automatic retry with exponential backoff. It is the most commonly used feedback provider in TruLens workflows.

Usage

Use this environment when configuring OpenAI as the feedback provider for LLM-as-a-Judge evaluation. It is required for the RAG Evaluation and Guardrails workflows, and for any workflow that uses `trulens.providers.openai.OpenAI` for scoring.
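A minimal configuration sketch, assuming the packages listed under Dependencies are installed and `OPENAI_API_KEY` is set (the `model_engine` value and the `relevance` feedback function are illustrative choices, not requirements of this environment):

```python
# Sketch: wire up OpenAI as the LLM-as-a-Judge feedback provider.
from trulens.core import Feedback
from trulens.providers.openai import OpenAI

# The provider reads OPENAI_API_KEY from the environment by default.
provider = OpenAI(model_engine="gpt-4o")

# A feedback function that scores answer relevance, judged by the model above.
f_relevance = Feedback(provider.relevance).on_input_output()
```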

System Requirements

  • OS: Any (OS independent). Linux, macOS, and Windows are all supported.
  • Python: >= 3.9, same as trulens-core.
  • Network: Internet access, required for OpenAI API calls.

Dependencies

Python Packages

  • `trulens-core` >= 2.0.0
  • `trulens-feedback` >= 2.0.0
  • `openai` >= 1.52.1, < 2.0.0
  • `langchain-community` >= 0.3.29

Credentials

The following credentials must be available at runtime:

  • `OPENAI_API_KEY`: OpenAI API key for chat completions access. Can be set via environment variable or passed directly to the provider constructor.
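The resolution order implied above can be sketched with a small helper; `resolve_api_key` is a hypothetical illustration, not part of the TruLens API:

```python
import os
from typing import Optional

def resolve_api_key(explicit_key: Optional[str] = None) -> Optional[str]:
    # Illustrative resolution order: a key passed directly to the
    # provider constructor wins over the environment variable.
    return explicit_key or os.environ.get("OPENAI_API_KEY")
```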

Quick Install

# Install OpenAI provider with all dependencies
pip install "trulens-providers-openai>=2.6.0"

# Set your API key
export OPENAI_API_KEY="sk-..."

Code Evidence

Rate limiting defaults from `src/core/trulens/core/feedback/endpoint.py:48-49`:

DEFAULT_RPM = 60
"""Default requests per minute for endpoints."""
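A 60 requests-per-minute budget amounts to pacing calls at most once per second:

```python
DEFAULT_RPM = 60  # default requests per minute, as shown above

# Minimum spacing between requests implied by the rpm budget.
min_interval_s = 60.0 / DEFAULT_RPM
```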

Retry configuration from `src/core/trulens/core/feedback/endpoint.py:194-198`:

rpm: float = DEFAULT_RPM
"""Requests per minute."""

retries: int = 3
"""Retries (if performing requests using this class)."""
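The retry behavior these defaults control can be sketched as a generic helper; `call_with_retry` and its delay schedule are illustrative, not the actual TruLens implementation:

```python
import time

def call_with_retry(fn, retries=3, base_delay=1.0):
    """Illustrative sketch: retry fn up to `retries` times with
    exponential backoff (1s, 2s, 4s with the defaults here)."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(base_delay * (2 ** attempt))
```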

Non-retryable error pattern from `src/core/trulens/core/feedback/endpoint.py:56-63`:

_RE_NO_RETRY = re.compile(
    "("
    + ("|".join(["authentication", "unauthorized", "expired", "quota"]))
    + ")",
    re.IGNORECASE,
)
"""Pattern matched against request exceptions to determine whether they should
be aborted right away instead of retried."""
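Since the pattern is plain `re`, its effect is easy to check directly; `should_retry` is a hypothetical wrapper added for illustration:

```python
import re

# Same pattern as in endpoint.py: errors matching it are treated as
# fatal and aborted immediately instead of retried.
_RE_NO_RETRY = re.compile(
    "(" + "|".join(["authentication", "unauthorized", "expired", "quota"]) + ")",
    re.IGNORECASE,
)

def should_retry(message: str) -> bool:
    """True when the error message looks transient (no fatal keyword)."""
    return _RE_NO_RETRY.search(message) is None
```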

Common Errors

  • `AuthenticationError` / `Unauthorized`: invalid or missing API key. Set the `OPENAI_API_KEY` environment variable to a valid key.
  • `RateLimitError`: exceeded OpenAI API rate limits. Reduce the `rpm` parameter on the endpoint or upgrade your OpenAI plan.
  • `quota` errors: API quota exhausted. Check OpenAI billing; these errors are not retried automatically.

Compatibility Notes

  • OpenAI SDK v2: Not yet supported; constrained to `< 2.0.0`.
  • Reasoning models (o1, o3): Temperature parameter is not passed to reasoning models; `reasoning_effort` is used instead (defaults to `"medium"`).
  • Azure OpenAI: Supported via `AZURE_API_BASE` environment variable through the LiteLLM provider.
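The reasoning-model note can be illustrated with a hypothetical parameter-selection helper (the model-name prefixes and the helper itself are assumptions based on the note above, not the provider's actual code):

```python
def judge_params(model: str, temperature: float = 0.0) -> dict:
    # Reasoning models (o1/o3 families) do not accept temperature;
    # they take reasoning_effort instead, defaulting to "medium".
    if model.startswith(("o1", "o3")):
        return {"model": model, "reasoning_effort": "medium"}
    return {"model": model, "temperature": temperature}
```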
