
Implementation:Truera Trulens OpenAI Provider

From Leeroopedia
Knowledge Sources
Domains LLM_Evaluation, NLP
Last Updated 2026-02-14 08:00 GMT

Overview

Concrete tool for configuring an OpenAI-backed LLM evaluation provider, supplied by the trulens-providers-openai package.

Description

The OpenAI provider class wraps the OpenAI API to provide LLM-as-a-Judge evaluation capabilities. It inherits from LLMProvider and exposes pre-built feedback methods such as context_relevance, relevance, groundedness_measure_with_cot_reasons, and tool_selection_with_cot_reasons. The provider handles API authentication, rate limiting (via the Pace utility), and model selection.

Usage

Import and instantiate this provider after initializing a TruSession. Use it when you need OpenAI models (e.g., GPT-4o-mini or GPT-4o) to serve as judges for feedback function evaluation. The default model is gpt-4o-mini; for higher-quality evaluations, specify a larger model via the model_engine argument.
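A minimal setup sketch of the pattern described above. This assumes the trulens-core and trulens-providers-openai packages are installed and an OPENAI_API_KEY environment variable is set; it is illustrative, not a definitive recipe.

```python
# Sketch only: requires trulens-core, trulens-providers-openai,
# and an OPENAI_API_KEY environment variable.
from trulens.core import TruSession
from trulens.providers.openai import OpenAI

session = TruSession()  # initialize the session before creating providers

# Default judge model is gpt-4o-mini; override via model_engine
# when you want a stronger judge.
provider = OpenAI(model_engine="gpt-4o")
```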

Code Reference

Source Location

  • Repository: trulens
  • File: src/providers/openai/trulens/providers/openai/provider.py
  • Lines: L50-109

Signature

class OpenAI(llm_provider.LLMProvider):
    DEFAULT_MODEL_ENGINE: ClassVar[str] = "gpt-4o-mini"

    def __init__(
        self,
        *args,
        endpoint=None,
        pace: Optional[pace_utils.Pace] = None,
        rpm: Optional[int] = None,
        model_engine: Optional[str] = None,
        **kwargs: dict,
    ):
        """
        Args:
            pace: Optional Pace object for rate limiting.
            rpm: Requests per minute rate limit.
            model_engine: OpenAI model name (default: "gpt-4o-mini").
            **kwargs: Additional arguments passed to OpenAIEndpoint and
                then to the OpenAI client.
        """

Import

from trulens.providers.openai import OpenAI

I/O Contract

Inputs

Name Type Required Description
model_engine str No OpenAI model name (default: "gpt-4o-mini")
rpm int No Rate limit in requests per minute
pace Pace No Pace object for rate limiting
**kwargs dict No Additional args passed to OpenAI client (e.g., api_key, organization)
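To make the rpm input concrete: a requests-per-minute cap implies a minimum spacing between judge calls. The helper below is a hypothetical illustration of that arithmetic, not part of the trulens API (which handles pacing internally via the Pace utility).

```python
def min_interval_seconds(rpm: int) -> float:
    """Hypothetical helper: minimum spacing between requests
    implied by a requests-per-minute cap (not a trulens API)."""
    if rpm <= 0:
        raise ValueError("rpm must be positive")
    return 60.0 / rpm

# rpm=60 paces requests at least one second apart;
# rpm=120 halves that interval.
print(min_interval_seconds(60))
print(min_interval_seconds(120))
```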

Outputs

Name Type Description
return OpenAI Provider instance with feedback methods (context_relevance, relevance, groundedness_measure_with_cot_reasons, etc.)

Usage Examples

Basic Provider Setup

from trulens.providers.openai import OpenAI

# Uses default gpt-4o-mini model
provider = OpenAI()

Custom Model and Rate Limiting

from trulens.providers.openai import OpenAI

# Use GPT-4o with rate limiting
provider = OpenAI(
    model_engine="gpt-4o",
    rpm=60  # 60 requests per minute
)

Using Provider Feedback Methods

from trulens.providers.openai import OpenAI

provider = OpenAI()

# Available feedback methods:
# provider.context_relevance(question, context) -> float
# provider.relevance(prompt, response) -> float
# provider.groundedness_measure_with_cot_reasons(source, statement) -> Tuple[float, Dict]
# provider.tool_selection_with_cot_reasons(trace) -> Tuple[float, Dict]
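In practice these methods are usually wrapped in Feedback objects rather than called directly; a sketch assuming the trulens-core Feedback API and an OPENAI_API_KEY in the environment:

```python
# Sketch only: assumes trulens-core and trulens-providers-openai
# are installed and OPENAI_API_KEY is set.
from trulens.core import Feedback
from trulens.providers.openai import OpenAI

provider = OpenAI()

# Wrap a provider method in a Feedback object; the selector maps
# the instrumented app's input and output onto the method's arguments.
f_relevance = Feedback(provider.relevance).on_input_output()
```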

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
