
Environment:Explodinggradients Ragas LLM Provider Environment

From Leeroopedia


Knowledge Sources
Domains: Infrastructure, LLM_Evaluation
Last Updated: 2026-02-10 12:00 GMT

Overview

SDK requirements for using Ragas with various LLM providers via `llm_factory()` and the legacy wrapper classes.

Description

Ragas supports multiple LLM providers through the `llm_factory()` function and the `instructor` library for structured output. Each provider requires its own SDK client to be installed separately. The core `openai` package is a mandatory dependency (always installed), but all other provider SDKs (Anthropic, Google, LiteLLM, etc.) are optional. The `llm_factory()` auto-detects the provider from the client object and patches it with the appropriate `instructor` adapter.
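As a hedged illustration of the auto-detection step (this is not the actual Ragas source), one plausible way a factory can identify the provider from a client object is to inspect the module path of the client's class. The `detect_provider` helper and its mapping below are assumptions for illustration only:

```python
# Hedged sketch of provider auto-detection, NOT the actual Ragas source.
# One plausible approach: inspect the SDK module that defines the client class.
def detect_provider(client) -> str:
    """Guess the provider from the root module of the client's class."""
    root = type(client).__module__.split(".")[0]
    known = {
        "openai": "openai",        # also covers OpenAI-compatible clients
        "anthropic": "anthropic",
        "google": "google",        # google-genai / google-generativeai
        "litellm": "litellm",
        "groq": "groq",
        "mistralai": "mistral",
        "cohere": "cohere",
    }
    try:
        return known[root]
    except KeyError:
        raise ValueError(f"Unable to detect provider for {root!r} client")
```

Once the provider is known, the factory can pick the matching `instructor` adapter; the real implementation additionally distinguishes API styles (see Common Errors below).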

Usage

This environment is required whenever you use an LLM provider with Ragas metrics, test generation, or prompt optimization. Which SDK you need depends on the provider: the `openai` SDK is installed with Ragas by default, while every other provider requires installing the corresponding SDK package.

System Requirements

Category   Requirement       Notes
Network    Outbound HTTPS    All LLM providers require API access
Python     >= 3.9            Base Ragas requirement

Dependencies

Provider SDKs (install one or more)

  • `openai` >= 1.0.0 -- Included in core install. Supports OpenAI, Azure OpenAI, Perplexity, DeepSeek, xAI, and any OpenAI-compatible endpoint.
  • `anthropic` -- For Anthropic Claude models.
  • `google-genai` -- For Google Gemini (new SDK, recommended). Known upstream instructor issue with safety settings (see Compatibility Notes).
  • `google-generativeai` -- For Google Gemini (old SDK; deprecated, upstream support ended August 2025).
  • `google-cloud-aiplatform` -- For Google Vertex AI embeddings and models.
  • `litellm` -- Universal proxy supporting 100+ providers (Ollama, vLLM, Groq, etc.).
  • `groq` -- For Groq inference API.
  • `mistralai` -- For Mistral AI models.
  • `cohere` -- For Cohere models.
  • `boto3` -- For Amazon Bedrock models.
  • `oci` >= 2.160.1 -- For Oracle Cloud Infrastructure GenAI. Install via `pip install "ragas[oci]"`.

Legacy Wrappers (deprecated)

  • `langchain`, `langchain-core`, `langchain-community`, `langchain-openai` -- For `LangchainLLMWrapper` (deprecated; use `llm_factory()` instead).
  • `llama_index` -- For `LlamaIndexLLMWrapper` (deprecated).
  • `haystack-ai` -- For `HaystackLLMWrapper`. Install via `pip install "ragas[ai-frameworks]"`.

Credentials

The following API keys are required depending on your chosen provider. Set them as environment variables or pass them directly to the client constructor:

  • `OPENAI_API_KEY`: For OpenAI and OpenAI-compatible providers.
  • `ANTHROPIC_API_KEY`: For Anthropic Claude models.
  • `GOOGLE_API_KEY`: For Google Gemini via new or old SDK.
  • `GROQ_API_KEY`: For Groq inference.
  • `MISTRAL_API_KEY`: For Mistral AI.
  • `COHERE_API_KEY`: For Cohere.
  • `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`: For Amazon Bedrock.
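A small convenience helper (not part of Ragas; the mapping simply mirrors the list above) can fail fast when the key for your chosen provider is missing from the environment:

```python
import os

# Hedged helper, not part of Ragas: map each provider to the environment
# variables it needs, per the credentials list above.
REQUIRED_KEYS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "google": ["GOOGLE_API_KEY"],
    "groq": ["GROQ_API_KEY"],
    "mistral": ["MISTRAL_API_KEY"],
    "cohere": ["COHERE_API_KEY"],
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
}

def check_credentials(provider: str) -> list[str]:
    """Return the names of required environment variables that are unset."""
    return [k for k in REQUIRED_KEYS.get(provider, []) if not os.environ.get(k)]
```

Calling `check_credentials("bedrock")` before constructing a client surfaces missing AWS variables as a clear list instead of a late authentication error.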

Quick Install

# OpenAI (included by default)
pip install ragas

# Anthropic
pip install ragas anthropic

# Google Gemini (new SDK)
pip install ragas google-genai

# LiteLLM (100+ providers)
pip install ragas litellm

# Oracle Cloud
pip install "ragas[oci]"

# Haystack
pip install "ragas[ai-frameworks]"

Code Evidence

Provider detection and instructor patching from `src/ragas/llms/base.py:481-494`:

provider_map = {
    "anthropic": Provider.ANTHROPIC,
    "google": Provider.GENAI,
    "gemini": Provider.GENAI,
    "azure": Provider.OPENAI,
    "groq": Provider.GROQ,
    "mistral": Provider.MISTRAL,
    "cohere": Provider.COHERE,
    "xai": Provider.XAI,
    "bedrock": Provider.BEDROCK,
    "deepseek": Provider.DEEPSEEK,
}
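The map above can be exercised with a small resolution function. In this sketch, string values stand in for the `instructor` `Provider` enum members (a simplification; the real enum lives in the instructor package), showing how aliases like `"gemini"` and `"google"` collapse to one adapter:

```python
# Hedged sketch: strings stand in for the instructor Provider enum members
# shown in the snippet above.
PROVIDER_MAP = {
    "anthropic": "ANTHROPIC",
    "google": "GENAI",
    "gemini": "GENAI",       # alias: both names resolve to the same adapter
    "azure": "OPENAI",       # Azure OpenAI reuses the OpenAI adapter
    "groq": "GROQ",
    "mistral": "MISTRAL",
    "cohere": "COHERE",
    "xai": "XAI",
    "bedrock": "BEDROCK",
    "deepseek": "DEEPSEEK",
}

def resolve_provider(name: str) -> str:
    """Normalize a provider name and map it to its adapter identifier."""
    key = name.strip().lower()
    if key not in PROVIDER_MAP:
        raise ValueError(f"Unsupported provider: {name!r}")
    return PROVIDER_MAP[key]
```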

Provider-specific instructor patching from `src/ragas/llms/base.py:589-603`:

if provider_lower == "openai":
    return instructor.from_openai(client, mode=mode)
elif provider_lower == "anthropic":
    return instructor.from_anthropic(client)
elif provider_lower in ("google", "gemini"):
    if _is_new_google_genai_client(client):
        return instructor.from_genai(client)
    else:
        return instructor.from_gemini(client)
elif provider_lower == "litellm":
    return instructor.from_litellm(client, mode=mode)
elif provider_lower == "perplexity":
    return instructor.from_perplexity(client)
else:
    return _patch_client_for_provider(client, provider_lower, mode=mode)
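The branching above can equivalently be written as a dispatch table. This is a refactoring sketch only: the lambdas below are stubs standing in for instructor's `from_*` constructors, and only the dispatch logic itself is illustrated.

```python
# Hedged refactoring sketch of the if/elif chain above. The lambda bodies are
# stubs (they return tuples for inspection) in place of instructor's from_*
# constructors; the dispatch structure is the point.
def _patch_generic(client, provider, mode=None):
    # Fallback path, analogous to _patch_client_for_provider in the source.
    return ("generic", provider)

PATCHERS = {
    "openai":     lambda client, mode: ("openai", mode),
    "anthropic":  lambda client, mode: ("anthropic", None),
    "google":     lambda client, mode: ("genai", None),
    "gemini":     lambda client, mode: ("genai", None),
    "litellm":    lambda client, mode: ("litellm", mode),
    "perplexity": lambda client, mode: ("perplexity", None),
}

def patch_client(client, provider: str, mode=None):
    """Dispatch to the adapter for the provider, with a generic fallback."""
    patcher = PATCHERS.get(provider.lower())
    if patcher is not None:
        return patcher(client, mode)
    return _patch_generic(client, provider.lower(), mode)
```

Note that the real code for the `google`/`gemini` branch further distinguishes the new and old Google SDKs via `_is_new_google_genai_client`; a table-based version would fold that check into the one shared adapter entry.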

Google SDK deprecation note from `src/ragas/llms/base.py:548`:

# Note: The old SDK is deprecated (support ends Aug 2025). The new SDK is recommended

Common Errors

  • `ValueError: Unable to detect API style for {provider} client` -- Cause: the client exposes neither `chat.completions.create` nor `messages.create`. Solution: ensure you are passing the correct SDK client object.
  • `ImportError: No module named 'anthropic'` -- Cause: Anthropic SDK not installed. Solution: `pip install anthropic`.
  • `HARM_CATEGORY_JAILBREAK` safety settings error -- Cause: known upstream instructor bug with the new Google genai SDK. Solution: use an OpenAI-compatible endpoint with the Gemini base URL as a workaround.
  • `ImportError: No module named 'oci'` -- Cause: Oracle Cloud SDK not installed. Solution: `pip install "ragas[oci]"`.
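The first error above comes from API-style detection. A hedged sketch (not the actual Ragas source) of that check shows why an unrecognized client raises: it exposes neither the OpenAI-style `chat.completions.create` nor the Anthropic-style `messages.create` method.

```python
# Hedged sketch of API-style detection, NOT the actual Ragas source: probe
# the two method shapes the error message refers to.
def detect_api_style(client) -> str:
    """Classify a client as OpenAI-style or Anthropic-style, else raise."""
    chat = getattr(client, "chat", None)
    completions = getattr(chat, "completions", None)
    if callable(getattr(completions, "create", None)):
        return "openai"          # client.chat.completions.create(...)
    messages = getattr(client, "messages", None)
    if callable(getattr(messages, "create", None)):
        return "anthropic"       # client.messages.create(...)
    raise ValueError(
        f"Unable to detect API style for {type(client).__name__} client"
    )
```

Passing, say, a raw HTTP session or a module object instead of the SDK client instance trips the `ValueError`, which is why the listed fix is to double-check the client object you construct.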

Compatibility Notes

  • OpenAI Mode.JSON: Ragas uses `instructor.Mode.JSON` by default instead of `Mode.TOOLS` because OpenAI function calling has issues with `Dict` type annotations in Pydantic models (returns empty `{}`). See GitHub issue #2490.
  • Google GenAI (new SDK): Has a known upstream instructor issue with safety settings (`HARM_CATEGORY_JAILBREAK`). Track at `github.com/567-labs/instructor/issues/1658`. Workaround: use OpenAI-compatible endpoint.
  • Google GenerativeAI (old SDK): Deprecated; upstream support ended August 2025. Migrate to `google-genai`.
  • Anthropic/Bedrock via LlamaIndex: The `n` and `stop` parameters are silently dropped; only `temperature` is passed through.
