Heuristic: Explodinggradients Ragas Deprecation Migration Guide
| Knowledge Sources | |
|---|---|
| Domains | LLM_Evaluation, Debugging |
| Last Updated | 2026-02-10 12:00 GMT |
Overview
Migration guide for four major Ragas deprecations: evaluate() to the @experiment decorator, ragas.metrics to ragas.metrics.collections, LangChain/LlamaIndex wrappers to llm_factory(), and the legacy google.generativeai SDK to the new google-genai SDK.
Description
Ragas has undergone significant API changes, with four major deprecations active simultaneously. All deprecated code still works but emits `DeprecationWarning` and will be removed in a future version. The deprecated `evaluate()`/`aevaluate()` functions are the most impactful, since they are the primary evaluation entry point. The metrics migration affects 40+ named metrics, and the LLM/embedding wrapper migration changes how users configure their LLM providers.
Usage
Apply this heuristic when upgrading an existing Ragas integration or starting a new project (use the modern APIs from the start). It is also relevant when investigating `DeprecationWarning` messages in logs. Following these migrations future-proofs your code against breaking changes in the next major version.
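To locate deprecated call sites before migrating, you can record or escalate `DeprecationWarning`s at runtime. A minimal stdlib sketch (the helper name `find_deprecated_calls` is illustrative, not part of Ragas):

```python
import warnings

def find_deprecated_calls(fn, *args, **kwargs):
    """Run fn and collect any DeprecationWarning messages it emits."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", DeprecationWarning)
        fn(*args, **kwargs)
    return [str(w.message) for w in caught
            if issubclass(w.category, DeprecationWarning)]

def legacy_call():
    # Stand-in for a deprecated Ragas entry point such as evaluate().
    warnings.warn("evaluate() is deprecated", DeprecationWarning, stacklevel=2)

messages = find_deprecated_calls(legacy_call)
print(messages)  # ['evaluate() is deprecated']
```

Running the whole test suite with `python -W error::DeprecationWarning` achieves the same goal by turning every warning into a hard failure.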
The Insight (Rule of Thumb)
- Deprecation 1 -- Evaluation API:
- Old: `from ragas import evaluate; result = evaluate(dataset, metrics=[...])`
- New: `from ragas import experiment`, then decorate an evaluation function: `@experiment` above `def eval_fn(row): ...`
- Trade-off: The `@experiment` decorator requires writing a function per evaluation scenario, but provides versioning, comparison, and backend persistence.
- Deprecation 2 -- Metrics Import Path:
- Old: `from ragas.metrics import Faithfulness, AnswerCorrectness`
- New: `from ragas.metrics.collections import Faithfulness, AnswerCorrectness`
- Trade-off: 40+ metric names affected. The old imports still work via `__getattr__` lazy loading but emit warnings.
- Deprecation 3 -- LLM/Embedding Wrappers:
- Old: `from ragas.llms import LangchainLLMWrapper; llm = LangchainLLMWrapper(ChatOpenAI())`
- New: `from ragas.llms import llm_factory; llm = llm_factory("gpt-4o-mini", client=OpenAI())`
- Trade-off: `llm_factory()` is simpler and supports more providers natively, but LangChain-configured models must be re-created from the provider's native client.
- Deprecation 4 -- Google GenAI SDK:
- Old: `import google.generativeai as genai` (old SDK)
- New: `from google import genai` (new SDK)
- Timeline: support for the old SDK ended in August 2025.
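The shape of the new evaluation API can be illustrated with a toy stand-in for the decorator. This is a sketch of the pattern only, not the real Ragas implementation (the real `@experiment` additionally handles versioning, persistence, and backend configuration):

```python
import asyncio

def experiment():
    """Toy stand-in for ragas's @experiment: wraps a per-row eval function
    in a runner that maps it over a dataset and collects the results."""
    def wrap(fn):
        class Runner:
            async def arun(self, dataset):
                return [await fn(row) for row in dataset]
        return Runner()
    return wrap

@experiment()
async def exact_match_eval(row):
    # One evaluation scenario per decorated function.
    return {"question": row["question"],
            "correct": row["response"] == row["expected"]}

dataset = [
    {"question": "2+2?", "response": "4", "expected": "4"},
    {"question": "Capital of France?", "response": "Lyon", "expected": "Paris"},
]
results = asyncio.run(exact_match_eval.arun(dataset))
print([r["correct"] for r in results])  # [True, False]
```

Writing one decorated function per scenario is the extra work the trade-off above refers to; in exchange, each run becomes a named, comparable experiment rather than an anonymous `evaluate()` call.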
Reasoning
The `@experiment` decorator pattern is more composable and supports features that `evaluate()` cannot: automatic experiment versioning via git, result persistence to backends (CSV, JSONL, Google Drive), and side-by-side experiment comparison. The metrics migration consolidates the metric collection into a more organized package structure. The LLM wrapper migration removes the LangChain dependency from the critical path and allows direct use of provider SDKs.
Code Evidence
evaluate() deprecation from `src/ragas/evaluation.py:105-110`:
warnings.warn(
    "aevaluate() is deprecated and will be removed in a future version. "
    "Use the @experiment decorator instead. "
    "See https://docs.ragas.io/en/latest/concepts/experiment/ for more information.",
    DeprecationWarning,
    stacklevel=2,
)
Metrics deprecation from `src/ragas/metrics/__init__.py:125-198`:
_DEPRECATED_METRICS = {
    "AnswerCorrectness": _AnswerCorrectness,
    "Faithfulness": _Faithfulness,
    # ... 40+ metric names
}

def __getattr__(name: str):
    if name in _DEPRECATED_METRICS:
        warnings.warn(
            _DEPRECATION_MESSAGE.format(name=name),
            DeprecationWarning,
            stacklevel=2,
        )
        return _DEPRECATED_METRICS[name]
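The hook above is PEP 562 module-level `__getattr__`: the old name is resolved lazily, the warning fires on access, and the real class is still returned, which is why deprecated imports keep working. A runnable stand-alone illustration, simulated with `types.ModuleType` instead of a real package:

```python
import sys
import types
import warnings

class Faithfulness:
    """Stand-in for the relocated metric class."""

_DEPRECATED_METRICS = {"Faithfulness": Faithfulness}

def _module_getattr(name):
    # PEP 562: called only when normal module attribute lookup fails.
    if name in _DEPRECATED_METRICS:
        warnings.warn(f"{name} has moved to ragas.metrics.collections",
                      DeprecationWarning, stacklevel=2)
        return _DEPRECATED_METRICS[name]
    raise AttributeError(name)

legacy = types.ModuleType("legacy_metrics")
legacy.__getattr__ = _module_getattr
sys.modules["legacy_metrics"] = legacy

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cls = legacy.Faithfulness  # triggers the deprecation shim

print(cls.__name__, len(caught))  # Faithfulness 1
```

Because the lookup is lazy, code that never touches a deprecated name pays no cost and sees no warning.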
LLM wrapper deprecation from `src/ragas/llms/base.py:158-164`:
warnings.warn(
    "LangchainLLMWrapper is deprecated. Use llm_factory() instead. "
    "Example: from ragas.llms import llm_factory; "
    "llm = llm_factory('gpt-4o-mini', client=OpenAI(api_key='...'))",
    DeprecationWarning,
    stacklevel=2,
)
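Note that the warnings above pass `stacklevel=2` (and the prompt-usage warning below uses `stacklevel=3`), which attributes the warning to the user's call site rather than Ragas internals, so logs point at the exact line that needs migrating. A minimal stdlib demonstration of the mechanism:

```python
import warnings

def deprecated_api():
    # stacklevel=2 reports the *caller's* line, not this one.
    warnings.warn("deprecated_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)

def user_code():
    deprecated_api()  # the warning is attributed to this line

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    user_code()

w = caught[0]
print(w.category.__name__)  # DeprecationWarning
```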
LangChain prompt usage deprecation from `src/ragas/prompt/pydantic_prompt.py:62-70`:
warnings.warn(
    "Direct usage of LangChain LLMs with Ragas prompts is deprecated and will be removed in a future version. "
    "Use Ragas LLM interfaces instead: "
    "from openai import OpenAI; from ragas.llms import llm_factory; "
    "client = OpenAI(api_key='...'); llm = llm_factory('gpt-4o-mini', client=client)",
    DeprecationWarning,
    stacklevel=3,
)