Environment: Wandb Weave LLM Integration Dependencies
| Knowledge Sources | Details |
|---|---|
| Domains | LLMs, Integrations, Infrastructure |
| Last Updated | 2026-02-14 12:00 GMT |
Overview
Optional dependency environment for LLM provider integrations (OpenAI, Anthropic, Cohere, etc.) and LLM-powered scorers requiring litellm.
Description
This environment extends the base Python SDK Runtime with optional dependencies required for specific LLM provider integrations. Weave supports automatic patching of LLM client libraries (OpenAI, Anthropic, Cohere, Google GenAI, Groq, Cerebras, etc.) to capture traces. Each integration requires its respective provider SDK to be installed. Additionally, LLM-powered scorers (e.g., `LLMScorer`) require litellm for unified LLM API access. Some integrations have specific Python version constraints.
Usage
Use this environment when you need to trace LLM API calls via Weave integrations or when using LLM-powered scorers for evaluation. Install only the specific provider packages you need. This is required alongside the base Python SDK Runtime.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| Python | CPython >= 3.10 | Some integrations require 3.11+ (see Compatibility Notes) |
| GPU | NVIDIA GPU (optional) | Only for local model-based scorers; CUDA detection warns if GPU available but CPU selected |
| Network | Internet access to LLM APIs | OpenAI, Anthropic, Cohere, Google, etc. |
Dependencies
LLM Provider Packages
- `openai` >= 1.0.0 — OpenAI integration
- `anthropic` >= 0.18.0 — Anthropic integration
- `cohere` >= 5.13.5 — Cohere integration (Python < 3.13)
- `google-genai` >= 1.0.0, <= 1.23.0 — Google GenAI integration
- `cerebras-cloud-sdk` — Cerebras integration
- `groq` — Groq integration
- `mistralai` >= 1.0.0 — Mistral integration
- `litellm` — Required for `LLMScorer` and unified LLM access
Framework Integrations
- `langchain-core` >= 0.3.29 — LangChain integration
- `dspy` >= 3.0.0 — DSPy integration
- `crewai` >= 0.100.1, <= 0.108.0 — CrewAI integration (CPython 3.10-3.13)
- `instructor` >= 1.4.3 — Instructor integration
Credentials
The following API keys are needed for their respective provider integrations:
- `OPENAI_API_KEY`: OpenAI API key
- `ANTHROPIC_API_KEY`: Anthropic API key
- `GOOGLE_API_KEY` or `GEMINI_API_KEY`: Google GenAI API key
- `MISTRAL_API_KEY`: Mistral API key
- `NVIDIA_API_KEY`: NVIDIA API key
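A quick preflight check can surface missing keys before any provider call is made. This is a minimal sketch based on the variable names listed above; the `missing_credentials` helper and provider labels are assumptions for illustration, not part of Weave's API.

```python
import os

# Accepted env vars per provider, taken from the list above; Google accepts either name.
PROVIDER_KEYS = {
    "openai": ("OPENAI_API_KEY",),
    "anthropic": ("ANTHROPIC_API_KEY",),
    "google_genai": ("GOOGLE_API_KEY", "GEMINI_API_KEY"),
    "mistral": ("MISTRAL_API_KEY",),
    "nvidia": ("NVIDIA_API_KEY",),
}

def missing_credentials(providers: list) -> list:
    """Return the providers for which none of the accepted env vars is set."""
    return [
        p for p in providers
        if not any(os.environ.get(var) for var in PROVIDER_KEYS[p])
    ]

os.environ["OPENAI_API_KEY"] = "sk-test"   # simulate a configured key
os.environ.pop("MISTRAL_API_KEY", None)    # ensure the other one is unset
print(missing_credentials(["openai", "mistral"]))  # → ['mistral']
```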
Quick Install
```shell
# Install Weave with specific integrations
pip install "weave[openai]"
pip install "weave[anthropic]"
pip install "weave[cohere]"
pip install "weave[google_genai]"

# Install litellm for LLM-powered scorers
pip install litellm
```
Code Evidence
LLM scorer litellm requirement from `weave/scorers/scorer_types.py:40-49`:
```python
def model_post_init(self, __context: Any) -> None:
    try:
        from litellm import acompletion, aembedding, amoderation
    except ImportError:
        raise ImportError(
            "litellm is required to use the LLM-powered scorers, "
            "please install it with `pip install litellm`"
        ) from None
```
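The snippet above is an instance of the general optional-dependency guard pattern: attempt the import and fail with an actionable install hint. A generic version might look like this; `require` is a hypothetical helper, not a Weave function.

```python
import importlib

def require(package: str, feature: str):
    """Import `package`, or raise ImportError with an install hint (same pattern as above)."""
    try:
        return importlib.import_module(package)
    except ImportError:
        raise ImportError(
            f"{package} is required to use {feature}, "
            f"please install it with `pip install {package}`"
        ) from None

json_mod = require("json", "the demo")  # stdlib module, so the import succeeds
```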
CUDA detection for scorers from `weave/scorers/scorer_types.py:60-67`:
```python
def check_cuda(device: str) -> None:
    import torch

    if torch.cuda.is_available() and device == "cpu":
        warnings.warn(
            "You have a GPU available, you can pass `device='cuda'` "
            "to the scorer init, this will speed up model loading and inference",
            stacklevel=2,
        )
```
Sentinel value handling for provider SDKs from `weave/trace/op.py:100-109`:
```python
_sentinels_to_check = [
    Sentinel(package="openai", path="openai._types", name="NOT_GIVEN"),
    Sentinel(package="openai", path="openai._types", name="omit"),
    Sentinel(package="openai", path="openai._types", name="Omit"),
    Sentinel(package="cohere", path="cohere.base_client", name="COHERE_NOT_GIVEN"),
    Sentinel(package="anthropic", path="anthropic._types", name="NOT_GIVEN"),
    Sentinel(package="cerebras", path="cerebras.cloud.sdk._types", name="NOT_GIVEN"),
]
```
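These sentinels mark "argument not supplied" in provider SDKs, and filtering them keeps traces free of placeholder values. The sketch below shows the idea with a hypothetical sentinel and `strip_sentinels` helper; the names are illustrative, not Weave's.

```python
class _NotGiven:
    """Stand-in for provider-SDK sentinels such as openai's NOT_GIVEN."""
    def __repr__(self) -> str:
        return "NOT_GIVEN"

NOT_GIVEN = _NotGiven()
# Weave resolves the real sentinels lazily, only for SDKs that are installed.
SENTINELS = (NOT_GIVEN,)

def strip_sentinels(inputs: dict) -> dict:
    """Drop arguments whose value is a known sentinel so traces record only real inputs."""
    return {k: v for k, v in inputs.items() if not any(v is s for s in SENTINELS)}

print(strip_sentinels({"model": "gpt-4o", "temperature": NOT_GIVEN}))  # → {'model': 'gpt-4o'}
```

Identity comparison (`is`) matters here: sentinels are singletons, and comparing by equality could misfire on user values.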
Auto-patching configuration from `weave/trace/settings.py:74-80`:
```python
implicitly_patch_integrations: bool = True
"""Toggles implicit patching of integrations.

If True, supported libraries (OpenAI, Anthropic, etc.) are automatically patched
when imported, regardless of import order. If False, you must explicitly call
patch functions like `weave.integrations.patch_openai()` to enable tracing."""
```
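The setting defaults to `True` and can be overridden via the `WEAVE_IMPLICITLY_PATCH_INTEGRATIONS` environment variable. A sketch of how such an env-var toggle is typically resolved follows; the exact set of accepted truthy strings is an assumption, not Weave's documented parsing.

```python
import os

def implicitly_patch_integrations() -> bool:
    """Resolve the toggle from the environment; unset defaults to True, matching the setting above."""
    raw = os.environ.get("WEAVE_IMPLICITLY_PATCH_INTEGRATIONS", "true")
    return raw.strip().lower() in ("1", "true", "yes", "on")

os.environ["WEAVE_IMPLICITLY_PATCH_INTEGRATIONS"] = "false"
print(implicitly_patch_integrations())  # → False
```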
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `ImportError: litellm is required to use the LLM-powered scorers` | litellm not installed | `pip install litellm` |
| `InstructorLLMScorer is deprecated` | Using deprecated scorer class | Use `LLMScorer` instead (has built-in structured output support) |
| `You have a GPU available, you can pass device='cuda'` | Scorer using CPU when GPU available | Pass `device='cuda'` to scorer constructor |
| Integration not tracing calls | Auto-patching disabled or library not supported | Set `WEAVE_IMPLICITLY_PATCH_INTEGRATIONS=true` or call `weave.integrations.patch_openai()` etc. |
Compatibility Notes
- Python 3.10: NotDiamond integration not supported.
- Python 3.11+: the Verifiers integration requires Python 3.11 or newer.
- Python 3.13: Cohere, NotDiamond, Presidio, VertexAI, Verdict, and CrewAI integrations are not compatible.
- PyPy: CrewAI, LangChain Google VertexAI, NotDiamond, VertexAI, and Verdict integrations exclude PyPy.
- CrewAI: Pinned to 0.100.1-0.108.0 range pending upstream bugfix.
- Auto-patching: Weave automatically patches supported libraries via import hooks when `WEAVE_IMPLICITLY_PATCH_INTEGRATIONS=true` (default). Set to `false` for explicit control.
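When pinning environments programmatically, the version constraints above can be encoded as simple minor-version bounds. This is a minimal sketch under stated assumptions: the `CONSTRAINTS` table covers only two integrations from the notes, and `supported` is a hypothetical helper, not part of Weave.

```python
import sys

# (min_minor, max_minor) bounds on CPython 3.x, approximated from the notes above.
CONSTRAINTS = {
    "cohere": (10, 12),       # requires Python < 3.13
    "verifiers": (11, None),  # requires Python >= 3.11
}

def supported(integration: str, minor: int = sys.version_info.minor) -> bool:
    """Check whether an integration's Python constraint admits a given 3.x minor version."""
    lo, hi = CONSTRAINTS[integration]
    return minor >= lo and (hi is None or minor <= hi)

print(supported("cohere", minor=13))  # → False
```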