
Implementation:Wandb Weave OpenAI Patcher

From Leeroopedia
Knowledge Sources
Domains Observability, LLM_Operations
Last Updated 2026-02-14 00:00 GMT

Overview

Wrapper documentation for the OpenAI SDK integration patcher provided by the Wandb Weave library.

Description

get_openai_patcher() returns a MultiPatcher containing 14 SymbolPatcher instances that wrap OpenAI SDK methods (sync and async variants) for chat completions, parse, moderations, embeddings, and responses. The patcher uses create_wrapper_sync and create_wrapper_async to build weave.op-decorated wrappers, and openai_accumulator to merge streaming ChatCompletionChunk objects.
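The accumulator's job can be illustrated with a minimal, hypothetical sketch (simplified stand-in types, not Weave's actual openai_accumulator): each streamed chunk carries a small delta, and the accumulator folds those deltas into one complete response.

```python
# Illustrative sketch of delta accumulation (hypothetical dict-based
# chunks, not Weave's real ChatCompletionChunk handling): fold streamed
# content deltas into a single completed message.

def accumulate(chunks: list[dict]) -> dict:
    """Merge a stream of chunk dicts into one completed response."""
    content_parts: list[str] = []
    finish_reason = None
    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice.get("delta", {})
        if delta.get("content"):
            content_parts.append(delta["content"])
        if choice.get("finish_reason"):
            finish_reason = choice["finish_reason"]
    return {
        "choices": [{
            "message": {"role": "assistant", "content": "".join(content_parts)},
            "finish_reason": finish_reason,
        }]
    }

# Example: three chunks that spell out a short reply
chunks = [
    {"choices": [{"delta": {"role": "assistant", "content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
merged = accumulate(chunks)
```

The real accumulator also merges tool-call deltas and usage information, but the folding pattern is the same.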

This is a Wrapper Doc: it documents how Weave uses the external OpenAI SDK, not an API defined by Weave itself.

Usage

This patcher is applied automatically by patch_openai() or implicit_patch(); no direct call by the user is required.
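The underlying mechanism is symbol patching: a method on the SDK is swapped for a wrapped version that records the call, and the original can be restored later. A minimal, hypothetical sketch of the idea (illustrative names only, not Weave's SymbolPatcher API, which resolves dotted import paths and builds weave.op-decorated wrappers):

```python
# Minimal sketch of the symbol-patching idea (hypothetical names; the
# real patcher wraps OpenAI SDK methods with weave.op-decorated
# functions rather than this simple closure).

class Client:
    def create(self, prompt: str) -> str:
        return f"response to {prompt!r}"

calls: list[str] = []

def patch_create(cls: type):
    """Swap cls.create for a tracing wrapper; return an undo function."""
    original = cls.create

    def wrapper(self, prompt: str) -> str:
        calls.append(prompt)           # record inputs (stand-in for tracing)
        return original(self, prompt)  # delegate to the original method

    cls.create = wrapper

    def undo() -> None:
        cls.create = original
    return undo

undo = patch_create(Client)
result = Client().create("hi")  # goes through the wrapper
undo()                          # original method restored
```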

Code Reference

Source Location

  • Repository: wandb/weave
  • File: weave/integrations/openai/openai_sdk.py
  • Lines: L732-896 (get_openai_patcher)
  • Lines: L389-476 (create_wrapper_sync/async)
  • Lines: L158-315 (openai_accumulator)

Signature

def get_openai_patcher(
    settings: IntegrationSettings | None = None,
) -> MultiPatcher | NoOpPatcher:
    """Build a MultiPatcher for all OpenAI SDK methods.

    Patches 14 methods (sync + async) including:
    - chat.completions.create
    - chat.completions.parse
    - moderations.create
    - embeddings.create
    - responses.create
    - responses.parse
    """

Import

from weave.integrations.openai.openai_sdk import get_openai_patcher

I/O Contract

Inputs

Name Type Required Description
settings IntegrationSettings | None No Configuration for the OpenAI integration; defaults to None

Outputs

Name Type Description
return MultiPatcher | NoOpPatcher MultiPatcher containing 14 SymbolPatcher instances, or a NoOpPatcher when the integration is disabled

Usage Examples

Transparent Tracing

import openai
import weave

weave.init("my-team/my-project")

# Normal OpenAI call - automatically traced
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
# The call, inputs, outputs, and token usage are all captured in Weave

Streaming

import openai
import weave

weave.init("my-team/my-project")

client = openai.OpenAI()
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)
# Streaming chunks are accumulated into a complete response
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
