Implementation: Wandb Weave Model
| Knowledge Sources | |
|---|---|
| Domains | Model_Architecture, Evaluation |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
A concrete tool, provided by the Wandb Weave library, for defining versioned, evaluable models.
Description
The Model base class (which inherits from Object) provides a standard interface for evaluable models. Subclasses declare configuration as Pydantic fields and implement an inference method (predict, infer, forward, or invoke) decorated with @weave.op. The get_infer_method() helper discovers whichever of these methods is defined at runtime.
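The discovery logic can be illustrated with a plain-Python sketch (a hypothetical simplification for illustration, not Weave's actual code; the real implementation lives in weave/flow/model.py):

```python
from typing import Callable

# Candidate inference-method names, checked in priority order
# (mirrors the documented search order: predict, infer, forward, invoke).
INFER_METHOD_NAMES = ("predict", "infer", "forward", "invoke")

def get_infer_method(model: object) -> Callable:
    """Return the first callable inference method found on the instance."""
    for name in INFER_METHOD_NAMES:
        method = getattr(model, name, None)
        if callable(method):
            return method
    raise AttributeError(
        f"Model must define one of: {', '.join(INFER_METHOD_NAMES)}"
    )

class Echo:
    def infer(self, x: str) -> str:
        return x

print(get_infer_method(Echo())("hello"))  # → hello
```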
apply_model_async() executes the model asynchronously with error handling, returning an ApplyModelSuccess (carrying the model output, the call reference, and the measured latency) on success, or an ApplyModelError on failure.
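A sketch of the success/error result shapes and how a caller might branch on them. The field names follow the description above; the dataclasses and the field types are illustrative assumptions, not Weave's actual definitions:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ApplyModelSuccess:
    model_output: Any      # value returned by the inference method
    model_call: str        # reference to the logged call (simplified to a str here)
    model_latency: float   # seconds spent in the inference method

@dataclass
class ApplyModelError:
    error: Exception       # assumed field: the exception that aborted the call

def handle(result) -> Optional[Any]:
    # Callers branch on the result type rather than catching exceptions.
    if isinstance(result, ApplyModelSuccess):
        return result.model_output
    return None

result = handle(ApplyModelSuccess({"answer": "Paris"}, "call-1", 0.12))
print(result)  # → {'answer': 'Paris'}
```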
Usage
Subclass Model and implement a predict() method to create an evaluable model. The model can then be passed to Evaluation.evaluate().
Code Reference
Source Location
- Repository: wandb/weave
- File: weave/flow/model.py
- Lines: L25-177
Signature
```python
class Model(Object):
    """Captures a combination of code and data that operates on an input.

    When you change attributes or code, these changes will be logged
    and the version will be updated for comparison across different versions.
    """

    def get_infer_method(self) -> Callable:
        """Get inference method from Model instance.

        Searches for: predict, infer, forward, invoke (in order).
        """
```
Import
```python
import weave
# or
from weave import Model
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| (user-defined fields) | Any | Varies | Model configuration as Pydantic fields |
| predict/infer/forward/invoke | method | Yes | Inference method decorated with @weave.op |
Outputs
| Name | Type | Description |
|---|---|---|
| predict() return | Any | Model prediction (type defined by user) |
| apply_model_async() return | ApplyModelSuccess or ApplyModelError | On success, contains model_output, model_call, and model_latency; on failure, an ApplyModelError |
Usage Examples
Basic Model
```python
import weave

class SentimentModel(weave.Model):
    model_name: str
    temperature: float = 0.7

    @weave.op
    async def predict(self, text: str) -> dict:
        # Your inference logic here
        return {"sentiment": "positive", "confidence": 0.95}

model = SentimentModel(model_name="gpt-4")
```
Using with Evaluation
```python
import weave
import asyncio

weave.init("my-team/my-project")

class QAModel(weave.Model):
    system_prompt: str

    @weave.op
    async def predict(self, question: str) -> dict:
        return {"answer": "Paris"}

model = QAModel(system_prompt="Answer concisely.")

evaluation = weave.Evaluation(
    dataset=[{"question": "Capital of France?", "expected": "Paris"}],
    scorers=[],
)

asyncio.run(evaluation.evaluate(model))
```