Implementation: Promptfoo runAssertions
| Knowledge Sources | |
|---|---|
| Domains | Evaluation, Quality_Assurance |
| Last Updated | 2026-02-14 08:00 GMT |
Overview
A concrete Promptfoo function that runs all of a test case's assertions against a provider response and produces a single aggregated grading result.
Description
The runAssertions function takes a provider response and the test case's assertion list, runs each assertion through the appropriate matcher, and aggregates results into a single GradingResult. It handles threshold-based scoring, named metrics, and concurrent assertion execution.
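To make the aggregation behavior concrete, here is an illustrative sketch (not the actual promptfoo source) of how per-assertion component results might be combined into one pass/fail verdict with a weighted score and named metrics. The `ComponentResult` shape, the `aggregate` helper, and the default weight of 1 are assumptions for this example.

```typescript
// Illustrative sketch of assertion aggregation; types and logic are
// simplified assumptions, not promptfoo's actual implementation.
interface ComponentResult {
  pass: boolean;
  score: number;      // 0..1
  reason: string;
  weight?: number;    // assumed to default to 1
  metric?: string;    // optional named metric
}

function aggregate(
  components: ComponentResult[],
  threshold?: number,
): { pass: boolean; score: number; reason: string; namedScores: Record<string, number> } {
  let totalScore = 0;
  let totalWeight = 0;
  const namedScores: Record<string, number> = {};
  for (const c of components) {
    const w = c.weight ?? 1;
    totalScore += c.score * w;
    totalWeight += w;
    if (c.metric) namedScores[c.metric] = c.score;
  }
  const score = totalWeight > 0 ? totalScore / totalWeight : 0;
  // With a threshold, the weighted score decides; otherwise every
  // component must pass individually.
  const pass =
    threshold !== undefined ? score >= threshold : components.every((c) => c.pass);
  const reason = pass
    ? 'All assertions passed'
    : components.find((c) => !c.pass)?.reason ?? 'Score below threshold';
  return { pass, score, reason, namedScores };
}

// Example: a failing rubric drags the weighted score to 2/3,
// which still clears a 0.5 threshold.
const result = aggregate(
  [
    { pass: true, score: 1, reason: 'contains "Paris"', weight: 2 },
    { pass: false, score: 0, reason: 'rubric failed', metric: 'accuracy' },
  ],
  0.5,
);
```

The key design point this illustrates: without a threshold, a single failing assertion fails the whole test; with one, only the weighted average matters.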
Usage
Import this function to grade a single provider response against a set of assertions. It is called internally by runEval for each test case execution.
Code Reference
Source Location
- Repository: promptfoo
- File: src/assertions/index.ts
- Lines: L514-600
Signature
```typescript
export async function runAssertions({
  assertScoringFunction,
  latencyMs,
  prompt,
  provider,
  providerResponse,
  test,
  vars,
  traceId,
}: {
  assertScoringFunction?: ScoringFunction;
  latencyMs?: number;
  prompt?: string;
  provider?: ApiProvider;
  providerResponse: ProviderResponse;
  test: AtomicTestCase;
  vars?: Record<string, VarValue>;
  traceId?: string;
}): Promise<GradingResult>
```
Import
```typescript
import { runAssertions } from './assertions';
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| providerResponse | ProviderResponse | Yes | The LLM output to grade |
| test | AtomicTestCase | Yes | Test case containing the assertions array |
| prompt | string | No | The rendered prompt sent to the provider |
| provider | ApiProvider | No | The provider used (for model-graded assertions) |
| vars | Record<string, VarValue> | No | Template variables for assertion evaluation |
| latencyMs | number | No | Response latency for latency-based assertions |
| traceId | string | No | OpenTelemetry trace ID for correlation |
Outputs
| Name | Type | Description |
|---|---|---|
| (return) | GradingResult | Aggregated result with pass/fail, score, reason, and component results |
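The returned `GradingResult` can be sketched as follows. This is a simplified approximation for orientation, not the authoritative type; the `GradingResultSketch` name and the exact field set are assumptions, so consult promptfoo's type definitions for the real shape.

```typescript
// Approximate, simplified shape of an aggregated grading result;
// field names mirror the fields described above but the full type
// in promptfoo has more members.
interface GradingResultSketch {
  pass: boolean;                             // overall verdict
  score: number;                             // aggregated 0..1 score
  reason: string;                            // human-readable explanation
  namedScores?: Record<string, number>;      // per-metric scores
  componentResults?: GradingResultSketch[];  // one entry per assertion
}

const example: GradingResultSketch = {
  pass: true,
  score: 1.0,
  reason: 'All assertions passed',
  namedScores: { accuracy: 1.0 },
  componentResults: [
    { pass: true, score: 1.0, reason: 'Output contains "Paris"' },
  ],
};
```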
Usage Examples
Grade a Response
```typescript
import { runAssertions } from './assertions';

const result = await runAssertions({
  providerResponse: { output: 'The capital of France is Paris.' },
  test: {
    assert: [
      { type: 'contains', value: 'Paris' },
      { type: 'llm-rubric', value: 'Answer is factually correct' },
    ],
  },
  vars: { question: 'What is the capital of France?' },
});

console.log(result.pass); // true
console.log(result.score); // 1.0
```
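For threshold-based scoring and named metrics, a test case can weight individual assertions and set an overall threshold. The sketch below is a plain configuration object illustrating those fields as the description above uses them; treat the exact semantics as an assumption and verify against the promptfoo documentation.

```typescript
// Hedged sketch of a test case using weights, a named metric, and a
// pass/fail threshold on the aggregated score (illustrative values).
const testCase = {
  // Pass if the weighted average score is at least 0.5.
  threshold: 0.5,
  assert: [
    // Counted twice in the weighted average.
    { type: 'contains', value: 'Paris', weight: 2 },
    // Model-graded check, reported under a named metric.
    { type: 'llm-rubric', value: 'Answer is factually correct', metric: 'accuracy' },
  ],
};
```

Passing such a test case to `runAssertions` would make the aggregated score, rather than every individual assertion, determine the overall pass/fail result.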
Related Pages
Implements Principle
Requires Environment
Uses Heuristic