Implementation:FlowiseAI Flowise CreateEvaluator
| Property | Value |
|---|---|
| Implementation Name | CreateEvaluator |
| Implements | Principle:FlowiseAI_Flowise_Evaluator_Definition |
| Source | packages/ui/src/api/evaluators.js, packages/ui/src/views/evaluators/evaluatorConstant.js |
| Repository | FlowiseAI/Flowise |
| Domain | API Client, Evaluator Management |
| Last Updated | 2026-02-12 14:00 GMT |
Code Reference
Source Location
The evaluator creation API function is defined in packages/ui/src/api/evaluators.js at line 6. Evaluator type definitions and constants are located in packages/ui/src/views/evaluators/evaluatorConstant.js at lines 2-143.
Signature
// packages/ui/src/api/evaluators.js:L6
const createEvaluator = (body) => client.post(`/evaluators`, body)
The API client is configured at packages/ui/src/api/client.js with a base URL of ${baseURL}/api/v1, making the full endpoint:
POST /api/v1/evaluators
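For illustration, the request the axios client assembles can be sketched as a plain object. This is a hypothetical helper, not part of the Flowise source; the host `http://localhost:3000` and the JSON header are assumptions:

```javascript
// Hypothetical sketch of the request that client.post(`/evaluators`, body)
// produces. The host and Content-Type header are assumptions.
function buildCreateEvaluatorRequest(baseURL, body) {
    return {
        url: `${baseURL}/api/v1/evaluators`,
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(body)
    }
}

const req = buildCreateEvaluatorRequest('http://localhost:3000', {
    name: 'Contains Greeting',
    type: 'text'
})
```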
Import
import evaluatorsApi from '@/api/evaluators'
Evaluator Constants
The evaluator type definitions are exported from packages/ui/src/views/evaluators/evaluatorConstant.js:
// packages/ui/src/views/evaluators/evaluatorConstant.js:L2-93
export const evaluators = [
{ type: 'text', name: 'ContainsAny', label: 'Contains Any', description: '...' },
{ type: 'text', name: 'ContainsAll', label: 'Contains All', description: '...' },
{ type: 'text', name: 'DoesNotContainAny', label: 'Does Not Contains Any', description: '...' },
{ type: 'text', name: 'DoesNotContainAll', label: 'Does Not Contains All', description: '...' },
{ type: 'text', name: 'StartsWith', label: 'Starts With', description: '...' },
{ type: 'text', name: 'NotStartsWith', label: 'Does Not Start With', description: '...' },
{ type: 'json', name: 'IsValidJSON', label: 'Is Valid JSON', description: '...' },
{ type: 'json', name: 'IsNotValidJSON', label: 'Is Not a Valid JSON', description: '...' },
{ type: 'numeric', name: 'totalTokens', label: 'Total Tokens', description: '...' },
{ type: 'numeric', name: 'promptTokens', label: 'Prompt Tokens', description: '...' },
{ type: 'numeric', name: 'completionTokens', label: 'Completion Tokens', description: '...' },
{ type: 'numeric', name: 'apiLatency', label: 'Total API Latency', description: '...' },
{ type: 'numeric', name: 'llm', label: 'LLM Latency', description: '...' },
{ type: 'numeric', name: 'chain', label: 'Chatflow Latency', description: '...' },
{ type: 'numeric', name: 'responseLength', label: 'Output Chars Length', description: '...' }
]
// packages/ui/src/views/evaluators/evaluatorConstant.js:L95-116
export const evaluatorTypes = [
{ label: 'Evaluate Result (Text Based)', name: 'text', description: '...' },
{ label: 'Evaluate Result (JSON)', name: 'json', description: '...' },
{ label: 'Evaluate Metrics (Numeric)', name: 'numeric', description: '...' },
{ label: 'LLM based Grading (JSON)', name: 'llm', description: '...' }
]
// packages/ui/src/views/evaluators/evaluatorConstant.js:L118-143
export const numericOperators = [
{ label: 'Equals', name: 'equals' },
{ label: 'Not Equals', name: 'notEquals' },
{ label: 'Greater Than', name: 'greaterThan' },
{ label: 'Less Than', name: 'lessThan' },
{ label: 'Greater Than or Equals', name: 'greaterThanOrEquals' },
{ label: 'Less Than or Equals', name: 'lessThanOrEquals' }
]
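Because each entry in the `evaluators` array carries a `type` field matching one of the `evaluatorTypes` names, the operator choices for a selected type can be derived by filtering. The helper below is a hypothetical sketch (the constant array is abbreviated, with descriptions omitted), not code from the Flowise UI:

```javascript
// Abbreviated copy of the exported constants; descriptions omitted.
const evaluators = [
    { type: 'text', name: 'ContainsAny', label: 'Contains Any' },
    { type: 'text', name: 'StartsWith', label: 'Starts With' },
    { type: 'json', name: 'IsValidJSON', label: 'Is Valid JSON' },
    { type: 'numeric', name: 'totalTokens', label: 'Total Tokens' }
]

// Hypothetical helper: list the operator names that apply to one evaluator type.
function operatorsForType(type) {
    return evaluators.filter((e) => e.type === type).map((e) => e.name)
}

const textOps = operatorsForType('text')
```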
I/O Contract
createEvaluator
Inputs:
- body (Object):
  - name (string, required): Display name of the evaluator
  - type (string, required): One of 'text', 'json', 'numeric', or 'llm'
  - operator (string, optional): The specific evaluator operator (e.g., 'ContainsAny', 'StartsWith' for text; 'IsValidJSON' for json)
  - value (string, optional): The comparison value (e.g., comma-separated keywords for text evaluators, threshold for numeric)
  - measure (string, optional, for numeric): The metric to measure (e.g., 'totalTokens', 'apiLatency', 'responseLength')
  - outputSchema (Array, optional, for LLM): Structured output schema for the LLM grading response
  - prompt (string, optional, for LLM): Custom prompt template for LLM-based grading
Outputs:
- Promise<{data: {id: string}}>: Resolves with the created evaluator's unique identifier
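Since the endpoint expects the required fields above, a client-side pre-flight check can surface a malformed body before the request is sent. `validateEvaluatorBody` is a hypothetical helper based on the contract above, not part of the Flowise API:

```javascript
// Hypothetical pre-flight validation for the createEvaluator body.
// Per the contract above, only 'name' and 'type' are required.
const VALID_TYPES = ['text', 'json', 'numeric', 'llm']

function validateEvaluatorBody(body) {
    const errors = []
    if (!body.name) errors.push('name is required')
    if (!VALID_TYPES.includes(body.type)) {
        errors.push(`type must be one of: ${VALID_TYPES.join(', ')}`)
    }
    // Numeric evaluators compare against a metric, so a measure is assumed to be needed.
    if (body.type === 'numeric' && !body.measure) {
        errors.push('numeric evaluators need a measure')
    }
    return errors
}
```

An empty array means the body passes; otherwise each entry names a missing or invalid field.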
Usage Examples
Creating a Text Evaluator
import evaluatorsApi from '@/api/evaluators'
// Create a text evaluator that checks for keyword presence
const response = await evaluatorsApi.createEvaluator({
name: 'Contains Greeting',
type: 'text',
operator: 'ContainsAny',
value: 'hello,hi,welcome'
})
const evaluatorId = response.data.id
Creating a Numeric Evaluator
import evaluatorsApi from '@/api/evaluators'
// Create a numeric evaluator for latency threshold
await evaluatorsApi.createEvaluator({
name: 'Latency Under 2s',
type: 'numeric',
measure: 'apiLatency',
operator: 'lessThan',
value: '2000'
})
Creating an LLM Evaluator
import evaluatorsApi from '@/api/evaluators'
// Create an LLM-based evaluator for response relevance
await evaluatorsApi.createEvaluator({
name: 'Relevance Grader',
type: 'llm',
prompt: 'Grade the following response for relevance to the question. Input: {{input}} Expected: {{expectedOutput}} Actual: {{actualOutput}}',
outputSchema: [
{ key: 'score', type: 'number', description: 'Relevance score 0-10' },
{ key: 'reasoning', type: 'string', description: 'Explanation of the grade' },
{ key: 'pass', type: 'boolean', description: 'Whether the response passes' }
]
})