Implementation:FlowiseAI Flowise GetEvaluation
| Property | Value |
|---|---|
| Implementation Name | GetEvaluation |
| Implements | Principle:FlowiseAI_Flowise_Evaluation_Results_Analysis |
| Source | packages/ui/src/api/evaluations.js, packages/ui/src/views/evaluations/EvaluationResult.jsx |
| Repository | FlowiseAI/Flowise |
| Domain | API Client, Results Visualization |
| Last Updated | 2026-02-12 14:00 GMT |
Code Reference
Source Location
The evaluation retrieval API function is defined in packages/ui/src/api/evaluations.js at line 6. The results are rendered by the EvaluationResult.jsx component located at packages/ui/src/views/evaluations/EvaluationResult.jsx (approximately 1050 lines).
Signature
// packages/ui/src/api/evaluations.js:L6
const getEvaluation = (id) => client.get(`/evaluations/${id}`)
The API client is configured at packages/ui/src/api/client.js with a base URL of ${baseURL}/api/v1, making the full endpoint:
GET /api/v1/evaluations/{id}
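The URL expansion can be seen in isolation. A minimal sketch of the template-literal logic (the `buildEvaluationUrl` helper is hypothetical, shown only to make the path construction concrete; the real base URL comes from the client configuration in packages/ui/src/api/client.js):

```javascript
// Hypothetical helper mirroring the template literal inside getEvaluation.
// `baseURL` is an illustrative parameter, not the actual config value.
const buildEvaluationUrl = (baseURL, id) =>
    `${baseURL}/api/v1/evaluations/${encodeURIComponent(id)}`
```

Encoding the id guards against ids containing reserved URL characters; the source snippet above interpolates the id directly.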
Import
import evaluationApi from '@/api/evaluations'
UI Components
The results page imports and uses several visualization components:
// packages/ui/src/views/evaluations/EvaluationResult.jsx
import MetricsItemCard from '@/views/evaluations/MetricsItemCard'
import { ChartLatency } from '@/views/evaluations/ChartLatency'
import { ChartPassPrnt } from '@/views/evaluations/ChartPassPrnt'
import { ChartTokens } from '@/views/evaluations/ChartTokens'
import EvaluationResultSideDrawer from '@/views/evaluations/EvaluationResultSideDrawer'
import EvaluationResultVersionsSideDrawer from '@/views/evaluations/EvaluationResultVersionsSideDrawer'
import EvalsResultDialog from '@/views/evaluations/EvalsResultDialog'
Charts are rendered using the recharts library for interactive data visualization.
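Before components such as ChartLatency or ChartTokens can render, per-row results are typically mapped into the flat array shape recharts consumes. A hedged sketch of that transform (field names follow the documented row shape; the output keys `name`, `latency`, and `totalTokens` are illustrative assumptions, not the exact props the chart components use):

```javascript
// Hypothetical transform from evaluation rows to a recharts-friendly array.
// Input field names (latency, tokenUsage) follow the documented row shape;
// output keys are illustrative assumptions.
const toChartData = (rows) =>
    rows.map((row, index) => ({
        name: `#${index + 1}`,                          // x-axis label per dataset row
        latency: row.latency,                           // milliseconds, for ChartLatency
        totalTokens: row.tokenUsage?.totalTokens ?? 0   // for ChartTokens
    }))
```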
I/O Contract
getEvaluation
Inputs:
id (string, required): The unique identifier of the evaluation run to retrieve
Outputs:
Promise<{data: EvaluationDetail}>: Resolves with the full evaluation detail object containing:
- summary (Object): Aggregate metrics including pass count, fail count, error count, average latency, and average cost
- rows (Array): Per-row evaluation results, each containing:
  - input (string): The original input prompt from the dataset
  - expectedOutput (string): The expected output from the dataset
  - actualOutput (Object): The actual chatflow response(s), keyed by chatflow ID
  - evaluatorResults (Object): Pass/fail results for each evaluator
  - latency (number): API response time in milliseconds
  - tokenUsage (Object): Token consumption details (promptTokens, completionTokens, totalTokens)
  - cost (number): Estimated cost for the row
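To make the contract concrete, here is an illustration of the resolved value's shape and a derived metric. All values and the chatflow ID key are placeholders, not real output; field names follow the contract above, and the pass-percentage calculation is an assumption about what ChartPassPrnt visualizes:

```javascript
// Illustrative shape of response.data; values are placeholders.
const exampleDetail = {
    summary: { passCount: 8, failCount: 1, errorCount: 1, avgLatency: 910, avgCost: 0.0021 },
    rows: [
        {
            input: 'What is Flowise?',
            expectedOutput: 'An open-source LLM orchestration tool',
            actualOutput: { 'chatflow-abc': 'Flowise is an open-source tool...' },
            evaluatorResults: { 'contains-keyword': 'pass' },
            latency: 870,
            tokenUsage: { promptTokens: 12, completionTokens: 40, totalTokens: 52 },
            cost: 0.0002
        }
    ]
}

// Derived metric ChartPassPrnt presumably visualizes (assumption):
const total =
    exampleDetail.summary.passCount +
    exampleDetail.summary.failCount +
    exampleDetail.summary.errorCount
const passPercent = Math.round((exampleDetail.summary.passCount / total) * 100)
```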
Usage Examples
Fetching Evaluation Results
import evaluationApi from '@/api/evaluations'
// Retrieve the full evaluation results
const response = await evaluationApi.getEvaluation('eval-run-123')
const evaluationDetail = response.data
// Access summary metrics
const { passCount, failCount, errorCount, avgLatency, avgCost } = evaluationDetail.summary
// Iterate per-row results
evaluationDetail.rows.forEach((row) => {
    console.log('Input:', row.input)
    console.log('Expected:', row.expectedOutput)
    console.log('Actual:', row.actualOutput)
    console.log('Evaluators:', row.evaluatorResults)
})
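Because getEvaluation returns the raw client promise, callers are responsible for failure handling. A hedged sketch of a wrapper (the `fetchEvaluationSafely` helper is hypothetical, and the `error.response?.status` access assumes an axios-style error object):

```javascript
// Hypothetical wrapper adding basic failure handling around getEvaluation.
// The response?.status access assumes an axios-style error object.
const fetchEvaluationSafely = async (api, id) => {
    try {
        const response = await api.getEvaluation(id)
        return { ok: true, detail: response.data }
    } catch (error) {
        const status = error.response?.status // e.g. 404 when the run id is unknown
        return { ok: false, status, message: error.message }
    }
}
```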
Using with the useApi Hook in React
import React, { useEffect } from 'react'
import useApi from '@/hooks/useApi'
import evaluationApi from '@/api/evaluations'
const EvaluationResultPage = ({ evaluationId }) => {
    const getEvaluationApi = useApi(evaluationApi.getEvaluation)

    useEffect(() => {
        getEvaluationApi.request(evaluationId)
        // eslint-disable-next-line react-hooks/exhaustive-deps
    }, [evaluationId])

    if (!getEvaluationApi.data) return null

    const evaluation = getEvaluationApi.data
    // Render MetricsItemCard, ChartPassPrnt, ChartLatency, ChartTokens
    // and the per-row results table from `evaluation`
    return <div>{/* visualization components go here */}</div>
}
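The useApi hook wraps the raw promise in request/data/error state. A framework-free sketch of that pattern (this illustrates the idea only; it is not the hook's actual source, and the exact fields exposed by `@/hooks/useApi` are an assumption):

```javascript
// Framework-free sketch of the request/data/error pattern that a hook
// like useApi typically implements (illustrative, not the real hook).
const createApiState = (apiFunc) => {
    const state = { data: null, error: null, loading: false }
    state.request = async (...args) => {
        state.loading = true
        try {
            const response = await apiFunc(...args)
            state.data = response.data
        } catch (err) {
            state.error = err
        } finally {
            state.loading = false
        }
        return state.data
    }
    return state
}
```

In the React version, each state change would trigger a re-render via useState rather than mutating a plain object.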