Workflow: Langgenius Dify Workflow Builder and Execution
| Knowledge Sources | |
|---|---|
| Domains | LLM_Ops, Workflow_Orchestration, AI_Engineering |
| Last Updated | 2026-02-12 07:00 GMT |
Overview
End-to-end process for building, testing, and executing DAG-based AI workflows in Dify's visual workflow editor, from node composition through variable wiring to production publishing.
Description
This workflow covers building and executing applications in Dify's core product feature: the visual workflow editor. Users compose directed acyclic graphs (DAGs) of processing nodes on a canvas, wire variables between nodes, test individual nodes and the complete graph, then publish versioned workflows. The graph engine supports 40+ node types including LLM inference, code execution, HTTP requests, conditional branching, iteration loops, parallel execution, knowledge retrieval, and human-in-the-loop input. Workflows execute via a streaming event system that reports node-level progress in real time.
Usage
Execute this workflow when you need to build a multi-step AI application that goes beyond simple prompt-response patterns. This is appropriate for orchestrating complex processing pipelines, implementing conditional logic, connecting multiple LLM calls, integrating external APIs, or combining retrieval with generation. The workflow editor is the primary interface for building chatflows (conversational workflows) and standard workflows (single-run processing).
Execution Steps
Step 1: Create Workflow Application
Create a new application in workflow mode from the Dify console. Choose between a standard workflow (single-run, batch processing) or a chatflow (conversational interface with workflow logic). The canvas initializes with a Start node for input definition and an End node for output specification.
Workflow types:
- Standard Workflow: Single-run execution with defined inputs and outputs
- Chatflow: Conversational interface powered by workflow logic with memory management
- Both types share the same node library and execution engine
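The split between the two types is also visible at the API layer: standard workflows are triggered as single runs, while chatflows take a query plus a conversation ID. A minimal sketch of the two request bodies, assuming the `POST /v1/workflows/run` and `POST /v1/chat-messages` endpoints from Dify's published API (field names here follow that API as best recalled; verify against your instance's docs):

```python
import json

API_BASE = "https://api.dify.ai/v1"  # or your self-hosted Dify instance

def workflow_payload(inputs: dict, user: str) -> dict:
    """Request body for a standard workflow run (POST /v1/workflows/run)."""
    return {"inputs": inputs, "response_mode": "streaming", "user": user}

def chatflow_payload(query: str, inputs: dict, user: str,
                     conversation_id: str = "") -> dict:
    """Request body for one chatflow turn (POST /v1/chat-messages)."""
    return {
        "query": query,
        "inputs": inputs,
        "response_mode": "streaming",
        "conversation_id": conversation_id,  # empty string starts a new conversation
        "user": user,
    }

print(json.dumps(workflow_payload({"topic": "LLM Ops"}, "user-123"), indent=2))
```

Both payloads carry a `user` identifier so Dify can attribute runs and, for chatflows, maintain per-user conversation state.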
Step 2: Define Input Variables
Configure the Start node to define the workflow's input schema. Add typed input variables (text, number, file, select, etc.) that users or API callers provide when triggering the workflow. For chatflows, system variables like conversation history and user identity are automatically available.
Variable types:
- Text, paragraph, number, select (dropdown)
- File upload (single or multiple)
- System variables: conversation ID, user ID, dialogue count
- Environment variables for secrets and configuration
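Because API callers supply these inputs as free-form JSON, it helps to reason about the Start node as a schema that values are checked against before a run begins. The sketch below is purely illustrative (the field names `variable`, `type`, `required`, `max_length` are hypothetical, not Dify's wire format):

```python
# Hypothetical sketch: validating caller-supplied inputs against a Start-node
# schema before triggering a run. Field names are illustrative, not Dify's API.
SCHEMA = [
    {"variable": "topic", "type": "text", "required": True, "max_length": 48},
    {"variable": "tone", "type": "select", "required": False,
     "options": ["formal", "casual"]},
    {"variable": "word_count", "type": "number", "required": False},
]

def validate_inputs(schema: list, inputs: dict) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    for field in schema:
        name = field["variable"]
        if name not in inputs:
            if field.get("required"):
                errors.append(f"missing required input: {name}")
            continue
        value = inputs[name]
        if field["type"] == "text":
            if not isinstance(value, str):
                errors.append(f"{name}: expected text")
            elif "max_length" in field and len(value) > field["max_length"]:
                errors.append(f"{name}: exceeds max_length {field['max_length']}")
        elif field["type"] == "number" and not isinstance(value, (int, float)):
            errors.append(f"{name}: expected number")
        elif field["type"] == "select" and value not in field["options"]:
            errors.append(f"{name}: not one of {field['options']}")
    return errors

print(validate_inputs(SCHEMA, {"topic": "LLM Ops", "tone": "formal"}))  # []
```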
Step 3: Compose Node Graph
Drag nodes onto the canvas and connect them with edges to define the processing flow. Each node performs a specific operation and passes its output to downstream nodes via variable references. The editor supports branching (IF/ELSE), iteration (loops over arrays), parallel execution, and error handling paths.
Core node types:
- LLM: Invoke a language model with a prompt template
- Knowledge Retrieval: Query datasets for relevant context
- Code: Execute Python or JavaScript code snippets
- HTTP Request: Call external APIs
- IF/ELSE: Conditional branching based on variable values
- Iteration: Loop over array elements
- Parallel: Execute multiple branches concurrently
- Tool: Invoke built-in or custom tools
- Template Transform: Apply Jinja2 templates to data
- Variable Aggregator: Merge outputs from parallel branches
- Human Input: Pause execution for manual review or data entry
- Answer: Stream text output to the user (chatflows)
- End: Define final output variables (standard workflows)
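Of these, the Code node is the one you author directly: it runs a single function, conventionally named `main`, and the keys of the returned dict become the node's typed output variables for downstream nodes. A minimal sketch (the parameter and output names are invented for illustration):

```python
# Body of a Dify Code node: main() receives upstream variables as arguments,
# and each key of the returned dict becomes a typed output variable.
def main(raw_tags: str) -> dict:
    tags = [t.strip().lower() for t in raw_tags.split(",") if t.strip()]
    return {
        "tags": tags,            # array[string] output
        "tag_count": len(tags),  # number output
    }

print(main("LLM, RAG , Agents"))  # {'tags': ['llm', 'rag', 'agents'], 'tag_count': 3}
```

Keeping each Code node to one small, pure function like this makes Step 5's single-node testing straightforward.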
Step 4: Wire Variables Between Nodes
Connect node outputs to downstream node inputs using the variable reference system. Each node exposes typed output variables that can be referenced by any downstream node. The variable inspector shows all available variables at each point in the graph, including outputs from all upstream nodes.
Variable wiring:
- Reference upstream outputs using node-scoped variable selectors
- Type checking ensures compatible connections
- System variables available throughout the graph
- Environment variables accessible from any node
- Conversation variables persist across chatflow turns
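Conceptually, a node-scoped selector names a node and one of its output variables, and the engine substitutes the upstream value at execution time. The sketch below uses the `{{#node_id.variable#}}` form seen in Dify's prompt editor; the resolver itself is illustrative, not Dify's implementation:

```python
import re

# Illustrative resolver for node-scoped variable selectors of the form
# {{#node_id.variable#}}; the resolution logic is a sketch, not Dify's code.
SELECTOR = re.compile(r"\{\{#([\w\-]+)\.([\w\-]+)#\}\}")

def resolve(template: str, node_outputs: dict) -> str:
    """Replace each selector with the named upstream node's output value."""
    def substitute(match):
        node_id, variable = match.group(1), match.group(2)
        return str(node_outputs[node_id][variable])
    return SELECTOR.sub(substitute, template)

outputs = {
    "start": {"topic": "vector databases"},
    "retrieval": {"result": "3 matching chunks"},
}
prompt = "Summarize {{#retrieval.result#}} about {{#start.topic#}}."
print(resolve(prompt, outputs))
# Summarize 3 matching chunks about vector databases.
```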
Step 5: Test Individual Nodes
Before testing the complete workflow, test individual nodes in isolation using the single-node run feature. Provide sample input values and examine the node's output, execution time, and token usage. This enables iterative development of each processing step.
Single-node testing:
- Provide mock input values for the node
- Execute the node in isolation
- Inspect output variables and their values
- Review execution metadata (duration, tokens, errors)
- Iterate on node configuration based on results
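The single-node report pairs output variables with execution metadata. A hypothetical local harness that mirrors that shape (Dify produces this report inside the editor; this sketch only illustrates what it contains):

```python
import time

# Hypothetical harness mirroring a single-node run report: output variables
# plus execution metadata (status, error, duration). Not Dify's internals.
def run_node(node_fn, mock_inputs: dict) -> dict:
    started = time.perf_counter()
    try:
        outputs = node_fn(**mock_inputs)
        status, error = "succeeded", None
    except Exception as exc:
        outputs, status = None, "failed"
        error = f"{type(exc).__name__}: {exc}"
    return {
        "status": status,
        "outputs": outputs,
        "error": error,
        "elapsed_ms": round((time.perf_counter() - started) * 1000, 2),
    }

report = run_node(lambda text: {"upper": text.upper()}, {"text": "draft"})
print(report["status"], report["outputs"])  # succeeded {'upper': 'DRAFT'}
```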
Step 6: Run and Debug Complete Workflow
Execute the entire workflow with test inputs to validate the end-to-end processing flow. The execution panel shows real-time progress through the graph, with each node's status, inputs, outputs, and execution time. Use the run log to trace variable values through the entire execution path.
Debugging features:
- Real-time node execution status on the canvas
- Streaming output display for LLM and Answer nodes
- Variable value inspection at each node
- Execution trace with timing information
- Error details and stack traces for failed nodes
- Conversation variable state for chatflows
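The same progress information is exposed to API clients as server-sent events. Event names such as `workflow_started`, `node_started`, `node_finished`, and `workflow_finished` follow Dify's published streaming API; the transcript below is fabricated for illustration and parsed offline:

```python
import json

# Sketch of consuming Dify's streaming run events. The transcript is a
# fabricated example of the SSE format; event names follow the API docs.
TRANSCRIPT = """\
data: {"event": "workflow_started", "data": {"id": "run-1"}}

data: {"event": "node_started", "data": {"node_id": "llm", "title": "LLM"}}

data: {"event": "node_finished", "data": {"node_id": "llm", "status": "succeeded", "elapsed_time": 1.42}}

data: {"event": "workflow_finished", "data": {"status": "succeeded", "outputs": {"answer": "done"}}}
"""

def parse_sse(stream: str):
    """Yield decoded JSON payloads from the data: lines of an SSE stream."""
    for line in stream.splitlines():
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

for event in parse_sse(TRANSCRIPT):
    print(event["event"], event["data"].get("status", ""))
```

A real client would read these lines from the HTTP response body and update per-node status on the fly, exactly as the canvas does.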
Step 7: Publish and Version
Publish the tested workflow to make it available for production use. Each publish creates a versioned snapshot with optional release notes. Published workflows can be accessed via the web interface, API endpoints, or embedded widgets. Version history allows rollback to previous versions.
Publishing features:
- Create versioned snapshots with release notes
- Auto-save of draft changes during development
- Version history with infinite scroll pagination
- Restore previous versions when needed
- Published version serves production traffic while draft continues evolving