Implementation: Langgenius Dify FetchPromptTemplate
| Knowledge Sources | Domains | Last Updated |
|---|---|---|
| Dify | LLM_Applications, Frontend, API | 2026-02-12 00:00 GMT |
Overview
Description
fetchPromptTemplate is a frontend service function that retrieves recommended prompt template configurations from the backend. Given the application mode, model mode, model name, and whether a knowledge base (dataset) is connected, it returns pre-built prompt templates appropriate for both chat-mode and completion-mode interactions. This allows the platform to provide sensible defaults and scaffolding for prompt design based on the current application context.
The function queries the /app/prompt-templates endpoint with the configuration parameters, and the backend returns both a ChatPromptConfig (for chat-mode models) and a CompletionPromptConfig (for completion-mode models), plus any recommended stop sequences.
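To make the response shape concrete, the sketch below builds an illustrative object matching the documented fields. The field names come from the signature documented below; the template text, roles, and prefixes are invented examples for illustration, not real Dify output.

```typescript
// Illustrative shape of a /app/prompt-templates response.
// Field names follow the documented signature; the text values,
// roles, and prefixes are invented examples, not real Dify output.
const exampleResponse = {
  chat_prompt_config: {
    prompt: [
      { role: 'system', text: 'You are a helpful assistant. Use the provided context.' },
    ],
  },
  completion_prompt_config: {
    prompt: { text: 'Answer the question using the provided context.' },
    conversation_histories_role: {
      user_prefix: 'Human',
      assistant_prefix: 'Assistant',
    },
  },
  stop: [] as string[],
}

console.log(exampleResponse.chat_prompt_config.prompt[0].role) // 'system'
```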
Additionally, the useDebugConfigurationContext hook (defined in web/context/debug-configuration.ts, lines 31-109) provides the React context that holds the full prompt editing state during the configuration workflow. This context stores the prompt mode (simple vs. advanced), current advanced prompt content, conversation history roles, block insertion status, model configuration, dataset connections, completion parameters, and all feature toggles. It serves as the centralized state management layer for the prompt design and debug configuration UI.
Usage
Call fetchPromptTemplate when the user changes the application mode, model, or dataset connection status to retrieve appropriate prompt template recommendations. Use useDebugConfigurationContext in any React component within the configuration workflow to access or modify the current prompt state.
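A minimal sketch of that refetch-on-change flow is shown below. Here fetchPromptTemplateStub stands in for the real fetchPromptTemplate (which requires the Dify backend), and refreshTemplates is a hypothetical helper name, not part of Dify's API:

```typescript
// Hypothetical sketch: refetch template recommendations only when the
// app mode, model mode, model name, or dataset connection changes.
type TemplateParams = {
  appMode: string
  mode: string
  modelName: string
  hasSetDataSet: boolean
}

// Stub standing in for the real fetchPromptTemplate service call.
const fetchPromptTemplateStub = async (_params: TemplateParams) => ({
  chat_prompt_config: { prompt: [] as { role: string; text: string }[] },
  completion_prompt_config: {
    prompt: { text: '' },
    conversation_histories_role: { user_prefix: 'Human', assistant_prefix: 'Assistant' },
  },
  stop: [] as string[],
})

// Remember the last set of parameters and skip redundant requests.
let lastKey = ''
async function refreshTemplates(params: TemplateParams) {
  const key = JSON.stringify(params)
  if (key === lastKey) return null // inputs unchanged; no request needed
  lastKey = key
  return fetchPromptTemplateStub(params)
}
```

In a React component, the same check falls out naturally from putting the four parameters in a useEffect dependency array.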
Code Reference
Source Location
web/service/debug.ts, lines 86-100
Signature
```typescript
export const fetchPromptTemplate = ({
  appMode,
  mode,
  modelName,
  hasSetDataSet,
}: {
  appMode: AppModeEnum
  mode: ModelModeType
  modelName: string
  hasSetDataSet: boolean
}) => {
  return get<Promise<{
    chat_prompt_config: ChatPromptConfig
    completion_prompt_config: CompletionPromptConfig
    stop: []
  }>>('/app/prompt-templates', {
    params: {
      app_mode: appMode,
      model_mode: mode,
      model_name: modelName,
      has_context: hasSetDataSet,
    },
  })
}
```
Related: useDebugConfigurationContext (web/context/debug-configuration.ts, lines 31-109):
```typescript
type IDebugConfiguration = {
  appId: string
  mode: AppModeEnum
  modelModeType: ModelModeType
  promptMode: PromptMode
  setPromptMode: (promptMode: PromptMode) => void
  isAdvancedMode: boolean
  chatPromptConfig: ChatPromptConfig
  completionPromptConfig: CompletionPromptConfig
  currentAdvancedPrompt: PromptItem | PromptItem[]
  setCurrentAdvancedPrompt: (prompt: PromptItem | PromptItem[], isUserChanged?: boolean) => void
  conversationHistoriesRole: ConversationHistoriesRole
  setConversationHistoriesRole: (role: ConversationHistoriesRole) => void
  hasSetBlockStatus: BlockStatus
  modelConfig: ModelConfig
  setModelConfig: (modelConfig: ModelConfig) => void
  completionParams: FormValue
  setCompletionParams: (completionParams: FormValue) => void
  dataSets: DataSet[]
  setDataSets: (dataSet: DataSet[]) => void
  // ... additional state properties
}

export const useDebugConfigurationContext = () => useContext(DebugConfigurationContext)
```
Import
```typescript
import { fetchPromptTemplate } from '@/service/debug'
import { useDebugConfigurationContext } from '@/context/debug-configuration'
```
I/O Contract
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| appMode | AppModeEnum | Yes | The application mode: 'completion', 'workflow', 'chat', 'advanced-chat', or 'agent-chat'. |
| mode | ModelModeType | Yes | The model interaction mode: 'chat', 'completion', or (unset). |
| modelName | string | Yes | The name of the selected model (e.g., 'gpt-3.5-turbo', 'gpt-4'). |
| hasSetDataSet | boolean | Yes | Whether a knowledge base (dataset) has been connected to the application. When true, the returned template includes context injection placeholders. |
Outputs
| Field | Type | Description |
|---|---|---|
| chat_prompt_config | ChatPromptConfig | A prompt configuration for chat-mode models, containing an array of PromptItem objects with role ('system', 'user', 'assistant') and text fields. |
| completion_prompt_config | CompletionPromptConfig | A prompt configuration for completion-mode models, containing a single PromptItem (prompt text) and conversation_histories_role with user_prefix and assistant_prefix strings. |
| stop | string[] | An array of recommended stop sequences (may be empty; declared as `[]` in the source signature). |
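Because the endpoint always returns both configurations, a caller typically picks one based on the model's interaction mode. The sketch below shows a minimal selector; the local type definitions are approximations of the real ChatPromptConfig and CompletionPromptConfig types, written here only to keep the example self-contained:

```typescript
// Local approximations of the documented response types.
type PromptItem = { role?: string; text: string }
type ChatPromptConfig = { prompt: PromptItem[] }
type CompletionPromptConfig = {
  prompt: PromptItem
  conversation_histories_role: { user_prefix: string; assistant_prefix: string }
}
type TemplateResponse = {
  chat_prompt_config: ChatPromptConfig
  completion_prompt_config: CompletionPromptConfig
  stop: string[]
}

// Pick the config that matches the model's interaction mode.
function selectPromptConfig(
  mode: 'chat' | 'completion',
  res: TemplateResponse,
): ChatPromptConfig | CompletionPromptConfig {
  return mode === 'chat' ? res.chat_prompt_config : res.completion_prompt_config
}
```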
Usage Examples
Fetching a prompt template for a chat app with a connected dataset
```typescript
import { fetchPromptTemplate } from '@/service/debug'
import { AppModeEnum, ModelModeType } from '@/types/app'

const templates = await fetchPromptTemplate({
  appMode: AppModeEnum.CHAT,
  mode: ModelModeType.chat,
  modelName: 'gpt-4',
  hasSetDataSet: true,
})

// Use the chat prompt config for a chat-mode model
console.log(templates.chat_prompt_config.prompt)
// [{ role: 'system', text: '...' }, { role: 'user', text: '...' }]
```
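For completion-mode models, the user_prefix and assistant_prefix strings from conversation_histories_role are used to flatten chat history into plain prompt text. The sketch below illustrates that flattening; renderHistory is a hypothetical helper written for this example, not part of Dify's API:

```typescript
// Hypothetical helper: flatten chat turns into completion-style text
// using the user_prefix / assistant_prefix strings returned in
// completion_prompt_config.conversation_histories_role.
type HistoriesRole = { user_prefix: string; assistant_prefix: string }
type Turn = { role: 'user' | 'assistant'; text: string }

function renderHistory(turns: Turn[], roles: HistoriesRole): string {
  return turns
    .map(t => `${t.role === 'user' ? roles.user_prefix : roles.assistant_prefix}: ${t.text}`)
    .join('\n')
}

const rendered = renderHistory(
  [
    { role: 'user', text: 'Hello' },
    { role: 'assistant', text: 'Hi, how can I help?' },
  ],
  { user_prefix: 'Human', assistant_prefix: 'Assistant' },
)
// rendered === 'Human: Hello\nAssistant: Hi, how can I help?'
```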
Accessing prompt state in a React component
import { useDebugConfigurationContext } from '@/context/debug-configuration'
```typescript
function PromptEditor() {
  const {
    promptMode,
    setPromptMode,
    currentAdvancedPrompt,
    setCurrentAdvancedPrompt,
    chatPromptConfig,
    completionPromptConfig,
    modelConfig,
  } = useDebugConfigurationContext()
  // Read and modify prompt state through the context
}
```