
Workflow:Infiniflow Ragflow Search Application Setup

From Leeroopedia
Knowledge Sources
Domains: RAG, Search, Information_Retrieval
Last Updated: 2026-02-12 06:00 GMT

Overview

End-to-end process for creating a RAG-powered search application in RAGFlow that provides AI-synthesized answers alongside traditional document retrieval results, with embeddable search interfaces.

Description

This workflow covers building a search application in RAGFlow. Unlike the conversational chat application, a search application provides a search-oriented interface where users submit queries and receive both AI-generated summary answers and a list of relevant document chunks with source references. The search application combines retrieval from linked knowledge bases with LLM-powered answer synthesis, mind map visualization, and document preview capabilities. Search applications can be embedded in external websites via iframe and shared publicly.

Usage

Execute this workflow when you need a search-style interface (as opposed to a conversational chat interface) for querying your document corpus. Use it for building internal knowledge portals, documentation search engines, research tools, or any application where users need to find specific information across a large document collection with AI-powered answer synthesis.

Execution Steps

Step 1: Create Search Application

Create a new search application by providing a name and optional description. This registers a new search application record in the system. The application can be configured with a custom avatar and description for display purposes.

Key considerations:

  • Search applications are distinct from chat applications in UI and interaction model
  • Each search application has its own configuration for retrieval and LLM settings
  • Multiple search applications can be created for different knowledge domains
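As a sketch, creating a search application reduces to one authenticated POST to the RAGFlow server. The route, header layout, and field names below are assumptions for illustration, not the confirmed RAGFlow API:

```python
import json

# Hypothetical request builder for registering a new search application.
# The route ("/v1/search/create") and body fields are assumptions.
def build_create_request(base_url, api_key, name, description="", avatar=None):
    """Assemble the pieces a real client would pass to requests.post()."""
    body = {"name": name, "description": description}
    if avatar is not None:
        body["avatar"] = avatar  # optional base64-encoded image for display
    return {
        "url": f"{base_url}/v1/search/create",
        "headers": {"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"},
        "data": json.dumps(body),
    }

req = build_create_request("http://localhost:9380", "MY_KEY",
                           "Docs Search", "Internal documentation portal")
print(req["url"])
```

A real client would post these pieces with an HTTP library and keep the returned application id, which identifies the new search application record in later steps.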

Step 2: Link Knowledge Bases

Connect one or more knowledge bases to the search application. These knowledge bases define the corpus that will be searched when users submit queries. Multiple knowledge bases can be combined to broaden the search scope.

Key considerations:

  • At least one knowledge base with processed documents should be linked
  • Knowledge bases can be added or removed after creation
  • The search will query across all linked knowledge bases
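The linking behavior above can be sketched as simple set operations over knowledge base ids. In RAGFlow this state is persisted server-side; the helper functions and kb ids here are illustrative placeholders:

```python
# Minimal sketch of managing the knowledge bases linked to a search app.
def link_kbs(linked, kb_ids):
    """Add knowledge base ids to the linked set; duplicates are ignored."""
    return sorted(set(linked) | set(kb_ids))

def unlink_kbs(linked, kb_ids):
    """Remove knowledge base ids; queries then skip those corpora."""
    return sorted(set(linked) - set(kb_ids))

linked = link_kbs([], ["kb_docs", "kb_faq"])
linked = link_kbs(linked, ["kb_api", "kb_docs"])  # re-linking is a no-op
linked = unlink_kbs(linked, ["kb_faq"])
print(linked)  # ['kb_api', 'kb_docs']
```

Because every query fans out across all linked ids, adding a knowledge base broadens the corpus immediately, with no re-indexing of the application itself.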

Step 3: Configure Search and LLM Settings

Configure the retrieval parameters and LLM model for answer synthesis. Key settings include the LLM model for generating answers, similarity threshold, Top-N chunks, keyword vs. semantic search weight, reranking model, system prompt template, and empty response behavior. The search application also supports Tavily web search integration for supplementing knowledge base results.

Key considerations:

  • Similarity threshold and Top-N control the quality-quantity tradeoff of retrieved results
  • Keyword weight balances BM25 keyword matching against semantic similarity
  • A reranking model improves result ordering after initial retrieval
  • The system prompt determines how the LLM synthesizes answers from retrieved chunks
  • Tavily API key enables web search augmentation
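The interplay of keyword weight, similarity threshold, and Top-N can be shown with plain arithmetic. The weighted-sum blend below mirrors common hybrid-retrieval practice and is an illustration, not RAGFlow's exact internal scoring formula:

```python
def hybrid_rank(chunks, keyword_weight=0.3, threshold=0.2, top_n=3):
    """Blend keyword (BM25-style) and semantic scores, filter, truncate.

    chunks: list of (chunk_id, keyword_score, semantic_score),
    with both scores assumed pre-normalized to [0, 1].
    """
    scored = []
    for cid, kw, sem in chunks:
        score = keyword_weight * kw + (1 - keyword_weight) * sem
        if score >= threshold:          # similarity threshold: quality gate
            scored.append((cid, round(score, 3)))
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_n]               # Top-N: quantity cap

chunks = [("c1", 0.9, 0.4), ("c2", 0.1, 0.8), ("c3", 0.05, 0.1)]
print(hybrid_rank(chunks))  # [('c2', 0.59), ('c1', 0.55)]
```

Raising `keyword_weight` favors exact term matches (`c1` here); lowering it favors semantic neighbors (`c2`). The threshold dropped `c3` entirely, which is the quality side of the quality-quantity tradeoff.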

Step 4: Test Search Queries

Submit test queries through the search interface to validate retrieval quality and answer synthesis. The interface displays the AI-generated answer with inline citations, a list of retrieved document chunks with relevance scores, and optional mind map visualization of the answer structure. Review the document preview to verify source accuracy.

What happens:

  • Query is processed through the retrieval pipeline (embedding, search, reranking)
  • Retrieved chunks are sent to the LLM with the system prompt for answer synthesis
  • The AI answer is displayed with inline reference citations
  • Retrieved document chunks are listed below with scores and source information
  • A mind map visualization is optionally generated from the answer
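The synthesis step above can be sketched as assembling the system prompt plus numbered context, so the LLM can emit inline citations like [1]. The template below is a generic placeholder, not RAGFlow's shipped prompt:

```python
def build_synthesis_prompt(system_prompt, query, chunks):
    """Number retrieved chunks so the generated answer can cite them inline."""
    context = "\n".join(
        f"[{i}] {text} (source: {src})"
        for i, (text, src) in enumerate(chunks, start=1)
    )
    return (f"{system_prompt}\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer with inline citations like [1].")

prompt = build_synthesis_prompt(
    "Answer strictly from the context.",
    "How do I rotate an API key?",
    [("Keys rotate under Settings > Security.", "admin-guide.md"),
     ("Old keys expire after 24 hours.", "faq.md")],
)
print(prompt)
```

The numbering is what lets the UI map a citation marker in the answer back to a specific chunk in the result list, which is also what the document preview verifies.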

Step 5: Embed and Share

Deploy the search application for use. Options include direct access via the RAGFlow UI, embedding in external websites using an auto-generated iframe code snippet, or sharing via a public URL. The embed modal provides the HTML code needed for integration.

Key considerations:

  • The embed code generates a self-contained iframe with the search interface
  • Public sharing creates an accessible URL that does not require RAGFlow login
  • The search application appearance can be customized for embedding
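Generating the embed code is plain string templating. The URL pattern and query parameter below are assumptions about how a shared search page might be addressed, not RAGFlow's exact embed format, which the embed modal supplies:

```python
def embed_snippet(host, app_id, width="100%", height="600"):
    """Return an iframe tag pointing at the public search page (URL assumed)."""
    src = f"{host}/search/share?app_id={app_id}"
    return (f'<iframe src="{src}" width="{width}" height="{height}" '
            f'frameborder="0"></iframe>')

html = embed_snippet("https://ragflow.example.com", "abc123")
print(html)
```

Because the iframe is self-contained, the host page only needs this one tag; authentication is bypassed by design on the public share URL, so link the application only to knowledge bases that are safe to expose.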

Execution Diagram

GitHub URL

Workflow Repository