Implementation: FlagOpen FlagEmbedding Reinforced IR Model
| Knowledge Sources | |
|---|---|
| Domains | Information Retrieval, Inference, Query Augmentation |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
Inference wrapper that combines a retrieval model with an LLM-based query augmentation generator for Reinforced IR.
Description
This class provides a unified inference interface for the Reinforced IR system, managing both the retrieval model and the optional query augmentation generator. It handles lazy loading and memory management, loading models only when needed and offloading them to save GPU memory when switching between generation and retrieval. The system supports dynamic query augmentation where an LLM generates additional context for queries before embedding.
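The lazy-loading and offloading behavior described above can be sketched as follows. This is a minimal illustration, not the actual implementation; the class name `LazyGenerator` and its factory-based construction are assumptions for the sketch.

```python
import gc

class LazyGenerator:
    """Sketch of the lazy-load / offload pattern: the LLM generator is
    instantiated only on first use and released before retrieval encoding,
    freeing GPU memory for the embedding model."""

    def __init__(self, generator_factory):
        self._factory = generator_factory  # callable that builds the LLM
        self._generator = None

    def generate(self, prompts):
        if self._generator is None:        # lazy load on first call
            self._generator = self._factory()
        return [self._generator(p) for p in prompts]

    def offload(self):
        """Drop the generator so the retriever can use the GPU."""
        self._generator = None
        gc.collect()  # in a real GPU setup: also torch.cuda.empty_cache()
```

The same pattern applies in reverse when switching from retrieval back to generation.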
The model uses a two-stage retrieval process: first optionally generating augmented context for queries using an LLM with a task-specific prompt template, then encoding both the original query and augmentation using the retrieval model. The final query representation is a weighted combination (typically 80% original query, 20% augmentation). This approach improves retrieval performance by enriching queries with generated contextual information while maintaining the semantic signal from the original query.
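The weighted combination above can be written out directly. The 0.8/0.2 weights come from the description; the re-normalization step is an assumption (reasonable when `normalize_embeddings=True`, so that dot products remain cosine similarities).

```python
import numpy as np

def combine_query_embedding(q_emb, aug_emb, w_query=0.8, w_aug=0.2):
    """Weighted combination of the original query embedding and the
    augmentation embedding, re-normalized to unit length.
    final = 0.8 * original_query + 0.2 * augmentation (per the description)."""
    combined = w_query * np.asarray(q_emb) + w_aug * np.asarray(aug_emb)
    norms = np.linalg.norm(combined, axis=-1, keepdims=True)
    return combined / np.clip(norms, 1e-12, None)
```

Keeping the original query dominant (0.8) preserves its semantic signal; the augmentation (0.2) only nudges the vector toward the generated context.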
Usage
Use this class for inference with Reinforced IR models, particularly when you want to leverage LLM-based query augmentation to improve retrieval performance on downstream tasks.
Code Reference
Source Location
- Repository: FlagOpen_FlagEmbedding
- File: research/Reinforced_IR/inference/ir_model.py
- Lines: 1-135
Signature
class Reinforced_IR_Model:
def __init__(
self,
model_name_or_path: str,
model_class: Optional[Union[str, EmbedderModelClass]] = None,
normalize_embeddings: bool = True,
use_fp16: bool = True,
query_instruction_for_retrieval: Optional[str] = None,
devices: Optional[Union[str, List[str]]] = None,
pooling_method: Optional[str] = None,
trust_remote_code: Optional[bool] = None,
query_instruction_format: Optional[str] = None,
generator_model_name_or_path: Optional[str] = None,
temperature: float = 1.0,
gpu_memory_utilization: float = 0.5,
tensor_parallel_size: Optional[int] = None,
top_p: float = 1.0,
max_tokens: int = 300,
api_key: Optional[str] = None,
base_url: Optional[str] = None,
model_type: str = "llm_instruct",
**kwargs
)
def encode_queries(self, task_instruction, answer_type, queries, **kwargs):
"""Encode queries with optional augmentation"""
def encode_corpus(self, corpus, **kwargs):
"""Encode corpus passages"""
Import
from FlagEmbedding import FlagAutoModel
from agent import GPTAgent, LLMAgent, LLMInstructAgent
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model_name_or_path | str | Yes | Path to retrieval model |
| generator_model_name_or_path | str | No | Path to LLM for query augmentation |
| task_instruction | str | Yes | Task description for the augmentation prompt; pass "" when augmentation is disabled |
| answer_type | str | Yes | Type of augmentation to generate (e.g., "topic", "summary"); pass "" when augmentation is disabled |
| queries | List[str] | Yes | List of queries to encode |
| corpus | List[str] | Yes | List of passages to encode |
| temperature | float | No | LLM generation temperature (default: 1.0) |
| max_tokens | int | No | Max tokens for augmentation (default: 300) |
| model_type | str | No | LLM type: "llm", "llm_instruct", "gpt" (default: "llm_instruct") |
Outputs
| Name | Type | Description |
|---|---|---|
| query_embeddings | np.ndarray | Encoded query vectors (with augmentation if enabled) |
| corpus_embeddings | np.ndarray | Encoded corpus passage vectors |
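A typical downstream use of these outputs is nearest-neighbor retrieval. Since the embeddings are normalized by default (`normalize_embeddings=True`), the dot product equals cosine similarity; the helper below is an illustrative sketch, not part of the class API.

```python
import numpy as np

def top_k_passages(query_embeddings, corpus_embeddings, k=2):
    """Rank corpus passages for each query by dot-product similarity
    (cosine similarity for unit-normalized embeddings) and return the
    indices of the k highest-scoring passages per query."""
    scores = query_embeddings @ corpus_embeddings.T  # (n_queries, n_passages)
    return np.argsort(-scores, axis=1)[:, :k]
```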
Usage Examples
# Initialize with retrieval model only (no augmentation)
model = Reinforced_IR_Model(
model_name_or_path="BAAI/bge-base-en-v1.5",
normalize_embeddings=True,
use_fp16=True
)
# Encode without augmentation
corpus = ["Passage 1", "Passage 2", "Passage 3"]
queries = ["Query 1", "Query 2"]
corpus_embeddings = model.encode_corpus(corpus, batch_size=256)
query_embeddings = model.encode_queries(
task_instruction="",
answer_type="",
queries=queries,
batch_size=256
)
# Initialize with augmentation
model = Reinforced_IR_Model(
model_name_or_path="BAAI/bge-base-en-v1.5",
generator_model_name_or_path="Meta-Llama-3-8B-Instruct",
model_type="llm_instruct",
gpu_memory_utilization=0.5,
normalize_embeddings=True
)
# Encode with augmentation
query_embeddings = model.encode_queries(
task_instruction="fact verification",
answer_type="verification statement",
queries=["Is machine learning related to AI?"],
temperature=0.7,
max_tokens=200
)
# The prompt template used:
# "Given a retrieval task and a query, your mission is to generate
# a brief {answer_type} for the query in the context of the retrieval task.
# Task: {task_instruction}
# Query: {query}"
# Final embedding = 0.8 * original_query + 0.2 * augmentation
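The template above can be filled with standard string formatting. The template text is taken from the comments; using `str.format` for substitution is an assumption about how the placeholders are resolved.

```python
# Prompt template as quoted in the documentation above.
PROMPT_TEMPLATE = (
    "Given a retrieval task and a query, your mission is to generate "
    "a brief {answer_type} for the query in the context of the retrieval task.\n"
    "Task: {task_instruction}\n"
    "Query: {query}"
)

# Fill the placeholders for the fact-verification example shown earlier.
prompt = PROMPT_TEMPLATE.format(
    answer_type="verification statement",
    task_instruction="fact verification",
    query="Is machine learning related to AI?",
)
```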