Implementation:PacktPublishing LLM Engineers Handbook QueryExpansion Generate
| Field | Value |
|---|---|
| Type | API Doc |
| Workflow | RAG_Inference |
| Repository | PacktPublishing/LLM-Engineers-Handbook |
| Source | query_expanison.py:L13-38 |
| Implements | Principle:PacktPublishing_LLM_Engineers_Handbook_Query_Expansion |
API Signature
QueryExpansion.generate(self, query: Query, expand_to_n: int = 3) -> list[Query]
Import
from llm_engineering.application.rag.query_expanison import QueryExpansion
Key Code
class QueryExpansion(RAGStep):
    @opik.track(name="QueryExpansion.generate")
    def generate(self, query: Query, expand_to_n: int = 3) -> list[Query]:
        prompt = QueryExpansionTemplate().create_template(expand_to_n=expand_to_n - 1)
        chain = prompt | self._llm
        response = chain.invoke({"question": query.content})
        # (elided) the response is parsed into `result`, a list of query strings
        queries = [
            Query(
                content=q,
                author_id=query.author_id,
                author_full_name=query.author_full_name,
            )
            for q in result
        ]
        queries.append(query)  # Include the original
        return queries
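The parsing step is elided in the excerpt above. A hedged sketch of what it might look like, assuming a newline-delimited LLM response (the actual repository may use a dedicated separator token instead):

```python
def parse_expansions(raw: str) -> list[str]:
    # Hypothetical parser: assumes one expanded query per line.
    # Blank lines and surrounding whitespace are discarded.
    return [line.strip() for line in raw.split("\n") if line.strip()]


raw_response = "How does RAG work?\n\nExplain retrieval-augmented generation\n"
print(parse_expansions(raw_response))
```

The resulting list of strings plays the role of `result` in the excerpt above.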
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| query | Query | (required) | The original user query object |
| expand_to_n | int | 3 | Total number of query variants to produce (including the original) |
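For illustration, a hypothetical call could look like the following. Since the real class requires a configured LLM, the stub below mimics the documented behavior with a minimal `Query` stand-in and canned reformulations; both classes here are illustrative, not the repository's actual implementations:

```python
from dataclasses import dataclass


@dataclass
class Query:
    # Minimal stand-in for the repository's Query model
    content: str
    author_id: str
    author_full_name: str


class StubQueryExpansion:
    # Mimics QueryExpansion.generate without an LLM call:
    # produce expand_to_n - 1 canned reformulations, then append the original.
    def generate(self, query: Query, expand_to_n: int = 3) -> list[Query]:
        expansions = [f"{query.content} (variant {i})" for i in range(1, expand_to_n)]
        queries = [
            Query(content=q, author_id=query.author_id, author_full_name=query.author_full_name)
            for q in expansions
        ]
        queries.append(query)  # Original is always included
        return queries


original = Query("What is RAG?", author_id="42", author_full_name="Jane Doe")
result = StubQueryExpansion().generate(original, expand_to_n=3)
print(len(result))            # 3 queries total
print(result[-1] is original)  # True: the original comes last
```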
Inputs and Outputs
Inputs:
- query: Query - The original query, with content, author_id, and author_full_name fields
- expand_to_n: int - Total number of query variants to produce, including the original (default: 3)
Outputs:
- list[Query] - A list of Query objects including:
  - The original query (appended last)
  - N-1 LLM-generated reformulations, each carrying forward the same author_id and author_full_name metadata
How It Works
- A QueryExpansionTemplate generates a prompt instructing the LLM to create expand_to_n - 1 alternative phrasings of the original question
- The prompt is chained with the LLM via LangChain's pipe operator
- The LLM response is parsed to extract individual query strings
- Each expanded query string is wrapped in a new Query object, preserving the author metadata from the original query
- The original query is appended to the list to ensure it is always included in retrieval
- All queries are returned for parallel vector search
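The steps above can be sketched end to end. Everything in this block is illustrative: the separator token, prompt wording, and `fake_llm` stand in for the repository's QueryExpansionTemplate and ChatOpenAI call:

```python
from dataclasses import dataclass, replace


@dataclass
class Query:
    content: str
    author_id: str = "a1"
    author_full_name: str = "Ada Lovelace"


SEPARATOR = "#next-question#"  # hypothetical separator token


def fake_llm(prompt: str) -> str:
    # Stand-in for the ChatOpenAI call: returns two canned reformulations
    # joined by the separator, as the expansion prompt requests.
    return f"What does RAG mean?{SEPARATOR}How is retrieval-augmented generation used?"


def expand(query: Query, expand_to_n: int = 3) -> list[Query]:
    # 1. Build a prompt asking for expand_to_n - 1 alternative phrasings
    prompt = (
        f"Generate {expand_to_n - 1} different versions of: {query.content}\n"
        f"Separate them with '{SEPARATOR}'."
    )
    # 2. Invoke the (fake) LLM and parse the response on the separator
    raw = fake_llm(prompt)
    contents = [c.strip() for c in raw.split(SEPARATOR) if c.strip()]
    # 3. Wrap each string in a Query, carrying forward the author metadata
    queries = [replace(query, content=c) for c in contents]
    # 4. Append the original so it always participates in retrieval
    queries.append(query)
    return queries


result = expand(Query("What is RAG?"))
print([q.content for q in result])  # two reformulations, then the original
```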
External Dependencies
- langchain_openai (ChatOpenAI) - LLM used for generating query expansions
- opik - Observability and tracing decorator
- loguru - Structured logging
Source File
llm_engineering/application/rag/query_expanison.py (lines 13-38)
Note: The filename contains a typo (query_expanison.py instead of query_expansion.py) which is preserved from the original repository.