Implementation:InternLM LMDeploy Pipeline Factory AWQ
Knowledge Sources
| Field | Value |
|---|---|
| Domains | LLM_Inference, Quantization |
| Last Updated | 2026-02-07 15:00 GMT |
Overview
Concrete tool for creating inference pipelines for AWQ- or GPTQ-quantized models using the TurboMind backend of the LMDeploy library.
Description
This is the pipeline() factory function configured for AWQ or GPTQ quantized model inference. The critical setting is model_format='awq' (or 'gptq') in TurbomindEngineConfig, so the engine loads the weights in their quantized layout and dispatches the INT4 GEMM kernels.
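As a rough intuition for what an AWQ/GPTQ checkpoint stores, the sketch below (plain Python, illustrative only; not LMDeploy code) shows group-wise INT4 quantization: each group of weights shares a scale and zero point, and the backend's INT4 GEMM kernels dequantize these on the fly.

```python
# Illustrative group-wise INT4 quantization (toy sketch, not LMDeploy's kernels).
# AWQ/GPTQ checkpoints store INT4 codes plus per-group scales and zero points.

GROUP_SIZE = 4  # real AWQ models typically use a group size of 64 or 128

def quantize_group(weights):
    """Quantize one group of floats to unsigned INT4 with a shared scale/zero."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0   # 4 bits -> 16 levels (codes 0..15)
    zero = round(-lo / scale)       # shared zero point for the group
    q = [max(0, min(15, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize_group(q, scale, zero):
    """Reconstruct approximate float weights from INT4 codes."""
    return [(v - zero) * scale for v in q]

weights = [0.12, -0.50, 0.33, 0.07]
q, scale, zero = quantize_group(weights)
approx = dequantize_group(q, scale, zero)
print(q)       # INT4 codes in 0..15
print(approx)  # reconstruction close to the original weights
```

This is only the storage format; the point of setting model_format='awq' is that TurboMind keeps the weights in this packed form and fuses the dequantization into its GEMM kernels instead of expanding them back to FP16.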
Usage
Use this after quantizing a model with auto_awq (or after obtaining a pre-quantized AWQ/GPTQ model). Always specify model_format in the backend configuration.
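As one possible preceding step, a sketch of the quantization command (flag names follow LMDeploy's documented lite auto_awq CLI; verify against `lmdeploy lite auto_awq --help` for your installed version, and the model ID and output directory here are placeholders):

```shell
# Quantize a HF-format model to 4-bit AWQ before loading it with pipeline()
lmdeploy lite auto_awq internlm/internlm2_5-7b-chat \
  --w-bits 4 \
  --w-group-size 128 \
  --work-dir ./internlm2_5-7b-4bit
```

The resulting --work-dir is what you then pass as model_path to the factory below.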
Code Reference
Source Location
- Repository: lmdeploy
- File: lmdeploy/api.py L15-74, lmdeploy/messages.py L183-295
Signature
```python
# Same pipeline() factory, with AWQ-specific backend configuration
pipe = pipeline(
    model_path,
    backend_config=TurbomindEngineConfig(model_format='awq'),
)
```
Import
```python
from lmdeploy import pipeline, TurbomindEngineConfig
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model_path | str | Yes | Path to the AWQ- or GPTQ-quantized model directory |
| backend_config | TurbomindEngineConfig | Yes | Must have model_format='awq' or 'gptq' |
Outputs
| Name | Type | Description |
|---|---|---|
| Pipeline | Pipeline | Inference pipeline with INT4 GEMM kernels active |
Usage Examples
```python
from lmdeploy import pipeline, TurbomindEngineConfig

# Backend configuration for an AWQ-quantized model
backend_config = TurbomindEngineConfig(
    model_format='awq',         # tell TurboMind the weights are AWQ INT4
    tp=1,                       # tensor parallelism degree (number of GPUs)
    session_len=4096,           # maximum context length
    cache_max_entry_count=0.9,  # fraction of GPU memory reserved for the KV cache
)

pipe = pipeline('./internlm2_5-7b-4bit', backend_config=backend_config)

# Batched inference: one response per prompt
responses = pipe(['Explain gravity', 'What is DNA?'])
for r in responses:
    print(r.text)

pipe.close()
```
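Sampling behavior can be tuned per call with GenerationConfig from the same package (a sketch; field names follow LMDeploy's documented GenerationConfig, but defaults vary by version, and the model path is a placeholder):

```python
from lmdeploy import GenerationConfig, TurbomindEngineConfig, pipeline

pipe = pipeline(
    './internlm2_5-7b-4bit',
    backend_config=TurbomindEngineConfig(model_format='awq'),
)

# Per-call sampling settings, passed alongside the prompts
gen_config = GenerationConfig(
    max_new_tokens=256,
    top_p=0.8,
    temperature=0.7,
)
responses = pipe(['Explain gravity'], gen_config=gen_config)
print(responses[0].text)

pipe.close()
```

The engine-level settings (model_format, session_len, KV-cache budget) stay in TurbomindEngineConfig; only decoding parameters move into GenerationConfig.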
Related Pages
Implements Principle