
Implementation:InternLM Lmdeploy Autoget Backend

From Leeroopedia


Knowledge Sources
Domains LLM_Inference, Architecture_Detection
Last Updated 2026-02-07 15:00 GMT

Overview

A concrete tool, provided by the LMDeploy library, for automatically detecting a model's architecture and selecting the optimal inference backend.

Description

The autoget_backend() and autoget_backend_config() functions read a model's HuggingFace configuration to determine which inference backend (TurboMind or PyTorch) supports the architecture. The companion check_vl_llm() function detects vision-language models. These functions are called internally during pipeline initialization.

Usage

Called automatically by the pipeline() factory function. Invoke them directly when you need to determine backend compatibility programmatically before creating a pipeline.
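The detection step can be illustrated with a minimal, self-contained sketch: read the architectures field from the model's HuggingFace config.json and map it to a backend, falling back to PyTorch when the architecture is not recognized. The supported-architecture set and helper name below are hypothetical stand-ins for illustration, not LMDeploy's actual registry or implementation.

```python
import json
from pathlib import Path

# Hypothetical subset of TurboMind-supported architectures; LMDeploy's
# real registry is internal and considerably larger.
TURBOMIND_SUPPORTED = {'InternLM2ForCausalLM', 'LlamaForCausalLM'}

def sketch_autoget_backend(model_dir: str) -> str:
    """Pick a backend from config.json, mimicking the detection idea."""
    config = json.loads(Path(model_dir, 'config.json').read_text())
    archs = config.get('architectures', [])
    if archs and archs[0] in TURBOMIND_SUPPORTED:
        return 'turbomind'
    # Unrecognized architecture: fall back to the PyTorch engine.
    return 'pytorch'
```

The real function also handles remote HuggingFace model IDs (downloading the config first), which this sketch omits.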

Code Reference

Source Location

  • Repository: lmdeploy
  • File: lmdeploy/archs.py
  • Lines: L13-55 (autoget_backend), L58-93 (autoget_backend_config), L96-141 (get_task, check_vl_llm)

Signature

def autoget_backend(model_path: str) -> Literal['turbomind', 'pytorch']:
    """Auto-detect the best backend for the given model."""
    ...

def autoget_backend_config(
    model_path: str,
    backend_config: Optional[Union[TurbomindEngineConfig,
                                    PytorchEngineConfig]] = None
) -> Tuple[str, Union[TurbomindEngineConfig, PytorchEngineConfig]]:
    """Get backend name and validated config for a model."""
    ...

Import

from lmdeploy.archs import autoget_backend, autoget_backend_config

I/O Contract

Inputs

  • model_path (str, required): HuggingFace model ID or local directory
  • backend_config (TurbomindEngineConfig or PytorchEngineConfig, optional): user-specified config that overrides auto-detection

Outputs

  • backend (str): 'turbomind' or 'pytorch'
  • backend_config (EngineConfig): validated engine configuration

Usage Examples

Check Backend Before Pipeline

from lmdeploy.archs import autoget_backend

# Check which backend will be used
backend = autoget_backend('internlm/internlm2_5-7b-chat')
print(f"Backend: {backend}")  # 'turbomind'

backend = autoget_backend('some/unsupported-model')
print(f"Backend: {backend}")  # 'pytorch' (fallback)
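The I/O contract for autoget_backend_config can be sketched in the same spirit: when the caller supplies a backend_config, its type pins the backend choice; otherwise a default config is built for the auto-detected backend. The dataclasses and helper below are illustrative stand-ins, not LMDeploy's actual classes or logic.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Union

# Stand-ins for LMDeploy's engine config classes (illustrative only).
@dataclass
class TurbomindEngineConfig:
    session_len: Optional[int] = None

@dataclass
class PytorchEngineConfig:
    session_len: Optional[int] = None

EngineConfig = Union[TurbomindEngineConfig, PytorchEngineConfig]

def sketch_autoget_backend_config(
        detected_backend: str,
        backend_config: Optional[EngineConfig] = None,
) -> Tuple[str, EngineConfig]:
    """Return (backend, config); a user-supplied config overrides detection."""
    if isinstance(backend_config, PytorchEngineConfig):
        return 'pytorch', backend_config
    if isinstance(backend_config, TurbomindEngineConfig):
        return 'turbomind', backend_config
    # No user config: build a default for the auto-detected backend.
    if detected_backend == 'turbomind':
        return 'turbomind', TurbomindEngineConfig()
    return 'pytorch', PytorchEngineConfig()
```

Passing a PytorchEngineConfig thus forces the PyTorch backend even for a model that TurboMind supports, matching the "overrides auto-detection" behavior described in the I/O contract above.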

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
