
Environment: mlc-ai/mlc-llm TVM Runtime Environment

From Leeroopedia


Knowledge Sources
Domains: Infrastructure, Compiler
Last Updated: 2026-02-09 19:00 GMT

Overview

The Apache TVM compiler and runtime environment serves as the core compilation and execution backbone for all MLC-LLM model deployments across all supported platforms.

Description

MLC-LLM is built on top of the Apache TVM machine learning compiler stack. The TVM runtime provides the cross-platform execution layer (Relax VM) that runs compiled model libraries on all supported backends (CUDA, Metal, OpenCL, Vulkan, WebGPU, CPU). The TVM compiler provides the IR (Relax + TIR), optimization passes, and code generation for target-specific kernels. The `apache-tvm-ffi` package provides the Python FFI bindings, while the C++ runtime is linked into the `mlc_llm` shared library via CMake.

Usage

This environment is always required for any MLC-LLM operation. It is the foundational dependency for model compilation, weight conversion, engine initialization, and inference. Without TVM, no MLC-LLM functionality is available.
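Because TVM is the foundational dependency, a first diagnostic step is checking which TVM-related packages are importable. The following is an illustrative helper, not part of MLC-LLM itself; the module name `tvm_ffi` for the `apache-tvm-ffi` package is an assumption, and `probe_tvm_components` is a hypothetical name.

```python
import importlib.util

def probe_tvm_components():
    """Report which TVM-related modules are importable in this environment.

    Illustrative helper only: it checks module availability, which is
    enough to tell a runtime-only install (apache-tvm-ffi, assumed to
    provide `tvm_ffi`) from a full compiler install (`tvm`).
    """
    names = ("tvm", "tvm_ffi", "mlc_llm")
    return {name: importlib.util.find_spec(name) is not None
            for name in names}

status = probe_tvm_components()
for name, available in status.items():
    print(f"{name}: {'available' if available else 'missing'}")
```

A missing `tvm` with a present `tvm_ffi` indicates a runtime-only environment, which can run precompiled model libraries but not compile new ones.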

System Requirements

  • OS: Linux, macOS, Windows, Android, iOS (cross-platform)
  • Hardware: CPU minimum; GPU for acceleration (see platform-specific environments)
  • Build System: CMake < 4.0 and a C++17 compiler (required for building from source)
  • Python: >= 3.9 (required for the Python API)
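The two version constraints above (Python >= 3.9, CMake < 4.0) can be expressed as a simple tuple comparison. This is a sketch for illustration; `meets_requirements` is a hypothetical helper, not part of the MLC-LLM build system.

```python
import sys

def meets_requirements(python_version, cmake_version):
    """Check the documented constraints: Python >= 3.9, CMake < 4.0.

    Versions are (major, minor) tuples. Hypothetical helper for
    illustration only.
    """
    return python_version >= (3, 9) and cmake_version < (4, 0)

# The running interpreter can be checked directly:
print("Python OK:", sys.version_info[:2] >= (3, 9))

# CMake would normally be queried via `cmake --version`; here we just
# demonstrate the constraint with example values.
print(meets_requirements((3, 11), (3, 28)))  # both constraints satisfied
print(meets_requirements((3, 11), (4, 0)))   # CMake 4.0 is too new
```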

Dependencies

System Packages

  • `cmake` < 4.0
  • C++17 compatible compiler (gcc, clang, MSVC)
  • `git`
  • `bzip2`

Build Dependencies

  • `scikit-build-core` >= 0.10.0 (Python wheel build)

Python Packages

  • `apache-tvm-ffi` (TVM FFI bindings, the core runtime interface)
  • `tvm` (full TVM package, for compilation)

Credentials

No credentials required.

Quick Install

# Install TVM FFI (runtime only)
pip install apache-tvm-ffi

# Or build MLC-LLM from source (includes TVM)
git clone --recursive https://github.com/mlc-ai/mlc-llm
cd mlc-llm
pip install .

Code Evidence

TVM is loaded at module initialization in `base.py:7-24`:

import ctypes
import os
import sys

import tvm
import tvm.base

from . import libinfo

SKIP_LOADING_MLCLLM_SO = os.environ.get("SKIP_LOADING_MLCLLM_SO", "0")

def _load_mlc_llm_lib():
    """Load MLC LLM lib"""
    if sys.platform.startswith("win32") and sys.version_info >= (3, 8):
        for path in libinfo.get_dll_directories():
            os.add_dll_directory(path)
    lib_name = "mlc_llm" if tvm.base._RUNTIME_ONLY else "mlc_llm_module"
    lib_path = libinfo.find_lib_path(lib_name, optional=False)
    return ctypes.CDLL(lib_path[0]), lib_path[0]

TVM compilation pipeline registration from `pipeline.py:81-94`:

@register_pipeline("mlc_llm")
def _mlc_llm_pipeline(
    target: tvm.target.Target,
    flashinfer: bool = False,
    cublas_gemm: bool = False,
    faster_transformer: bool = False,
    allreduce_strategy: IPCAllReduceStrategyType = IPCAllReduceStrategyType.NONE,
    ...
):

Build system configuration from `pyproject.toml:65-81`:

[build-system]
requires = ["scikit-build-core>=0.10.0"]
build-backend = "scikit_build_core.build"

[tool.scikit-build]
cmake.source-dir = "."
cmake.build-type = "Release"
cmake.args = ["-DMLC_LLM_BUILD_PYTHON_MODULE=ON"]

Common Errors

  • `ImportError: No module named 'tvm'`: TVM is not installed. Install it via `pip install apache-tvm-ffi` or build from source.
  • `Cannot find library: mlc_llm`: the shared library has not been built. Build MLC-LLM from source or install the wheel package.
  • CMake configuration error: the installed CMake is version 4.0 or newer. Install `cmake` < 4.0 (as specified in `build-environment.yaml`).

Compatibility Notes

  • Runtime vs Full TVM: The `apache-tvm-ffi` package provides runtime-only bindings. Full TVM (`tvm` package) is needed for model compilation.
  • Python Version: Requires Python >= 3.9 (specified in `pyproject.toml`).
  • Windows DLL Loading: On Windows with Python >= 3.8, DLL directories are explicitly added via `os.add_dll_directory()`.
  • Library Naming: Runtime-only builds load `mlc_llm`; full builds load `mlc_llm_module`.
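The last two notes can be sketched as small pure functions. This mirrors the logic shown in the `base.py` excerpt above; the function names are hypothetical, and `runtime_only` stands in for `tvm.base._RUNTIME_ONLY`.

```python
import sys

def select_lib_name(runtime_only: bool) -> str:
    """Library naming rule: runtime-only builds load "mlc_llm",
    full builds load "mlc_llm_module". Illustrative sketch only."""
    return "mlc_llm" if runtime_only else "mlc_llm_module"

def needs_dll_directories() -> bool:
    """On Windows with Python >= 3.8, DLL search paths must be added
    explicitly via os.add_dll_directory() before loading the library."""
    return sys.platform.startswith("win32") and sys.version_info >= (3, 8)

print(select_lib_name(True))   # mlc_llm
print(select_lib_name(False))  # mlc_llm_module
print(needs_dll_directories())
```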
