
Implementation: Axolotl Save Trained Model (axolotl-ai-cloud/axolotl)

From Leeroopedia


Knowledge Sources
Domains: Model_Persistence, Training_Pipeline
Last Updated: 2026-02-06 23:00 GMT

Overview

Concrete tool for saving trained model weights with support for adapters, distributed training, and multiple serialization formats provided by the Axolotl framework.

Description

The save_trained_model function handles the complexity of saving models trained under various configurations. It detects the training mode (adapter vs. full, distributed vs. single GPU) and applies the appropriate saving strategy. For adapter training, it saves only the adapter weights. For FSDP training, it handles state dict gathering. For DeepSpeed, it coordinates with the DeepSpeed engine. The function also handles safe serialization, tokenizer saving, and cleanup of temporary files.
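The decision order described above can be sketched as a small dispatcher. This is an illustrative sketch only, not the actual code in src/axolotl/train.py; the helper name `pick_save_strategy` and the plain-dict config keys are assumptions for the example.

```python
def pick_save_strategy(cfg: dict) -> str:
    """Return which saving path a config would take (hypothetical helper).

    Mirrors the order described above: adapter training first, then
    FSDP state-dict gathering, then DeepSpeed coordination, else a
    plain single-GPU save.
    """
    if cfg.get("adapter") in ("lora", "qlora"):
        return "adapter"     # save only the adapter weights
    if cfg.get("fsdp_config"):
        return "fsdp"        # gather the sharded state dict before saving
    if cfg.get("deepspeed"):
        return "deepspeed"   # delegate to the DeepSpeed engine
    return "full"            # plain save of the full model

print(pick_save_strategy({"adapter": "lora"}))  # adapter
```

The real function additionally handles safe serialization, tokenizer saving, and temporary-file cleanup, which this sketch omits.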

Usage

This function is called automatically at the end of Axolotl training. It can also be called manually to save model checkpoints at custom points.
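For manual checkpointing, one pattern is a small wrapper that calls the save function at a fixed interval inside a custom loop. `SAVE_EVERY` and `maybe_save` below are illustrative, not part of the Axolotl API; in real code the `save_fn` argument would be `axolotl.train.save_trained_model`.

```python
SAVE_EVERY = 500  # assumed checkpoint interval in steps, not an Axolotl option

def maybe_save(step, cfg, trainer, model, save_fn):
    """Invoke save_fn(cfg, trainer, model) every SAVE_EVERY steps.

    Hypothetical wrapper for saving checkpoints at custom points
    during a manual training loop.
    """
    if step > 0 and step % SAVE_EVERY == 0:
        save_fn(cfg, trainer, model)
        return True
    return False
```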

Code Reference

Source Location

  • Repository: axolotl
  • File: src/axolotl/train.py
  • Lines: 213–340

Signature

def save_trained_model(
    cfg: DictDefault,
    trainer: Any,
    model: PreTrainedModel,
) -> None:
    """Save a trained model to disk.

    Args:
        cfg: Configuration with output_dir, adapter type, distributed settings.
        trainer: Trainer instance (for distributed save coordination).
        model: The trained model to save.
    """

Import

from axolotl.train import save_trained_model

I/O Contract

Inputs

  • cfg (DictDefault, required): Config with output_dir, adapter type (lora/qlora/None), fsdp_config, and DeepSpeed settings
  • trainer (Any, required): Trainer instance used for distributed save coordination
  • model (PreTrainedModel, required): The trained model (a PeftModel for adapter training)

Outputs

  • files (Directory): Model weights written to cfg.output_dir (adapter_model.safetensors for LoRA, model.safetensors for full fine-tuning)
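The mapping from training mode to the main weights file can be expressed as a small helper. This is illustrative only; the helper name `expected_weight_file` is an assumption, and real runs typically also write a config and tokenizer files alongside the weights.

```python
from pathlib import Path

def expected_weight_file(output_dir, adapter):
    """Return the path where the main weights file is expected.

    Follows the output contract above: adapter_model.safetensors for
    LoRA/QLoRA training, model.safetensors for a full fine-tune.
    """
    name = (
        "adapter_model.safetensors"
        if adapter in ("lora", "qlora")
        else "model.safetensors"
    )
    return Path(output_dir) / name

print(expected_weight_file("outputs/run1", "lora"))
```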

Usage Examples

Saving After Training

from axolotl.train import save_trained_model

# After training completes
save_trained_model(cfg, trainer, model)
# For adapter training: weights saved to cfg.output_dir/adapter_model.safetensors
# Tokenizer files (e.g. tokenizer.json) are also saved to cfg.output_dir

Related Pages

Implements Principle

Requires Environment
