Principle: ContextualAI HALOs Supervised Finetuning
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, NLP, Training |
| Last Updated | 2026-02-08 03:00 GMT |
Overview
A training method that adapts a pre-trained language model to follow instructions by maximizing the log-likelihood of desired output sequences given input prompts.
Description
Supervised Fine-Tuning (SFT) is the standard first step in the LLM alignment pipeline. Given a pre-trained language model and a dataset of (prompt, desired_response) pairs, SFT trains the model to generate the desired response by minimizing the negative log-likelihood (NLL) of the target tokens. Unlike pre-training, which operates on raw text, SFT conditions generation on instructional prompts, teaching the model to act as a helpful assistant.
SFT serves as the foundation upon which preference-based alignment methods (DPO, KTO, GRPO, PPO) are subsequently applied. The quality of the SFT stage significantly affects downstream alignment performance, as it establishes the model's basic instruction-following capability.
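To make this concrete, here is a minimal sketch of a single SFT training step in PyTorch with Hugging Face Transformers. The checkpoint (`gpt2`) and the (prompt, response) pair are illustrative placeholders, not part of the method described above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public checkpoint used purely for illustration.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical (prompt, desired_response) pair.
prompt = "Summarize: The cat sat on the mat.\n"
response = "A cat rested on a mat."

# Tokenize prompt and response separately so we know where the target starts.
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
response_ids = tokenizer(response + tokenizer.eos_token, return_tensors="pt").input_ids

input_ids = torch.cat([prompt_ids, response_ids], dim=1)
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100  # mask prompt tokens out of the loss

# The model shifts labels internally and averages the NLL over the
# unmasked (target) tokens -- the SFT objective.
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```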
Usage
Use SFT as the first training step before applying any preference alignment method. SFT is appropriate when you have a pre-trained base model (e.g., Llama, Gemma, Mistral) and a dataset of high-quality instruction-response pairs. The resulting SFT checkpoint is used as both the starting policy and the reference model for subsequent alignment training.
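A sketch of the dual role the SFT checkpoint plays in subsequent alignment training (e.g., DPO), assuming a hypothetical local path `checkpoints/sft-model`:

```python
from transformers import AutoModelForCausalLM

sft_path = "checkpoints/sft-model"  # hypothetical SFT output directory

# The same weights initialize two models: the policy, which is trained
# further, and the reference, which stays frozen during alignment.
policy = AutoModelForCausalLM.from_pretrained(sft_path)
reference = AutoModelForCausalLM.from_pretrained(sft_path)

reference.eval()
for p in reference.parameters():
    p.requires_grad_(False)
```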
Theoretical Basis
The SFT loss is the standard autoregressive language modeling objective restricted to the target tokens:

$$\mathcal{L}_{\mathrm{SFT}}(\theta) = -\frac{1}{|y|} \sum_{t=1}^{|y|} \log \pi_\theta\left(y_t \mid x,\, y_{<t}\right)$$

Where:
- $x$ is the input prompt
- $y$ is the target response, with tokens $y_1, \dots, y_{|y|}$
- $\theta$ are the model parameters
- The loss is normalized by the number of target tokens, $|y|$
Only the target response tokens contribute to the loss; prompt tokens are masked out by setting their label to -100, the ignore index used by PyTorch's cross-entropy loss.
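The following sketch (toy tensor shapes, random logits, all values hypothetical) shows that PyTorch's `cross_entropy` with `ignore_index=-100` computes exactly this token-normalized NLL over the unmasked target positions:

```python
import torch
import torch.nn.functional as F

# Toy example: batch of 1, sequence length 6, vocabulary of 10.
logits = torch.randn(1, 6, 10)                     # stand-in for model outputs
labels = torch.tensor([[-100, -100, 3, 7, 1, 9]])  # first two are prompt tokens

# Shift so position t predicts token t+1, as in causal LM training.
shift_logits = logits[:, :-1, :]
shift_labels = labels[:, 1:]

# cross_entropy skips positions labeled -100 and averages the NLL
# over the remaining target tokens.
loss = F.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)),
    shift_labels.reshape(-1),
    ignore_index=-100,
)

# Equivalent manual computation: -(1/|y|) * sum_t log pi(y_t | x, y_<t).
log_probs = F.log_softmax(shift_logits, dim=-1)
mask = shift_labels != -100
target_logp = log_probs.gather(-1, shift_labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
manual = -(target_logp * mask).sum() / mask.sum()
assert torch.allclose(loss, manual)
```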