Implementation: BerriAI LiteLLM Create Fine Tuning Job
| Knowledge Sources | Domains | Last Updated |
|---|---|---|
| BerriAI/litellm | Fine-Tuning, API Integration, Model Customization | 2026-02-15 |
Overview
A concrete LiteLLM tool for creating fine-tuning jobs across multiple LLM providers.
Description
The create_fine_tuning_job function in LiteLLM initiates a fine-tuning process that customizes a base model using an uploaded training dataset. It accepts a model name, training file ID, optional hyperparameters, and a provider specification, then routes the request to the appropriate provider-specific handler (OpenAI, Azure OpenAI, or Vertex AI). The function constructs a typed FineTuningJobCreate payload, resolves API credentials through a priority chain, and returns a normalized LiteLLMFineTuningJob object containing the job ID, status, and metadata. Both synchronous (create_fine_tuning_job) and asynchronous (acreate_fine_tuning_job) variants are available.
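The credential priority chain mentioned above can be pictured roughly as below. This is an illustrative sketch, not litellm's actual resolution code: the helper name and the exact chain (explicit argument, then a module-level setting, then an environment variable) are assumptions for demonstration.

```python
import os
from typing import Optional

# Stand-in for a module-level default such as litellm.api_key (assumption).
MODULE_LEVEL_API_KEY: Optional[str] = None

def resolve_api_key(explicit_key: Optional[str] = None) -> Optional[str]:
    """Illustrative priority chain: explicit kwarg > module setting > env var."""
    if explicit_key is not None:
        return explicit_key
    if MODULE_LEVEL_API_KEY is not None:
        return MODULE_LEVEL_API_KEY
    return os.environ.get("OPENAI_API_KEY")

# An explicitly passed key always wins over the environment.
os.environ["OPENAI_API_KEY"] = "sk-from-env"
print(resolve_api_key("sk-explicit"))  # sk-explicit
print(resolve_api_key())               # sk-from-env
```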
Usage
Use create_fine_tuning_job when:
- Starting a fine-tuning job after uploading training data via create_file.
- Customizing a base model with domain-specific examples.
- Building provider-agnostic fine-tuning workflows that can target OpenAI, Azure OpenAI, or Vertex AI.
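The provider-agnostic routing these workflows rely on boils down to a dispatch on custom_llm_provider. The handler names below are placeholders rather than litellm's internal classes; this is only a sketch of the routing idea.

```python
from typing import Callable, Dict

# Placeholder handlers; in litellm these are provider-specific classes.
def _openai_handler(model: str, training_file: str, **kwargs) -> str:
    return f"openai:{model}:{training_file}"

def _azure_handler(model: str, training_file: str, **kwargs) -> str:
    return f"azure:{model}:{training_file}"

def _vertex_handler(model: str, training_file: str, **kwargs) -> str:
    return f"vertex_ai:{model}:{training_file}"

# One entry per supported value of custom_llm_provider.
_HANDLERS: Dict[str, Callable[..., str]] = {
    "openai": _openai_handler,
    "azure": _azure_handler,
    "vertex_ai": _vertex_handler,
}

def route(custom_llm_provider: str, model: str, training_file: str, **kwargs) -> str:
    """Dispatch to the handler for the requested provider."""
    try:
        handler = _HANDLERS[custom_llm_provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {custom_llm_provider!r}")
    return handler(model, training_file, **kwargs)

print(route("openai", "gpt-3.5-turbo", "file-abc123"))  # openai:gpt-3.5-turbo:file-abc123
```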
Code Reference
Source Location
litellm/fine_tuning/main.py (lines 37-285)
Signature
```python
# Async version
async def acreate_fine_tuning_job(
    model: str,
    training_file: str,
    hyperparameters: Optional[dict] = {},
    suffix: Optional[str] = None,
    validation_file: Optional[str] = None,
    integrations: Optional[List[str]] = None,
    seed: Optional[int] = None,
    custom_llm_provider: Literal["openai", "azure", "vertex_ai"] = "openai",
    extra_headers: Optional[Dict[str, str]] = None,
    extra_body: Optional[Dict[str, str]] = None,
    **kwargs,
) -> LiteLLMFineTuningJob:

# Sync version
def create_fine_tuning_job(
    model: str,
    training_file: str,
    hyperparameters: Optional[dict] = {},
    suffix: Optional[str] = None,
    validation_file: Optional[str] = None,
    integrations: Optional[List[str]] = None,
    seed: Optional[int] = None,
    custom_llm_provider: Literal["openai", "azure", "vertex_ai"] = "openai",
    extra_headers: Optional[Dict[str, str]] = None,
    extra_body: Optional[Dict[str, str]] = None,
    **kwargs,
) -> Union[LiteLLMFineTuningJob, Coroutine[Any, Any, LiteLLMFineTuningJob]]:
```
Import
```python
from litellm.fine_tuning.main import create_fine_tuning_job, acreate_fine_tuning_job
```
I/O Contract
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | str | Yes | The name of the base model to fine-tune (e.g., "gpt-3.5-turbo", "gpt-4o-mini-2024-07-18"). |
| training_file | str | Yes | The ID of an uploaded file containing training data (obtained from create_file). |
| hyperparameters | Optional[dict] | No | Dictionary with keys "batch_size", "learning_rate_multiplier", and "n_epochs". Defaults to an empty dict. |
| suffix | Optional[str] | No | A string (up to 18 characters) appended to the fine-tuned model name for identification. |
| validation_file | Optional[str] | No | The ID of an uploaded file containing validation data for evaluating training quality. |
| integrations | Optional[List[str]] | No | A list of integrations to enable for the fine-tuning job. |
| seed | Optional[int] | No | Seed for reproducibility of the training job. |
| custom_llm_provider | Literal["openai", "azure", "vertex_ai"] | No | The LLM provider to route to. Defaults to "openai". |
| extra_headers | Optional[Dict[str, str]] | No | Additional HTTP headers to include in the request. |
| extra_body | Optional[Dict[str, str]] | No | Additional fields to include in the request body. |
| **kwargs | various | No | Additional parameters, including api_key, api_base, api_version, timeout, organization, and provider-specific options. |
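Since hyperparameters is a plain dict, a light client-side check of its keys can catch typos before the request is sent. The validator below is a hypothetical helper, not part of litellm:

```python
# Keys accepted by the hyperparameters dict, per the table above.
ALLOWED_HYPERPARAMETER_KEYS = {"batch_size", "learning_rate_multiplier", "n_epochs"}

def check_hyperparameters(hyperparameters: dict) -> dict:
    """Reject unrecognized hyperparameter keys (illustrative helper)."""
    unknown = set(hyperparameters) - ALLOWED_HYPERPARAMETER_KEYS
    if unknown:
        raise ValueError(f"Unknown hyperparameter keys: {sorted(unknown)}")
    return hyperparameters

check_hyperparameters({"n_epochs": 3, "batch_size": "auto"})  # passes
# check_hyperparameters({"epochs": 3}) would raise ValueError ("epochs" vs "n_epochs")
```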
Outputs
| Return Type | Description |
|---|---|
| LiteLLMFineTuningJob | A normalized fine-tuning job object (extends OpenAI's FineTuningJob) containing: id (job identifier), model (base model), status (e.g., "validating_files", "queued", "running", "succeeded", "failed"), training_file, created_at, hyperparameters, fine_tuned_model (populated upon completion), and error (populated on failure). Also includes a _hidden_params dict for internal metadata and an optional seed field. |
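Because status moves through a small set of values, callers typically poll until the job reaches a terminal state. The helper and stub object below are illustrative; in real code the object would be the LiteLLMFineTuningJob returned by create_fine_tuning_job, and the exact set of terminal statuses is an assumption based on the values listed above.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed terminal states; non-terminal states include "queued" and "running".
TERMINAL_STATUSES = {"succeeded", "failed", "cancelled"}

@dataclass
class StubJob:
    """Minimal stand-in for LiteLLMFineTuningJob, for illustration only."""
    id: str
    status: str
    fine_tuned_model: Optional[str] = None

def is_finished(job: StubJob) -> bool:
    """True once the job has reached a terminal state."""
    return job.status in TERMINAL_STATUSES

print(is_finished(StubJob(id="ftjob-abc123", status="running")))    # False
print(is_finished(StubJob(id="ftjob-abc123", status="succeeded")))  # True
```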
Usage Examples
Create a basic fine-tuning job with OpenAI
```python
from litellm.fine_tuning.main import create_fine_tuning_job

job = create_fine_tuning_job(
    model="gpt-3.5-turbo",
    training_file="file-abc123",
    custom_llm_provider="openai",
)
print(job.id)      # e.g., "ftjob-abc123"
print(job.status)  # e.g., "validating_files"
print(job.model)   # "gpt-3.5-turbo"
```
Create a job with hyperparameters and validation
```python
from litellm.fine_tuning.main import create_fine_tuning_job

job = create_fine_tuning_job(
    model="gpt-4o-mini-2024-07-18",
    training_file="file-abc123",
    hyperparameters={
        "batch_size": "auto",
        "learning_rate_multiplier": 0.1,
        "n_epochs": 3,
    },
    suffix="my-custom-model",
    validation_file="file-xyz789",
    seed=42,
    custom_llm_provider="openai",
)
print(job.id)
print(job.hyperparameters)
```
Async job creation with Azure OpenAI
```python
import asyncio
from litellm.fine_tuning.main import acreate_fine_tuning_job

async def start_fine_tuning():
    job = await acreate_fine_tuning_job(
        model="gpt-35-turbo",
        training_file="file-abc123",
        custom_llm_provider="azure",
        api_base="https://my-resource.openai.azure.com/",
        api_key="my-azure-api-key",
        api_version="2024-02-01",
    )
    return job

result = asyncio.run(start_fine_tuning())
print(result.id)
print(result.status)
```
Create a Vertex AI fine-tuning job
```python
from litellm.fine_tuning.main import create_fine_tuning_job

job = create_fine_tuning_job(
    model="gemini-1.0-pro-002",
    training_file="file-vertex-123",
    custom_llm_provider="vertex_ai",
    vertex_project="my-gcp-project",
    vertex_location="us-central1",
)
print(job.id)
print(job.status)
```