Implementation: Seldon Model Load (SeldonIO Seldon Core)
| Property | Value |
|---|---|
| Implementation Name | Seldon_Model_Load |
| Type | External Tool Doc |
| Overview | Concrete CLI tool for loading ML models onto Seldon Core 2 inference servers. |
| Implements Principle | SeldonIO_Seldon_core_Model_Deployment_Execution |
| Workflow | Model_Deployment |
| Domains | MLOps, Kubernetes |
| Source | docs-gb/cli/seldon_model_load.md:L1-30 |
| External Dependencies | seldon CLI, kubectl |
| Last Updated | 2026-02-13 00:00 GMT |
Description
The seldon model load command is the primary CLI interface for deploying models onto Seldon Core 2 inference servers. It reads a Model CRD YAML file and submits it to the Seldon scheduler, which initiates model loading on a compatible Server. Alternatively, the same effect can be achieved with kubectl apply -f, which submits the Model resource through the Kubernetes API.
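To make the input concrete, the sketch below writes a minimal Model manifest of the kind the command expects. The apiVersion and kind come from the I/O contract below; the model name, storageUri, and requirements values are illustrative assumptions, not a real artifact location.

```shell
# Write a minimal, illustrative Model manifest. The storageUri and
# requirements values are placeholders for a real model artifact.
cat > iris-model.yaml <<'EOF'
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://example-bucket/iris-sklearn"
  requirements:
  - sklearn
EOF

# Submit it (requires a running Seldon Core 2 scheduler; not run here):
# seldon model load -f iris-model.yaml
```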
Code Reference
Source: docs-gb/cli/seldon_model_load.md:L1-30
CLI Signature:

```shell
seldon model load -f <model.yaml> [--scheduler-host string] [--force]
```

Alternative (kubectl):

```shell
kubectl apply -f <model.yaml>
```
Key Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| -f / --file-path | string | (required) | Path to the Model CRD YAML manifest file |
| --scheduler-host | string | "0.0.0.0:9004" | Address of the Seldon scheduler gRPC endpoint |
| --force | boolean | false | Force control plane mode for the load operation |
| -h / --help | boolean | false | Display help information for the command |
I/O Contract
Inputs
| Input | Type | Description |
|---|---|---|
| Model CRD YAML file | File path | A valid Seldon Core 2 Model manifest (apiVersion: mlops.seldon.io/v1alpha1, kind: Model) |
| Running Seldon Core 2 cluster | Infrastructure | A Kubernetes cluster with Seldon Core 2 operator and scheduler running |
Outputs
| Output | Type | Description |
|---|---|---|
| Model registration | Scheduler event | Model registered with the Seldon scheduler, loading initiated on a matching Server |
| Empty JSON response | JSON | {} on successful submission to the scheduler |
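Since a successful submission returns an empty JSON object, a wrapper script can treat any other output as a failure. This is a minimal sketch; check_load_response is a hypothetical helper name, not part of the seldon CLI.

```shell
# Hypothetical helper: the scheduler returns an empty JSON object ({})
# on success, so any other output is treated as an error.
check_load_response() {
  if [ "$1" = "{}" ]; then
    echo "model load accepted"
  else
    echo "model load failed: $1" >&2
    return 1
  fi
}

# Usage against a live scheduler (not run here):
# check_load_response "$(seldon model load -f model.yaml)"
```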
Usage Examples
Basic Model Load

```shell
# Load a model from a YAML manifest
seldon model load -f samples/models/sklearn1.yaml
```

Load with Custom Scheduler Host

```shell
# When the scheduler is accessible at a non-default address
seldon model load -f model.yaml --scheduler-host scheduler.seldon-mesh:9004
```

Load via kubectl

```shell
# Submit the Model CRD through the Kubernetes API
kubectl apply -f samples/models/sklearn1.yaml
```

Full Deployment Workflow

```shell
# Step 1: Load the model
seldon model load -f samples/models/sklearn1.yaml

# Step 2: Wait for the model to be ready
seldon model status iris -w ModelAvailable

# Step 3: Send an inference request
seldon model infer iris '{"inputs": [{"name": "predict", "shape": [1, 4], "datatype": "FP32", "data": [[5.1, 3.5, 1.4, 0.2]]}]}'
```
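The three steps above can be sketched as one function with a bounded wait, so a model that never schedules does not hang the script. This is a sketch under assumptions: deploy_and_probe is a hypothetical name, the 600-second limit is arbitrary, and timeout(1) from GNU coreutils is assumed to be available.

```shell
# Sketch: load, wait (bounded), then probe with an inference request.
# Assumes the model name passed in matches metadata.name in the manifest.
deploy_and_probe() {
  manifest=$1
  model=$2
  seldon model load -f "$manifest" || return 1
  # -w blocks until the condition holds; cap the wait at 600s (an assumption)
  timeout 600 seldon model status "$model" -w ModelAvailable || return 1
  seldon model infer "$model" \
    '{"inputs": [{"name": "predict", "shape": [1, 4], "datatype": "FP32", "data": [[5.1, 3.5, 1.4, 0.2]]}]}'
}

# Usage against a live cluster (not run here):
# deploy_and_probe samples/models/sklearn1.yaml iris
```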
Knowledge Sources
- Repository: https://github.com/SeldonIO/seldon-core
- Documentation: https://docs.seldon.io/projects/seldon-core/en/v2/
Related Pages
- SeldonIO_Seldon_core_Seldon_Model_Load implements SeldonIO_Seldon_core_Model_Deployment_Execution
- SeldonIO_Seldon_core_Seldon_Model_CRD is consumed by SeldonIO_Seldon_core_Seldon_Model_Load
- SeldonIO_Seldon_core_Seldon_Model_Status follows SeldonIO_Seldon_core_Seldon_Model_Load
- SeldonIO_Seldon_core_Seldon_Model_Infer depends on SeldonIO_Seldon_core_Seldon_Model_Load
- Environment:SeldonIO_Seldon_core_Kubernetes_Cluster_Environment
- Environment:SeldonIO_Seldon_core_Docker_Compose_Local_Environment
- Heuristic:SeldonIO_Seldon_core_Over_Commit_Memory_Tip
- Heuristic:SeldonIO_Seldon_core_Model_Scheduling_Preference_Tip
- Heuristic:SeldonIO_Seldon_core_Model_Load_Timeout_Tip