Implementation: Kubeflow Notebook CRD Creation
| Knowledge Sources | |
|---|---|
| Domains | MLOps, Experimentation, Kubernetes |
| Last Updated | 2026-02-13 00:00 GMT |
Overview
Concrete mechanism, provided by the Kubeflow Notebooks component, for provisioning interactive notebook environments on Kubernetes.
Description
The Notebook CRD (Custom Resource Definition) is the Kubernetes-native mechanism through which Kubeflow manages interactive computing environments. When a user creates a Notebook resource, the Notebook Controller provisions a StatefulSet running the selected IDE image (Jupyter, RStudio, or VS Code Server) within the user's namespace. The resulting pod has access to namespace-scoped resources including persistent volumes, secrets, and GPU allocations.
Notebook CRD creation can be performed either through the Kubeflow Central Dashboard UI (which provides a guided form) or directly via kubectl apply for programmatic or GitOps-driven workflows. The CRD encapsulates the full pod specification including container image, resource requests and limits, volume mounts, and environment variables.
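For the programmatic path, a Notebook manifest can be assembled as a plain dictionary and then serialized to YAML/JSON or submitted through the Kubernetes API. A minimal sketch — the `build_notebook_manifest` helper is illustrative, not part of Kubeflow; the dict mirrors the Signature schema shown in this document:

```python
# Sketch: build a Kubeflow Notebook manifest as a plain dict.
# The helper name (build_notebook_manifest) is illustrative, not a Kubeflow API.

def build_notebook_manifest(name, namespace, image,
                            cpu="1", memory="2Gi", pvc_name=None):
    """Return a dict mirroring the Notebook CRD schema (kubeflow.org/v1)."""
    container = {
        "name": name,
        "image": image,
        "resources": {
            "requests": {"cpu": cpu, "memory": memory},
            "limits": {"cpu": cpu, "memory": memory},
        },
    }
    pod_spec = {"containers": [container]}
    if pvc_name:
        # Mount a PVC-backed workspace at the Jupyter home directory.
        container["volumeMounts"] = [
            {"name": "workspace", "mountPath": "/home/jovyan"}
        ]
        pod_spec["volumes"] = [
            {"name": "workspace",
             "persistentVolumeClaim": {"claimName": pvc_name}}
        ]
    return {
        "apiVersion": "kubeflow.org/v1",
        "kind": "Notebook",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"template": {"spec": pod_spec}},
    }

manifest = build_notebook_manifest(
    "my-jupyter-notebook", "my-namespace",
    "kubeflownotebookswg/jupyter-scipy:v1.9.0", pvc_name="my-notebook-pvc")

# With the official Kubernetes Python client installed and a kubeconfig in
# place, the same dict can be submitted directly:
#   from kubernetes import client, config
#   config.load_kube_config()
#   client.CustomObjectsApi().create_namespaced_custom_object(
#       group="kubeflow.org", version="v1", namespace="my-namespace",
#       plural="notebooks", body=manifest)
```

Building the manifest as data rather than a YAML string makes it straightforward to template per user or per team before handing it to `kubectl` or the API client.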
Usage
Use Notebook CRD creation when:
- A data scientist needs a managed Jupyter, RStudio, or VS Code environment on the Kubeflow platform.
- GPU-accelerated interactive work is required and must be scheduled through Kubernetes.
- The team requires namespace-isolated environments with persistent storage for ongoing experimentation.
- Environment provisioning must be automated or version-controlled as part of infrastructure-as-code practices.
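For the infrastructure-as-code case, Notebook manifests can be generated per user and committed to a GitOps repository. A minimal sketch — the team roster and output layout are hypothetical; JSON is emitted because it is a strict subset of YAML and is accepted by `kubectl apply -f`:

```python
# Sketch: generate one Notebook manifest per team member for a GitOps repo.
# The TEAM list and manifests/ directory layout are hypothetical.
import json
from pathlib import Path

TEAM = ["alice", "bob"]  # hypothetical user Profile namespaces
OUT_DIR = Path("manifests/notebooks")

def notebook_for(user):
    """Return a Notebook manifest dict for one user's namespace."""
    return {
        "apiVersion": "kubeflow.org/v1",
        "kind": "Notebook",
        "metadata": {"name": f"{user}-notebook", "namespace": user},
        "spec": {"template": {"spec": {"containers": [{
            "name": f"{user}-notebook",
            "image": "kubeflownotebookswg/jupyter-scipy:v1.9.0",
            "resources": {"requests": {"cpu": "1", "memory": "2Gi"},
                          "limits": {"cpu": "2", "memory": "4Gi"}},
        }]}}},
    }

OUT_DIR.mkdir(parents=True, exist_ok=True)
for user in TEAM:
    path = OUT_DIR / f"{user}-notebook.json"
    path.write_text(json.dumps(notebook_for(user), indent=2))
```

The generated files can then be applied by a CI job or a GitOps controller such as Argo CD or Flux, keeping every environment definition version-controlled and reviewable.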
Code Reference
Source Location
- Repository: kubeflow/notebooks
- File: config/crd/bases/kubeflow.org_notebooks.yaml (CRD schema)
Signature
```yaml
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: <notebook-name>
  namespace: <user-namespace>
spec:
  template:
    spec:
      containers:
        - name: <notebook-name>
          image: <jupyter|rstudio|codeserver-image>
          resources:
            requests:
              cpu: "<cpu-request>"
              memory: "<memory-request>"
              nvidia.com/gpu: "<gpu-count>"
            limits:
              cpu: "<cpu-limit>"
              memory: "<memory-limit>"
              nvidia.com/gpu: "<gpu-count>"
          volumeMounts:
            - name: workspace
              mountPath: /home/jovyan
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: <pvc-name>
```
Import
```shell
# Apply the Notebook manifest via kubectl
kubectl apply -f notebook.yaml

# Or create a Notebook Server through the Kubeflow Central Dashboard UI:
# Central Dashboard > Notebooks > New Notebook
```
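Once applied, the controller's progress can be observed with standard kubectl commands. The resource name, namespace, and label selector below are illustrative and assume the controller's usual `notebook-name` pod label:

```shell
# Confirm the Notebook resource was accepted by the API server
kubectl get notebooks -n my-namespace

# Inspect the StatefulSet and pod the Notebook Controller spawns
# (pods are typically labeled notebook-name=<notebook-name>)
kubectl get statefulset,pods -n my-namespace -l notebook-name=my-jupyter-notebook

# View events and status conditions for troubleshooting
kubectl describe notebook my-jupyter-notebook -n my-namespace
```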
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| metadata.name | string | Yes | Name of the Notebook resource |
| metadata.namespace | string | Yes | Kubernetes namespace (typically the user Profile namespace) |
| spec.template.spec.containers[].image | string | Yes | Container image for the IDE (e.g., Jupyter, RStudio, Code Server) |
| spec.template.spec.containers[].resources | object | Yes | CPU, memory, and GPU resource requests and limits |
| spec.template.spec.volumes | list | No | Persistent volume claims for workspace and data storage |
| spec.template.spec.containers[].env | list | No | Environment variables for the notebook container |
Outputs
| Name | Type | Description |
|---|---|---|
| Running Notebook Pod | Kubernetes Pod | A StatefulSet-managed pod running the selected IDE |
| Notebook URL | string | Accessible URL routed through Istio gateway for the notebook UI |
| Persistent Workspace | PVC-backed storage | Durable workspace that survives pod restarts |
Usage Examples
Basic Usage
```yaml
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: my-jupyter-notebook
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: my-jupyter-notebook
          image: kubeflownotebookswg/jupyter-scipy:v1.9.0
          resources:
            requests:
              cpu: "1"
              memory: "2Gi"
            limits:
              cpu: "2"
              memory: "4Gi"
          volumeMounts:
            - name: workspace
              mountPath: /home/jovyan
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: my-notebook-pvc
```
GPU-Enabled Notebook
```yaml
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: gpu-notebook
  namespace: ml-team
spec:
  template:
    spec:
      containers:
        - name: gpu-notebook
          image: kubeflownotebookswg/jupyter-pytorch-cuda:v1.9.0
          resources:
            requests:
              cpu: "4"
              memory: "16Gi"
              nvidia.com/gpu: "1"
            limits:
              cpu: "8"
              memory: "32Gi"
              nvidia.com/gpu: "1"
          volumeMounts:
            - name: workspace
              mountPath: /home/jovyan
            - name: datasets
              mountPath: /data
      volumes:
        - name: workspace
          persistentVolumeClaim:
            claimName: gpu-notebook-workspace
        - name: datasets
          persistentVolumeClaim:
            claimName: shared-datasets
```