# Implementation: Hugging Face Optimum TasksManager Infer Task
| Field | Value |
|---|---|
| Page Type | Implementation |
| Source Repository | https://github.com/huggingface/optimum |
| Source File | optimum/exporters/tasks.py |
| Domains | NLP, Computer_Vision, Export |
| Last Updated | 2026-02-15 00:00 GMT |
## Overview
This implementation provides the concrete APIs for automatically inferring the ML task from a model identifier and for loading the correct model class given a task string. These are the two primary entry points in the TasksManager class that power the Task and Model Resolution principle.
## API Reference

### TasksManager.infer_task_from_model
Source location: optimum/exporters/tasks.py, lines 798-851
Purpose: Takes a model (as a string identifier, a PreTrainedModel instance, or a model class type) and returns the inferred task string (e.g., "text-classification", "text-generation").
Signature:

```python
@classmethod
def infer_task_from_model(
    cls,
    model: Union[str, "PreTrainedModel", "DiffusionPipeline", Type],
    subfolder: str = "",
    revision: Optional[str] = None,
    cache_dir: str = HUGGINGFACE_HUB_CACHE,
    token: Optional[Union[bool, str]] = None,
    library_name: Optional[str] = None,
) -> str:
```
Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `Union[str, PreTrainedModel, DiffusionPipeline, Type]` | required | The model to infer the task from. Can be a Hub repo ID, a model instance, or a model class. |
| `subfolder` | `str` | `""` | Subfolder within the model directory on the Hub. |
| `revision` | `Optional[str]` | `None` | Specific model version (branch, tag, or commit ID). |
| `cache_dir` | `str` | `HUGGINGFACE_HUB_CACHE` | Path to cached pretrained model weights. |
| `token` | `Optional[Union[bool, str]]` | `None` | Authentication token for private Hub repos. |
| `library_name` | `Optional[str]` | `None` | Library name (`"transformers"`, `"timm"`, `"diffusers"`, `"sentence_transformers"`). |
Returns: `str` -- The task name automatically detected from the model.
Raises: `ValueError` if the task name could not be automatically inferred.
Resolution strategy:
- If `model` is a string, delegates to `_infer_task_from_model_name_or_path`, which queries Hub metadata and config
- If `model` is a type (class), delegates to `_infer_task_from_model_or_model_class` with the class
- If `model` is an instance, delegates to `_infer_task_from_model_or_model_class` with the instance
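The three-way dispatch above can be sketched as follows. This is an illustrative stand-in, not Optimum's actual code: the returned strings merely name the private helper that would be called in each branch.

```python
from typing import Any

def dispatch_infer_task(model: Any) -> str:
    # Sketch of the type-based dispatch in infer_task_from_model.
    # Returned strings stand in for calls to Optimum's private helpers,
    # which are not reproduced here.
    if isinstance(model, str):
        # Hub repo ID or local path: resolved via Hub metadata and config
        return "_infer_task_from_model_name_or_path"
    if isinstance(model, type):
        # A model class: resolved from the class itself
        return "_infer_task_from_model_or_model_class (class)"
    # Otherwise, treat it as a loaded model instance
    return "_infer_task_from_model_or_model_class (instance)"
```

Note that the `isinstance(model, type)` check must come after the string check but before the instance fallback, since every object is an instance of something.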
### TasksManager.get_model_from_task
Source location: optimum/exporters/tasks.py, lines 1074-1222
Purpose: Takes a task string and a model path, resolves the framework and library, looks up the correct model class, and returns a loaded PreTrainedModel (or DiffusionPipeline) instance.
Signature:

```python
@staticmethod
def get_model_from_task(
    task: str,
    model_name_or_path: Union[str, Path],
    subfolder: str = "",
    revision: Optional[str] = None,
    cache_dir: str = HUGGINGFACE_HUB_CACHE,
    token: Optional[Union[bool, str]] = None,
    framework: Optional[str] = None,
    torch_dtype: Optional["torch.dtype"] = None,
    device: Optional[Union["torch.device", str]] = None,
    library_name: Optional[str] = None,
    **model_kwargs,
) -> Union["PreTrainedModel", "DiffusionPipeline"]:
```
Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `task` | `str` | required | The task required (e.g., `"text-classification"`, `"auto"`). |
| `model_name_or_path` | `Union[str, Path]` | required | Hub model ID or local directory path. |
| `subfolder` | `str` | `""` | Subfolder within the model directory. |
| `revision` | `Optional[str]` | `None` | Specific model version. |
| `cache_dir` | `str` | `HUGGINGFACE_HUB_CACHE` | Path to cached model weights. |
| `token` | `Optional[Union[bool, str]]` | `None` | Authentication token. |
| `framework` | `Optional[str]` | `None` | Framework to use (`"pt"` or `"tf"`). Auto-detected if not specified. |
| `torch_dtype` | `Optional[torch.dtype]` | `None` | Data type for loading model weights (PyTorch only). |
| `device` | `Optional[Union[torch.device, str]]` | `None` | Device to load the model on (PyTorch only, defaults to `"cpu"`). |
| `library_name` | `Optional[str]` | `None` | Library name for model loading. |
| `**model_kwargs` | `Dict[str, Any]` | `{}` | Additional keyword arguments passed to `.from_pretrained()`. |
Returns: `Union[PreTrainedModel, DiffusionPipeline]` -- The loaded model instance.
Internal logic:
- If `framework` is `None`, calls `TasksManager.determine_framework` to auto-detect
- If `library_name` is `None`, calls `TasksManager.infer_library_from_model`
- If `task` is `"auto"`, calls `TasksManager.infer_task_from_model`
- Resolves the model class via `TasksManager.get_model_class_for_task`
- Loads the model using `model_class.from_pretrained()`
- Calls `TasksManager.standardize_model_attributes` on the loaded model before returning
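The class-resolution step at the heart of that pipeline can be sketched with a toy registry. The mapping and helper below are hypothetical stand-ins for Optimum's internal task-to-class tables, not its real data structures:

```python
from typing import Optional

# Toy stand-in for TasksManager's internal task -> model-class mappings.
TASK_TO_CLASS_NAME = {
    ("text-classification", "pt"): "AutoModelForSequenceClassification",
    ("text-generation", "pt"): "AutoModelForCausalLM",
    ("fill-mask", "pt"): "AutoModelForMaskedLM",
}

def resolve_model_class_name(task: str, framework: Optional[str] = None) -> str:
    """Sketch of the lookup behind TasksManager.get_model_class_for_task."""
    if framework is None:
        framework = "pt"  # stands in for TasksManager.determine_framework
    try:
        return TASK_TO_CLASS_NAME[(task, framework)]
    except KeyError:
        raise ValueError(
            f"No model class registered for task={task!r}, framework={framework!r}"
        )
```

The real lookup is richer (per-library and per-architecture mappings), but the shape is the same: normalize the framework, then map the task string to a concrete auto-model class whose `from_pretrained()` does the actual loading.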
## Import

```python
from optimum.exporters import TasksManager
```
## Input/Output Summary

| API | Input | Output |
|---|---|---|
| `infer_task_from_model` | Model name (str), model instance, or model class | Task string (e.g., `"text-classification"`) |
| `get_model_from_task` | Task string + model path | Loaded `PreTrainedModel` or `DiffusionPipeline` instance |
## Usage Example

```python
from optimum.exporters import TasksManager

# Infer the task from a Hub model
task = TasksManager.infer_task_from_model("bert-base-uncased")
# Returns: "fill-mask"

# Load the correct model for a given task
model = TasksManager.get_model_from_task(
    task="text-classification",
    model_name_or_path="distilbert-base-uncased-finetuned-sst-2-english",
)
```