Implementation:Protectai Modelscan ModelScan Scan
| Knowledge Sources | |
|---|---|
| Domains | ML_Security, Supply_Chain_Security |
| Last Updated | 2026-02-14 12:00 GMT |
Overview
Concrete implementation for scanning serialized ML model files to detect unsafe operations, provided by the modelscan library.
Description
The ModelScan class is the core orchestrator of the modelscan library. It coordinates the entire scanning pipeline: iterating over model files (including zip archive contents), running the middleware pipeline for format tagging, dispatching each model to all enabled scanners, and aggregating results (issues, errors, skipped files). The scan() method is the primary entry point for both CLI and programmatic usage.
Usage
Import this class when you need to programmatically scan model files for security vulnerabilities. Use it in:
- Python scripts that audit model artifacts
- CI/CD pipeline integrations (a minimal gate sketch follows this list)
- MLOps model validation gates
- Custom security tooling built on top of modelscan
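For CI/CD gates in particular, the result dict returned by scan() can drive a pass/fail decision directly. A minimal sketch, using the 'summary' and 'total_issues' keys shown in the examples below; the model path is a placeholder:

import sys

from modelscan.modelscan import ModelScan

scanner = ModelScan()
results = scanner.scan("artifacts/model.pkl")  # placeholder path

# Fail the pipeline run when any issue is detected.
if results["summary"]["total_issues"] > 0:
    scanner.generate_report()  # emit details to the build log
    sys.exit(1)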
Code Reference
Source Location
- Repository: modelscan
- File: modelscan/modelscan.py
- Lines: L29-354
Signature
class ModelScan:
    def __init__(
        self,
        settings: Dict[str, Any] = DEFAULT_SETTINGS,
    ) -> None:
        """
        Initialize ModelScan with scanner and middleware configuration.

        Args:
            settings: Configuration dict controlling scanners, middlewares,
                unsafe globals, and reporting. Defaults to DEFAULT_SETTINGS.
        """

    def scan(
        self,
        path: Union[str, Path],
    ) -> Dict[str, Any]:
        """
        Scan a file or directory for unsafe operations.

        Resets internal state on each call, allowing instance reuse.

        Args:
            path: File or directory path to scan.

        Returns:
            Dict with keys: 'summary', 'issues', 'errors'
            containing structured scan results.
        """

    def generate_report(self) -> Optional[str]:
        """
        Generate a report using the configured reporting module.

        Must be called after scan().

        Returns:
            Optional report string (depends on report class).
        """
Import
from modelscan.modelscan import ModelScan
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| settings | Dict[str, Any] | No | Configuration dict (defaults to DEFAULT_SETTINGS). Controls scanners, middlewares, unsafe globals, and reporting module. |
| path | Union[str, Path] | Yes (for scan()) | File or directory path to scan. Resolved to absolute path internally. |
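To customize behavior, the usual pattern is to copy the defaults and override individual entries before constructing the scanner. A minimal sketch, assuming DEFAULT_SETTINGS is importable from modelscan.settings as a plain dict; which keys you override depends on the installed version:

import copy

from modelscan.modelscan import ModelScan
from modelscan.settings import DEFAULT_SETTINGS

# Deep-copy so the module-level defaults are not mutated in place.
settings = copy.deepcopy(DEFAULT_SETTINGS)
# ...override scanner, middleware, or reporting entries here...
scanner = ModelScan(settings=settings)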
Outputs
| Name | Type | Description |
|---|---|---|
| scan() returns | Dict[str, Any] | Result dict with 'summary' (severity counts, scanned/skipped files, version, timestamp), 'issues' (list of issue JSON dicts), 'errors' (list of error dicts) |
| issues (property) | Issues | Issues object containing all Issue objects found during scan |
| errors (property) | List[ErrorBase] | List of errors encountered during scan |
| scanned (property) | List[str] | List of file paths that were successfully scanned |
| skipped (property) | List[ModelScanSkipped] | List of files that were skipped during scan |
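These properties make it easy to check scan coverage without walking the result dict. A short sketch using only the properties documented above; how each error renders as a string depends on the concrete ErrorBase subclass:

from modelscan.modelscan import ModelScan

scanner = ModelScan()
scanner.scan("/path/to/models/")

# Coverage: which files were analyzed, and which were passed over.
print(f"Scanned {len(scanner.scanned)} files, skipped {len(scanner.skipped)}")
for error in scanner.errors:
    print(f"Error: {error}")  # string form depends on the error class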
Usage Examples
Basic Programmatic Scan
from modelscan.modelscan import ModelScan
# Initialize with default settings (all scanners enabled)
scanner = ModelScan()
# Scan a single model file
results = scanner.scan("/path/to/model.pkl")
# Check for issues
if results["summary"]["total_issues"] > 0:
    print(f"Found {results['summary']['total_issues']} issues!")
    for issue in results["issues"]:
        print(f"  [{issue['severity']}] {issue['description']}")
else:
    print("No issues found.")
Scan with Console Report
from modelscan.modelscan import ModelScan
scanner = ModelScan()
scanner.scan("/path/to/models/")
# Generate rich-formatted console report
scanner.generate_report()
Scan a Directory
from modelscan.modelscan import ModelScan
scanner = ModelScan()
# Scans all files recursively in the directory
results = scanner.scan("/path/to/model_directory/")
# Access issues by severity
by_severity = scanner.issues.group_by_severity()
for severity, issues in by_severity.items():
    print(f"{severity}: {len(issues)} issues")
Related Pages
Implements Principle
Requires Environment
- Environment:Protectai_Modelscan_Python_Core_Runtime
- Environment:Protectai_Modelscan_TensorFlow_Optional
- Environment:Protectai_Modelscan_H5py_Optional