Implementation:Zai org CogVideo SAT Read From CLI File
| Attribute | Value |
|---|---|
| Implementation Name | SAT Read From CLI File |
| Workflow | SAT Video Generation |
| Step | 3 of 5 |
| Type | API Doc |
| Source File | sat/sample_video.py:L23-41 |
| Repository | zai-org/CogVideo |
| Last Updated | 2026-02-10 00:00 GMT |
Overview
Implementation of two prompt input functions for the SAT video generation pipeline: read_from_cli for interactive single-prompt input and read_from_file for batch file-based input with distributed worker sharding.
Description
The module provides two generator functions:
- read_from_cli(): uses Python's built-in input() to collect prompts from the user interactively. Yields (text, count) tuples with an incrementing counter.
- read_from_file(p, rank, world_size): reads lines from a text file and yields the prompts assigned to the current worker under modular distribution; line index i is processed by the worker for which i mod world_size == rank.
Both functions return Python generators that yield (text, count) tuples, enabling lazy evaluation and memory-efficient processing of large prompt files.
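As an illustration of the sharding contract described above, the file reader can be sketched as follows. This is a minimal reimplementation consistent with the documented behavior, not the repository source verbatim (the helper name sharded_read_from_file is chosen here for illustration; see sat/sample_video.py for the actual code):

```python
from typing import Generator, Tuple

def sharded_read_from_file(p: str, rank: int = 0, world_size: int = 1) -> Generator[Tuple[str, int], None, None]:
    # Illustrative sketch: stream lines lazily and keep only the
    # indices this worker owns, i.e. those where i % world_size == rank.
    with open(p, "r") as fin:
        for i, line in enumerate(fin):
            if i % world_size != rank:
                continue
            yield line.strip(), i
```

With world_size=2, rank 0 sees indices 0, 2, 4, ... and rank 1 sees 1, 3, 5, ..., so every line is consumed by exactly one worker.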
Usage
```python
# Interactive mode
for text, count in read_from_cli():
    generate_video(text, count)

# Batch mode (distributed)
for text, count in read_from_file("prompts.txt", rank=0, world_size=4):
    generate_video(text, count)
```
Code Reference
Source Location
| File | Lines | Description |
|---|---|---|
| sat/sample_video.py | L23-41 | read_from_cli and read_from_file functions |
Signature
```python
def read_from_cli() -> Generator[Tuple[str, int], None, None]:
    """Interactive prompt input. Yields (text, count) tuples."""

def read_from_file(p: str, rank: int = 0, world_size: int = 1) -> Generator[Tuple[str, int], None, None]:
    """File-based prompt input with distributed sharding."""
```
Import
```python
from sample_video import read_from_cli, read_from_file
```
I/O Contract
Inputs
read_from_cli
| Parameter | Type | Default | Description |
|---|---|---|---|
| (none) | -- | -- | Reads from standard input interactively |
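For comparison, an interactive reader matching this contract can be sketched as below. This is a hedged illustration of the documented behavior, not the repository source; the prompt string, the Ctrl-D exit condition, and the helper name cli_prompt_reader are assumptions:

```python
from typing import Generator, Tuple

def cli_prompt_reader() -> Generator[Tuple[str, int], None, None]:
    # Illustrative sketch: read prompts from stdin until EOF (Ctrl-D),
    # yielding (text, count) with an incrementing counter.
    count = 0
    try:
        while True:
            text = input("Enter prompt (Ctrl-D to quit): ")
            yield text.strip(), count
            count += 1
    except EOFError:
        pass
```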
read_from_file
| Parameter | Type | Default | Description |
|---|---|---|---|
| p | str | Required | Path to the prompt text file |
| rank | int | 0 | Current worker rank for distributed sharding |
| world_size | int | 1 | Total number of workers for distributed sharding |
Outputs
| Output | Type | Description |
|---|---|---|
| Yielded tuples | Generator[Tuple[str, int], None, None] | Each yield produces (text, count), where text is the prompt string and count is the sequential index |
Usage Examples
Example 1: Interactive CLI input
```python
from sample_video import read_from_cli

for text, count in read_from_cli():
    print(f"Generating video {count} for prompt: {text}")

# User types: "A cat playing piano"
# Yields: ("A cat playing piano", 0)
```
Example 2: File-based batch input
```python
from sample_video import read_from_file

# prompts.txt contains one prompt per line
for text, count in read_from_file("prompts.txt"):
    print(f"Generating video {count} for prompt: {text}")
```
Example 3: Distributed file input (4 GPUs)
```python
from sample_video import read_from_file

# Worker 0 of 4 processes lines 0, 4, 8, ...
for text, count in read_from_file("prompts.txt", rank=0, world_size=4):
    generate_video(text, count)
```
Example 4: Image-to-video prompt format
For I2V, each line of prompts.txt combines prompt text and an image path:

```
A cat playing piano@@/data/images/cat.jpg
```

The "@@" separator splits the text and the image path.
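A consumer of such lines could split on the separator as sketched below; parse_i2v_line is a hypothetical helper written for this page, not a function in sample_video.py:

```python
def parse_i2v_line(line: str):
    # Hypothetical helper: split an I2V prompt line into (text, image_path).
    # Plain text-to-video lines contain no "@@" and return image_path = None.
    if "@@" in line:
        text, image_path = line.split("@@", 1)
        return text.strip(), image_path.strip()
    return line.strip(), None
```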
Related Pages
- Principle:Zai_org_CogVideo_SAT_Prompt_Input -- Principle governing prompt input modes
- Environment:Zai_org_CogVideo_SAT_Framework_Environment
- Zai_org_CogVideo_SAT_Get_Model_Load_Checkpoint -- Previous step: model loading
- Zai_org_CogVideo_SAT_Diffusion_Sample -- Next step: sampling with the prompt text
- Zai_org_CogVideo_SAT_Inference_Get_Args -- Configuration that selects CLI vs file mode