Environment:Gretelai Gretel synthetics PyTorch CUDA Environment
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Deep_Learning, Tabular_Data, Time_Series |
| Last Updated | 2026-02-14 19:00 GMT |
Overview
Python 3.9+ environment with PyTorch >= 1.13 and optional CUDA GPU acceleration for ACTGAN tabular synthesis and DGAN time series generation.
Description
This environment provides the PyTorch runtime context required for the ACTGAN and DGAN synthesis models. It supports automatic CUDA device selection with CPU fallback. The ACTGAN model uses PyTorch for adversarial training of tabular data generators, while the DGAN model uses PyTorch with LSTM-based generators for time series synthesis. Mixed precision training via `torch.cuda.amp` is supported for DGAN to reduce memory costs.
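As a quick sanity check before running either model, the presence of the `torch` package can be probed without importing it. A minimal sketch (the helper name `torch_available` is ours, not part of the library):

```python
from importlib import util

def torch_available() -> bool:
    """Return True if the `torch` package is importable in this environment,
    a prerequisite for both the ACTGAN and DGAN models."""
    return util.find_spec("torch") is not None
```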
Usage
Use this environment for any ACTGAN Tabular Synthesis or DGAN Timeseries Generation workflow. It is the mandatory prerequisite for running the ACTGAN_Init, ACTGAN_Fit, ACTGANSynthesizer_Actual_Fit, ACTGANSynthesizer_Sample, ACTGAN_Save_Load, DGANConfig, DGAN_Train_Numpy, DGAN_Build, DGAN_Train_Loop, and DGAN_Generate_Numpy implementations.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Linux, macOS, Windows | All platforms supported per setup.py classifiers |
| Python | >= 3.9 | Specified in setup.py `python_requires` |
| Hardware | NVIDIA GPU (recommended) | CPU fallback automatic; CUDA checked via `torch.cuda.is_available()` |
| CUDA | Compatible with PyTorch >= 1.13 | Typically CUDA 11.6+ for PyTorch 1.13 |
Dependencies
Python Packages
- `torch` >= 1.13
- `numpy` >= 1.18.0, < 1.24
- `pandas` >= 1.1.0, < 2
- `packaging` < 22.0
- `rdt` >= 1.2, < 1.3
- `sdv` >= 0.17, < 0.18
- `category-encoders` == 2.2.2
- `joblib` == 1.4.2
- `smart_open` >= 2.1.0, < 6.0
- `tqdm` < 5.0
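Several of the pins above are upper-bounded (e.g. `numpy >= 1.18.0, < 1.24`). Real dependency checks should use `packaging.specifiers`; as a self-contained illustration of how such a range is evaluated, here is a naive numeric-tuple comparison (helper names are ours):

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string into a tuple of ints, stopping at the
    first non-numeric component (naive; use `packaging` in real code)."""
    parts = []
    for p in v.split("."):
        if not p.isdigit():
            break
        parts.append(int(p))
    return tuple(parts)

def satisfies_numpy_pin(v: str) -> bool:
    """Check the numpy pin listed above: >= 1.18.0 and < 1.24."""
    t = version_tuple(v)
    return version_tuple("1.18.0") <= t < version_tuple("1.24")
```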
Credentials
No API keys or credentials are required. All data is loaded from local files or in-memory DataFrames.
Quick Install
```shell
# Install with PyTorch extras
pip install gretel-synthetics[torch]

# Or install all extras
pip install gretel-synthetics[all]

# For CUDA GPU support, ensure PyTorch is installed with a CUDA build, e.g.:
# pip install torch --index-url https://download.pytorch.org/whl/cu118
```
Code Evidence
Device selection utility from `utils/torch_utils.py:13-18`:
```python
def determine_device() -> str:
    """Returns device on which generation should run."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return device
```
ACTGAN CUDA device selection with string override from `actgan/actgan.py:342-349`:
```python
if not cuda or not torch.cuda.is_available():
    device = "cpu"
elif isinstance(cuda, str):
    device = cuda
else:
    device = "cuda"

self._device = torch.device(device)
```
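The resolution rules above can be mirrored as a pure function for illustration. This is a torch-free sketch: `resolve_device` is our name, and the explicit `cuda_available` flag stands in for `torch.cuda.is_available()`:

```python
from typing import Union

def resolve_device(cuda: Union[bool, str], cuda_available: bool) -> str:
    """Mirror ACTGAN's device resolution: False or no GPU -> "cpu";
    a device string like "cuda:1" -> used verbatim; True -> "cuda"."""
    if not cuda or not cuda_available:
        return "cpu"
    if isinstance(cuda, str):
        return cuda
    return "cuda"
```

Note that the availability check wins: passing `"cuda:1"` on a machine without a GPU still resolves to `"cpu"`.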
DGAN CUDA device selection from `timeseries_dgan/dgan.py:684-687`:
```python
if self.config.cuda and torch.cuda.is_available():
    self.device = "cuda"
else:
    self.device = "cpu"
```
PyTorch version check for Gumbel softmax from `actgan/actgan.py:363-367`:
```python
_gumbel_softmax = staticmethod(
    functional.gumbel_softmax
    if version.parse(torch.__version__) >= version.parse("1.2.0")
    else _gumbel_softmax_stabilized
)
```
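The `_gumbel_softmax_stabilized` fallback selected above retries up to 10 times on NaN (see Compatibility Notes). The retry-until-finite pattern it uses can be sketched without torch; the function name and structure here are our illustration, not the library's code:

```python
import math
from typing import Callable

def sample_until_finite(sample_fn: Callable[[], float], max_retries: int = 10) -> float:
    """Call a stochastic sampler until it returns a non-NaN value,
    raising after `max_retries` failures (mirrors the stabilized
    Gumbel-Softmax fallback's retry-on-NaN behavior)."""
    for _ in range(max_retries):
        x = sample_fn()
        if not math.isnan(x):
            return x
    raise ValueError("gumbel_softmax returning NaN.")
```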
DGAN mixed precision training from `timeseries_dgan/dgan.py:840-850`:
```python
scaler = torch.cuda.amp.GradScaler(enabled=self.config.mixed_precision_training)
# ...
with torch.cuda.amp.autocast(enabled=self.config.mixed_precision_training):
    ...  # forward pass and loss computation run under autocast
```
Cross-device model loading from `actgan/base.py:67-80`:
```python
# Save: move to CPU first
device_backup = self._device
self.set_device(torch.device("cpu"))
torch.save(self, path)
self.set_device(device_backup)

# Load: detect available device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load(path)
model.set_device(device)
```
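The move-to-CPU, save, move-back dance in the save path generalizes to a context manager. This is a hypothetical, torch-free sketch of the pattern: `on_device` is our name, and it assumes a model exposing a `device` attribute and a `set_device` method like the one in `actgan/base.py`:

```python
from contextlib import contextmanager

@contextmanager
def on_device(model, device: str):
    """Temporarily move `model` to `device`, restoring its previous device
    afterwards even if an exception occurs. The save path above performs
    the same steps manually: move to CPU, torch.save, move back."""
    backup = model.device
    model.set_device(device)
    try:
        yield model
    finally:
        model.set_device(backup)
```

A save would then read `with on_device(model, "cpu"): torch.save(model, path)`, guaranteeing the original device is restored.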
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `ValueError: gumbel_softmax returning NaN` | Numerical instability in Gumbel-Softmax with older PyTorch | Upgrade to PyTorch >= 1.2.0, which provides the stable native implementation |
| `batch_size must be divisible by 2` | ACTGAN batch_size is odd | Set batch_size to an even number (default: 500) |
| `batch_size must be divisible by pac (defaults to 10)` | ACTGAN batch_size not divisible by pac parameter | Use batch_size that is divisible by pac (default pac=10) |
| `max_sequence_len must be divisible by sample_len` | DGAN sequence/sample length mismatch | Ensure max_sequence_len is an exact multiple of sample_len |
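The three divisibility errors above can be caught before training starts. A minimal validation sketch (function names are ours; defaults taken from the table):

```python
def validate_actgan_batch_size(batch_size: int = 500, pac: int = 10) -> None:
    """Raise early on the ACTGAN batch-size constraints listed above."""
    if batch_size % 2 != 0:
        raise ValueError("batch_size must be divisible by 2")
    if batch_size % pac != 0:
        raise ValueError("batch_size must be divisible by pac (defaults to 10)")

def validate_dgan_lengths(max_sequence_len: int, sample_len: int) -> None:
    """Raise early on the DGAN sequence-length constraint listed above."""
    if max_sequence_len % sample_len != 0:
        raise ValueError("max_sequence_len must be divisible by sample_len")
```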
Compatibility Notes
- PyTorch < 1.2.0: Falls back to a stabilized Gumbel-Softmax implementation that retries up to 10 times on NaN values.
- CPU-only: Both ACTGAN and DGAN fully support CPU-only mode. Set `cuda=False` or ensure no GPU is available.
- CUDA device strings: ACTGAN accepts CUDA device strings like `"cuda:1"` for multi-GPU selection.
- Model portability: Models are saved on CPU for cross-device compatibility. Loading auto-detects the available device.
- Mixed precision: DGAN supports mixed precision training (`mixed_precision_training=True`) for reduced VRAM usage, but it is disabled by default.
- Non-blocking transfers: DGAN uses `non_blocking=True` for GPU tensor transfers to overlap computation and data movement.
Related Pages
- Implementation:Gretelai_Gretel_synthetics_ACTGAN_Init
- Implementation:Gretelai_Gretel_synthetics_ACTGAN_Fit
- Implementation:Gretelai_Gretel_synthetics_ACTGANSynthesizer_Actual_Fit
- Implementation:Gretelai_Gretel_synthetics_ACTGANSynthesizer_Sample
- Implementation:Gretelai_Gretel_synthetics_ACTGAN_Save_Load
- Implementation:Gretelai_Gretel_synthetics_DGANConfig
- Implementation:Gretelai_Gretel_synthetics_DGAN_Train_Numpy
- Implementation:Gretelai_Gretel_synthetics_DGAN_Build
- Implementation:Gretelai_Gretel_synthetics_DGAN_Train_Loop
- Implementation:Gretelai_Gretel_synthetics_DGAN_Generate_Numpy