Implementation: Gretel.ai gretel-synthetics DGANConfig
| Knowledge Sources | |
|---|---|
| Domains | Synthetic_Data, Time_Series, GAN |
| Last Updated | 2026-02-14 19:00 GMT |
Overview
Concrete tool for configuring DGAN model hyperparameters provided by the gretel-synthetics library.
Description
DGANConfig is a Python dataclass that stores all hyperparameters for the DoppelGANger model. It groups parameters into five categories:
- Model structure: sequence length, noise dimensions, layer/unit counts
- Data transformation: normalization mode, feature scaling, example scaling, binary encoder cutoff
- Model initialization: LSTM forget-gate bias
- Loss function: gradient penalty coefficients and the attribute loss coefficient
- Training: optimizer learning rates, beta1 values, batch size, epochs, discriminator/generator rounds, CUDA, and mixed precision

The class provides a to_dict() method that returns a dictionary suitable for serialization or for re-initializing a new instance.
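The to_dict() round-trip can be illustrated with a minimal stand-in dataclass built from the standard library (a sketch mirroring DGANConfig's shape, not the real class):

```python
from dataclasses import asdict, dataclass


@dataclass
class MiniConfig:
    # Two required fields plus one default, mirroring DGANConfig's layout
    max_sequence_len: int
    sample_len: int
    batch_size: int = 1024

    def to_dict(self):
        # asdict() recursively converts the dataclass into a plain dict
        return asdict(self)


config = MiniConfig(max_sequence_len=20, sample_len=5)
d = config.to_dict()
# The dict can re-initialize an equal instance via keyword expansion
restored = MiniConfig(**d)
assert restored == config
```

The same pattern is what makes DGANConfig instances easy to persist alongside trained model weights.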
Usage
Create a DGANConfig instance before constructing a DGAN model. At minimum, supply max_sequence_len and sample_len. The config is passed to the DGAN constructor and controls all subsequent network building, training, and generation behavior.
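Because sample_len must evenly divide max_sequence_len, a quick pre-flight check can catch bad pairs before training starts. This is a hypothetical helper, not part of the library:

```python
def check_dgan_lengths(max_sequence_len: int, sample_len: int) -> int:
    """Validate the divisibility constraint and return the number of
    LSTM generation steps (hypothetical helper, not a library API)."""
    if max_sequence_len % sample_len != 0:
        raise ValueError(
            f"sample_len={sample_len} must divide max_sequence_len={max_sequence_len}"
        )
    return max_sequence_len // sample_len


steps = check_dgan_lengths(20, 5)  # 4 LSTM steps of 5 time points each
```

Larger sample_len values mean fewer LSTM steps per sequence, which trades temporal resolution for faster training.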
Code Reference
Source Location
- Repository: gretel-synthetics
- File: src/gretel_synthetics/timeseries_dgan/config.py
- Lines: 27-142
Signature
@dataclass
class DGANConfig:
    # Model structure
    max_sequence_len: int
    sample_len: int
    attribute_noise_dim: int = 10
    feature_noise_dim: int = 10
    attribute_num_layers: int = 3
    attribute_num_units: int = 100
    feature_num_layers: int = 1
    feature_num_units: int = 100
    use_attribute_discriminator: bool = True

    # Data transformation
    normalization: Normalization = Normalization.ZERO_ONE
    apply_feature_scaling: bool = True
    apply_example_scaling: bool = True
    binary_encoder_cutoff: int = 150

    # Model initialization
    forget_bias: bool = False

    # Loss function
    gradient_penalty_coef: float = 10.0
    attribute_gradient_penalty_coef: float = 10.0
    attribute_loss_coef: float = 1.0

    # Training
    generator_learning_rate: float = 0.001
    generator_beta1: float = 0.5
    discriminator_learning_rate: float = 0.001
    discriminator_beta1: float = 0.5
    attribute_discriminator_learning_rate: float = 0.001
    attribute_discriminator_beta1: float = 0.5
    batch_size: int = 1024
    epochs: int = 400
    discriminator_rounds: int = 1
    generator_rounds: int = 1
    cuda: bool = True
    mixed_precision_training: bool = False

    def to_dict(self):
        return asdict(self)
Import
from gretel_synthetics.timeseries_dgan.config import DGANConfig
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| max_sequence_len | int | Yes | Length of time series sequences; all training and generated data will share this length |
| sample_len | int | Yes | Time steps generated per LSTM cell; must be a divisor of max_sequence_len |
| attribute_noise_dim | int | No (default 10) | Length of noise vector for attribute generation |
| feature_noise_dim | int | No (default 10) | Length of noise vector for feature generation |
| attribute_num_layers | int | No (default 3) | Number of layers in attribute generator MLP |
| attribute_num_units | int | No (default 100) | Units per layer in attribute generator MLP |
| feature_num_layers | int | No (default 1) | Number of LSTM layers in feature generator |
| feature_num_units | int | No (default 100) | Units per LSTM layer in feature generator |
| use_attribute_discriminator | bool | No (default True) | Whether to use a separate attribute discriminator |
| normalization | Normalization | No (default ZERO_ONE) | Scaling range for continuous variables: ZERO_ONE or MINUSONE_ONE |
| apply_feature_scaling | bool | No (default True) | Scale continuous variables to normalized range before training |
| apply_example_scaling | bool | No (default True) | Add per-example midpoint/half-range as additional attributes |
| binary_encoder_cutoff | int | No (default 150) | Use binary encoding instead of one-hot for columns exceeding this many unique values |
| forget_bias | bool | No (default False) | Initialize LSTM forget gate biases to 1 (matching TF1 behavior) |
| gradient_penalty_coef | float | No (default 10.0) | Coefficient for WGAN gradient penalty on feature discriminator |
| attribute_gradient_penalty_coef | float | No (default 10.0) | Coefficient for WGAN gradient penalty on attribute discriminator |
| attribute_loss_coef | float | No (default 1.0) | Weight of attribute discriminator loss in generator objective |
| generator_learning_rate | float | No (default 0.001) | Adam learning rate for generator |
| generator_beta1 | float | No (default 0.5) | Adam beta1 for generator |
| discriminator_learning_rate | float | No (default 0.001) | Adam learning rate for feature discriminator |
| discriminator_beta1 | float | No (default 0.5) | Adam beta1 for feature discriminator |
| attribute_discriminator_learning_rate | float | No (default 0.001) | Adam learning rate for attribute discriminator |
| attribute_discriminator_beta1 | float | No (default 0.5) | Adam beta1 for attribute discriminator |
| batch_size | int | No (default 1024) | Number of examples per batch for training and generation |
| epochs | int | No (default 400) | Number of training epochs |
| discriminator_rounds | int | No (default 1) | Discriminator update steps per batch |
| generator_rounds | int | No (default 1) | Generator update steps per batch |
| cuda | bool | No (default True) | Use GPU if available |
| mixed_precision_training | bool | No (default False) | Enable automatic mixed precision to reduce memory usage |
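The binary_encoder_cutoff rule documented above can be sketched as a small selector. This is an illustration of the documented behavior (one-hot up to the cutoff, binary encoding for columns exceeding it), not the library's actual implementation:

```python
def encoding_for_column(n_unique: int, binary_encoder_cutoff: int = 150) -> str:
    """Pick a categorical encoding per the documented cutoff rule
    (illustrative sketch only, not gretel-synthetics code)."""
    # Binary encoding needs only ~log2(n_unique) columns, so it is
    # preferred once one-hot vectors would become very wide.
    return "binary" if n_unique > binary_encoder_cutoff else "one-hot"


print(encoding_for_column(12))    # a 12-category column stays one-hot
print(encoding_for_column(5000))  # a high-cardinality column gets binary encoding
```

Lowering the cutoff reduces memory for wide categorical columns at the cost of a lossier, less direct representation.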
Outputs
| Name | Type | Description |
|---|---|---|
| DGANConfig instance | DGANConfig | Dataclass instance holding all hyperparameters |
| to_dict() | dict | Dictionary representation of all fields, suitable for serialization or re-initialization |
Usage Examples
Basic Example
from gretel_synthetics.timeseries_dgan.config import DGANConfig

config = DGANConfig(
    max_sequence_len=20,
    sample_len=5,
    batch_size=1000,
    epochs=10,
)
Advanced Example
from gretel_synthetics.timeseries_dgan.config import DGANConfig, Normalization

config = DGANConfig(
    max_sequence_len=100,
    sample_len=10,
    attribute_noise_dim=16,
    feature_noise_dim=16,
    feature_num_layers=2,
    feature_num_units=256,
    normalization=Normalization.MINUSONE_ONE,
    apply_example_scaling=True,
    gradient_penalty_coef=10.0,
    generator_learning_rate=0.0002,
    discriminator_learning_rate=0.0002,
    batch_size=512,
    epochs=200,
    cuda=True,
)

# Serialize and restore
config_dict = config.to_dict()
restored_config = DGANConfig(**config_dict)
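One caveat when persisting the dict to disk: asdict() keeps Enum fields (such as normalization) as Enum members, which json.dumps cannot serialize directly, so a JSON round-trip needs a small conversion step. A sketch with a stand-in enum and dataclass (not the library's own types):

```python
import json
from dataclasses import asdict, dataclass
from enum import Enum


class Normalization(Enum):
    # Stand-in mirroring the library's enum member names
    ZERO_ONE = 0
    MINUSONE_ONE = 1


@dataclass
class MiniConfig:
    max_sequence_len: int
    sample_len: int
    normalization: Normalization = Normalization.ZERO_ONE


config = MiniConfig(max_sequence_len=20, sample_len=5)
d = asdict(config)
# Enum members are not JSON-serializable; encode them by name
payload = json.dumps(d, default=lambda o: o.name)
loaded = json.loads(payload)
# Restore the enum field before re-initializing
loaded["normalization"] = Normalization[loaded["normalization"]]
restored = MiniConfig(**loaded)
assert restored == config
```

In-memory round-trips like the advanced example above need no such step, since the Enum member survives the dict unchanged.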