Principle: Mage AI Batch Export
| Knowledge Sources | |
|---|---|
| Domains | Data_Integration, ETL, Batch_Processing |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
An abstract batch export mechanism that provides the extension point for destination connectors to write accumulated records to their target storage system.
Description
Batch Export is the core abstraction for destination data loading. After records are validated and accumulated by the ingestion loop, they are passed as a batch to the export layer. The Mage framework defines this as an abstract method that each destination connector must implement with target-specific logic: INSERT/UPDATE/UPSERT operations for databases, API calls for cloud services, or file writes for storage targets. Error handling is per-stream, allowing successful streams to commit even if others fail.
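The description above can be sketched as an abstract base class with a single required method. This is a minimal illustration, not Mage's actual class hierarchy: the names `BaseDestination` and `InMemoryDestination`, and the exact shape of each record dict, are assumptions for the example; only the method name `export_batch_data` comes from the text.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class BaseDestination(ABC):
    """Illustrative base class: each destination connector supplies
    target-specific export logic by overriding the abstract method."""

    @abstractmethod
    def export_batch_data(self, record_data: List[Dict[str, Any]], stream: str) -> None:
        """Write one batch of validated records for one stream to the target."""


class InMemoryDestination(BaseDestination):
    """Trivial connector to show the contract; real connectors issue
    SQL statements, API calls, or file writes instead."""

    def __init__(self) -> None:
        self.storage: Dict[str, List[Dict[str, Any]]] = {}

    def export_batch_data(self, record_data: List[Dict[str, Any]], stream: str) -> None:
        # Assumed record shape: each item wraps the row under a 'record' key.
        self.storage.setdefault(stream, []).extend(r['record'] for r in record_data)


dest = InMemoryDestination()
dest.export_batch_data([{'record': {'id': 1}}, {'record': {'id': 2}}], stream='users')
print(dest.storage['users'])  # [{'id': 1}, {'id': 2}]
```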
Usage
Every destination connector must implement the batch export interface. This is where the actual data loading logic lives: SQL INSERT statements, API POST requests, file writes, etc.
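As one concrete example of the "SQL INSERT" flavor of this interface, here is a hypothetical connector that loads a batch into SQLite. The class name and constructor are invented for illustration; a production connector would also quote and validate identifiers rather than interpolate the stream name directly.

```python
import sqlite3
from typing import Any, Dict, List


class SQLiteDestination:
    """Hypothetical destination connector: loads each batch via SQL INSERTs."""

    def __init__(self, conn: sqlite3.Connection) -> None:
        self.conn = conn

    def export_batch_data(self, record_data: List[Dict[str, Any]], stream: str) -> None:
        records = [r['record'] for r in record_data]
        if not records:
            return
        columns = list(records[0].keys())
        placeholders = ', '.join('?' for _ in columns)
        # NOTE: stream/column names are trusted here for brevity only.
        sql = f"INSERT INTO {stream} ({', '.join(columns)}) VALUES ({placeholders})"
        self.conn.executemany(sql, [tuple(rec[c] for c in columns) for rec in records])
        self.conn.commit()


conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER, name TEXT)')
dest = SQLiteDestination(conn)
dest.export_batch_data(
    [{'record': {'id': 1, 'name': 'Ada'}}, {'record': {'id': 2, 'name': 'Lin'}}],
    stream='users',
)
rows = conn.execute('SELECT id, name FROM users ORDER BY id').fetchall()
print(rows)  # [(1, 'Ada'), (2, 'Lin')]
```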
Theoretical Basis
The batch export contract:
- Receives a list of validated record dicts with schema and stream metadata
- Writes records to the target system using connector-specific logic
- Handles errors per-stream to allow partial success across multiple streams
- May perform deduplication based on key_properties
- May use unique_conflict_method (UPDATE) for upsert behavior
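The deduplication point above can be shown as a small helper: collapse the batch to one record per key before writing, keeping the last record seen (a last-write-wins policy, which is one reasonable choice, not necessarily Mage's). With `unique_conflict_method` set to UPDATE, a SQL connector would then emit an upsert such as `INSERT ... ON CONFLICT ... DO UPDATE` instead of a plain INSERT.

```python
from typing import Any, Dict, List, Tuple


def deduplicate(records: List[Dict[str, Any]], key_properties: List[str]) -> List[Dict[str, Any]]:
    """Keep only the last record seen for each key tuple (last-write-wins)."""
    by_key: Dict[Tuple[Any, ...], Dict[str, Any]] = {}
    for rec in records:
        by_key[tuple(rec[k] for k in key_properties)] = rec
    return list(by_key.values())


recs = [{'id': 1, 'v': 'old'}, {'id': 2, 'v': 'x'}, {'id': 1, 'v': 'new'}]
deduped = deduplicate(recs, key_properties=['id'])
print(deduped)  # [{'id': 1, 'v': 'new'}, {'id': 2, 'v': 'x'}]
```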
The framework calls export_batch_data via process_record_data, which wraps it with logging and error handling.
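The per-stream error handling described above can be sketched as a driver loop: each stream's batch is exported inside its own try/except, so one failing stream is recorded without aborting the others. This is a simplified stand-in for what the wrapping layer does, with invented names; only `export_batch_data` and the per-stream partial-success behavior come from the text.

```python
from typing import Any, Callable, Dict, List


def process_streams(
    export_batch_data: Callable[[List[Dict[str, Any]], str], None],
    batches_by_stream: Dict[str, List[Dict[str, Any]]],
) -> Dict[str, Exception]:
    """Export each stream's batch independently; return failures by stream."""
    failures: Dict[str, Exception] = {}
    for stream, batch in batches_by_stream.items():
        try:
            export_batch_data(batch, stream)
            # The real framework also logs success and record counts here.
        except Exception as exc:
            failures[stream] = exc  # already-committed streams stay committed
    return failures


written: Dict[str, int] = {}


def fake_export(batch: List[Dict[str, Any]], stream: str) -> None:
    if stream == 'bad':
        raise RuntimeError('target rejected batch')
    written[stream] = len(batch)


failures = process_streams(
    fake_export,
    {'users': [{'record': {'id': 1}}], 'bad': [{'record': {}}]},
)
print(written, list(failures))  # {'users': 1} ['bad']
```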