
Principle:Eventual Inc Daft Data Ingestion Parquet

From Leeroopedia


Knowledge Sources
Domains Data_Engineering, Analytics
Last Updated 2026-02-08 00:00 GMT

Overview

Technique for creating lazy DataFrame scans from Apache Parquet columnar files.

Description

Apache Parquet is a columnar storage format optimized for analytical workloads. Daft reads Parquet files lazily, meaning that no data is read until an action is triggered. This lazy scanning enables critical optimizations:

  • Predicate pushdown: Filters are pushed down to the scan level so only matching row groups are read from disk or remote storage.
  • Projection pushdown: Only the columns referenced in downstream operations are loaded, dramatically reducing I/O for wide tables.
  • Schema inference: The Parquet file metadata is used to automatically infer column names and data types without reading the actual data.

Daft supports reading Parquet files from local paths, S3, GCS, and Azure Blob Storage using glob patterns (e.g., s3://bucket/path/*.parquet). It also supports advanced features like hive-style partitioning, custom schemas, row group selection, and Int96 timestamp coercion.

For distributed execution on Ray, Daft adjusts I/O concurrency automatically: each worker uses single-threaded I/O so that many workers running in parallel do not overwhelm the storage backend with too many concurrent connections.

Usage

Use this technique when loading structured analytical data from Parquet files for distributed processing. Common scenarios include:

  • Reading data lake tables stored in Parquet format
  • Loading partitioned datasets from cloud object stores
  • Ingesting output from other data processing systems (Spark, Pandas, DuckDB, etc.)

Theoretical Basis

Parquet is built on a columnar storage model that stores data column-by-column rather than row-by-row. This design provides several theoretical advantages for analytical queries:

  1. Compression efficiency: Columns of the same type compress better than heterogeneous rows, because similar values cluster together.
  2. I/O minimization: Analytical queries typically read a subset of columns; columnar layout allows skipping unneeded columns entirely.
  3. Predicate pushdown: Row group statistics (min/max values, null counts) stored in Parquet metadata enable skipping entire row groups that cannot match a filter predicate.
  4. Schema evolution: Parquet metadata includes full schema information, allowing readers to handle schema changes gracefully.

Pseudocode:
1. Parse path(s) and resolve glob patterns
2. Read Parquet metadata to infer schema (if infer_schema=True)
3. Construct lazy scan plan with:
   - File format config (row groups, timestamp coercion)
   - Storage config (IO config, multithreading)
4. Apply hive partitioning if enabled
5. Return lazy DataFrame (no data read yet)
6. On action (.collect(), .show()):
   a. Apply projection pushdown (select needed columns)
   b. Apply predicate pushdown (filter row groups)
   c. Read and decode matching data
