Principle: .NET Machine Learning Data Loading
| Knowledge Sources | |
|---|---|
| Domains | Machine Learning, Data Engineering, .NET |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
Data loading and splitting are foundational steps in any machine learning pipeline: loading converts raw files into a lazy, schematized data representation that is read through cursors, and splitting partitions that data into training and evaluation subsets.
Description
Before any model can be trained, raw data must be ingested into a structured format the framework understands. A lazy data view abstraction provides schematized, columnar access to potentially large datasets without loading everything into memory at once. Data is read on demand through cursors, enabling pipelines that operate on datasets larger than available RAM.
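To make the cursor model concrete, here is a minimal sketch against ML.NET's IDataView API; the variable `data` and the column name "Price" are assumptions for illustration, not from the source.

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

// Assumed: an IDataView named `data` with a Single-typed "Price" column.
DataViewSchema.Column priceColumn = data.Schema["Price"];

// Request a cursor over only the columns we need; rows stream on demand,
// so the loader can skip work for inactive columns.
using (DataViewRowCursor cursor = data.GetRowCursor(new[] { priceColumn }))
{
    ValueGetter<float> getPrice = cursor.GetGetter<float>(priceColumn);
    float price = default;
    while (cursor.MoveNext())   // advances to the next row; I/O happens here
    {
        getPrice(ref price);    // reads "Price" for the current row
    }
}
```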
Text loading handles the most common ingestion scenario: reading delimited files (CSV, TSV, or custom separators) and mapping columns to a typed schema. The loader must handle practical concerns such as header rows, quoted fields containing the delimiter character, whitespace trimming, and sparse representations.
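A loading sketch using ML.NET's LoadFromTextFile; the `HousingData` type, its column layout, and the file name are illustrative assumptions, and each flag maps to one of the practical concerns above.

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

var mlContext = new MLContext();

// Lazy: this builds a schematized view over the file; no rows are read yet.
IDataView data = mlContext.Data.LoadFromTextFile<HousingData>(
    "housing.csv",
    separatorChar: ',',     // CSV; use '\t' for TSV or any custom separator
    hasHeader: true,        // skip the header row
    allowQuoting: true,     // quoted fields may contain the delimiter
    trimWhitespace: true,   // strip whitespace around field values
    allowSparse: false);    // set true for sparse-format rows

// Illustrative row type: [LoadColumn] maps file columns to a typed schema.
public class HousingData
{
    [LoadColumn(0)] public float Size;
    [LoadColumn(1)] public float Price;
}
```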
Train-test splitting partitions the loaded data into two disjoint subsets. The training set is used to fit model parameters, while the test set is held out to evaluate generalization. A typical default allocates 90% to training and 10% to testing. Stratified splitting preserves the distribution of a class column across both partitions, which is critical when classes are imbalanced; keyed (grouped) splitting instead guarantees that rows sharing a key value land in the same partition, preventing leakage between correlated rows.
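A split sketch, assuming the `mlContext` and `data` from the loading example; note that ML.NET's `samplingKeyColumnName` argument implements the keyed variant (rows sharing a key stay together) rather than proportional stratification.

```csharp
using Microsoft.ML;

// Assumed: `mlContext` and `data` from the loading sketch above. Creating
// the MLContext with a fixed seed (new MLContext(seed: 0)) makes the
// split reproducible.

// Default allocation: 90% train, 10% test.
var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.1);
IDataView trainSet = split.TrainSet;   // used to fit model parameters
IDataView testSet  = split.TestSet;    // held out for evaluation

// Keyed split: every row sharing a "GroupId" value lands in the same
// partition, preventing leakage between correlated rows ("GroupId" is an
// assumed column name for illustration).
var keyedSplit = mlContext.Data.TrainTestSplit(
    data, testFraction: 0.1, samplingKeyColumnName: "GroupId");
```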
Usage
Use text loading when the source data resides in flat files. Use train-test splitting immediately after loading to establish an unbiased evaluation protocol. Choose the test fraction based on dataset size: smaller datasets may require larger test fractions (0.2-0.3) for reliable estimates, while very large datasets can use smaller fractions (0.05-0.1).
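One way to encode that guidance, assuming the `mlContext` and `data` from the earlier sketches; the thresholds below are illustrative, not fixed rules.

```csharp
long? n = data.GetRowCount();   // may be null when the view is fully lazy

double testFraction;
if (n is null)           testFraction = 0.1;   // unknown size: keep the default
else if (n < 10_000)     testFraction = 0.25;  // small data: larger test share
else if (n < 1_000_000)  testFraction = 0.1;
else                     testFraction = 0.05;  // very large data

var split = mlContext.Data.TrainTestSplit(data, testFraction: testFraction);
```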
Theoretical Basis
Lazy evaluation follows the principle of deferred computation. A data view D represents a virtual table; no rows are materialized until a consumer requests them through a cursor:
```
D = LazyLoad(file, schema)
// No I/O has occurred yet
cursor = D.GetRowCursor(columns)
while cursor.MoveNext():
    row = cursor.GetValue(columns)   // I/O happens here, one row at a time
```
This model supports streaming and avoids the memory cost of full materialization.
Train-test splitting is grounded in the bias-variance tradeoff. Evaluating a model on the same data used for training yields an optimistically biased estimate of performance. By reserving a fraction f of data for testing:
```
n_total = |D|
n_test  = floor(f * n_total)
n_train = n_total - n_test

Error_true  ≈ Error_test    (unbiased if split is random)
Error_train ≤ Error_true    (optimistic bias)
```
Stratified splitting ensures that for a key column K:
```
P(K=k | train) ≈ P(K=k | test) ≈ P(K=k | D)
```
This preserves class proportions and yields more stable evaluation metrics, especially for rare classes.
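A rough sanity check of this property, assuming a boolean Label column and the `mlContext` and `split` from the earlier sketches; `LabelRow` and `PositiveRate` are illustrative names, not library API.

```csharp
using System;
using System.Linq;
using Microsoft.ML;

// Streams a partition and computes the fraction of positive labels,
// i.e. an estimate of P(K=k | partition) for the positive class.
double PositiveRate(IDataView view) =>
    mlContext.Data.CreateEnumerable<LabelRow>(view, reuseRowObject: false)
             .Average(r => r.Label ? 1.0 : 0.0);

Console.WriteLine($"train: {PositiveRate(split.TrainSet):F3}");
Console.WriteLine($"test:  {PositiveRate(split.TestSet):F3}");

// Illustrative row type exposing only the key column.
public class LabelRow
{
    public bool Label;
}
```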