
Principle:Haifengl Smile Matrix Decomposition

From Leeroopedia


Overview

Matrix Decomposition (also called matrix factorization) is the process of expressing a matrix as a product of simpler, structured matrices. Each decomposition reveals different structural properties of the original matrix and enables different computational applications. The five principal decompositions in Smile are LU, QR, SVD, Eigenvalue (EVD), and Cholesky.

Matrix decomposition is the central stage of the Matrix_Decomposition_Pipeline workflow. It transforms a constructed matrix into a factored form that can then be used for solving linear systems, computing determinants, finding eigenvalues, performing dimensionality reduction, and more.

Theoretical Basis

LU Decomposition

The LU decomposition factors a matrix into a lower triangular matrix and an upper triangular matrix with row pivoting:

PA=LU

where P is a permutation matrix, L is unit lower triangular (ones on diagonal), and U is upper triangular.

Properties:

  • Exists for any square matrix (with pivoting)
  • Computational cost: (2/3)n³ flops for an n×n matrix
  • The determinant is det(A) = det(P)⁻¹ ∏ᵢ₌₁ⁿ uᵢᵢ
  • If uᵢᵢ = 0 for some i, the matrix is singular (the factorization still completes, but the factors cannot be used for solving)

Applications: Solving Ax=b, computing determinants, matrix inversion.
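
As a sketch of how pivoted elimination yields the determinant formula above, here is a minimal plain-Java LU with partial pivoting. This is illustrative only; Smile delegates the real work to LAPACK's dgetrf, and the class and method names here are made up:

```java
// Plain-Java LU with partial pivoting (Doolittle scheme).
// Illustrative; not Smile's API. Smile calls LAPACK dgetrf.
class LuSketch {
    // Factors a in place (L below the diagonal, U on and above it)
    // and returns det(A) = det(P)^-1 * u_11 * ... * u_nn.
    static double det(double[][] a) {
        int n = a.length;
        int sign = 1; // det(P) is +/-1, so det(P)^-1 = det(P)
        for (int k = 0; k < n; k++) {
            // Partial pivoting: bring the largest |a[i][k]| to the diagonal.
            int p = k;
            for (int i = k + 1; i < n; i++)
                if (Math.abs(a[i][k]) > Math.abs(a[p][k])) p = i;
            if (p != k) { // a row swap is one more transposition in P
                double[] t = a[p]; a[p] = a[k]; a[k] = t;
                sign = -sign;
            }
            if (a[k][k] == 0) return 0; // singular: u_kk = 0
            for (int i = k + 1; i < n; i++) {
                a[i][k] /= a[k][k];               // multiplier, stored as L
                for (int j = k + 1; j < n; j++)
                    a[i][j] -= a[i][k] * a[k][j]; // update trailing block
            }
        }
        double det = sign;
        for (int i = 0; i < n; i++) det *= a[i][i];
        return det;
    }

    public static void main(String[] args) {
        double[][] a = {{2, 1, 1}, {4, 3, 3}, {8, 7, 9}};
        System.out.println(det(a)); // det of this matrix is 4
    }
}
```

Note that, like Smile's own decompositions, this sketch destroys its input: after the call, `a` holds the L and U factors rather than the original entries.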

QR Decomposition

The QR decomposition factors a matrix into an orthogonal matrix and an upper triangular matrix:

A=QR

where Q ∈ ℝᵐˣⁿ has orthonormal columns (QᵀQ = I) and R ∈ ℝⁿˣⁿ is upper triangular.

Properties:

  • Always exists for any m×n matrix with m ≥ n
  • Computed via Householder reflections (LAPACK dgeqrf): (4/3)n²m − (2/3)n³ flops
  • Q is stored implicitly as a product of Householder reflectors Q = H₁H₂⋯Hₖ, where Hᵢ = I − τᵢvᵢvᵢᵀ
  • The explicit Q can be reconstructed via dorgqr

Applications: Least squares problems min ‖Ax − b‖₂, eigenvalue algorithms (QR algorithm), orthogonalization.
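
To make the least-squares application concrete, here is a compact plain-Java Householder QR that stores the reflector vectors below the diagonal, exactly as described above. The `QrSketch` class and its method names are illustrative, not Smile's API (Smile calls LAPACK dgeqrf/dorgqr):

```java
// Least squares via Householder QR, in plain Java (JAMA-style loops).
// Illustrative; not Smile's implementation.
class QrSketch {
    // Factors a in place: R's strict upper triangle ends up above the
    // diagonal, the Householder vectors below it; R's diagonal is returned.
    static double[] qr(double[][] a) {
        int m = a.length, n = a[0].length;
        double[] rdiag = new double[n];
        for (int k = 0; k < n; k++) {
            double nrm = 0;
            for (int i = k; i < m; i++) nrm = Math.hypot(nrm, a[i][k]);
            if (nrm != 0) {
                if (a[k][k] < 0) nrm = -nrm; // sign choice avoids cancellation
                for (int i = k; i < m; i++) a[i][k] /= nrm;
                a[k][k] += 1.0;
                // Apply the reflector H_k to the remaining columns.
                for (int j = k + 1; j < n; j++) {
                    double s = 0;
                    for (int i = k; i < m; i++) s += a[i][k] * a[i][j];
                    s = -s / a[k][k];
                    for (int i = k; i < m; i++) a[i][j] += s * a[i][k];
                }
            }
            rdiag[k] = -nrm;
        }
        return rdiag;
    }

    // Solves min ||Ax - b||_2 using the factors left in a by qr().
    static double[] solve(double[][] a, double[] rdiag, double[] b) {
        int m = a.length, n = a[0].length;
        double[] y = b.clone();
        for (int k = 0; k < n; k++) { // y = Q^T b, reflector by reflector
            double s = 0;
            for (int i = k; i < m; i++) s += a[i][k] * y[i];
            s = -s / a[k][k];
            for (int i = k; i < m; i++) y[i] += s * a[i][k];
        }
        double[] x = new double[n];   // back-substitute R x = y
        for (int k = n - 1; k >= 0; k--) {
            double s = y[k];
            for (int j = k + 1; j < n; j++) s -= a[k][j] * x[j];
            x[k] = s / rdiag[k];
        }
        return x;
    }

    public static void main(String[] args) {
        // Fit y = c0 + c1*x through (1,1), (2,2), (3,3): the exact line y = x.
        double[][] a = {{1, 1}, {1, 2}, {1, 3}};
        double[] rdiag = qr(a);
        double[] x = solve(a, rdiag, new double[]{1, 2, 3});
        System.out.println(x[0] + " " + x[1]); // c0 ~ 0, c1 ~ 1
    }
}
```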

Singular Value Decomposition (SVD)

The SVD factors any m×n matrix into three matrices:

A = UΣVᵀ

where U ∈ ℝᵐˣᵏ and V ∈ ℝⁿˣᵏ have orthonormal columns, Σ ∈ ℝᵏˣᵏ is diagonal with nonnegative entries σ₁ ≥ σ₂ ≥ ⋯ ≥ σₖ ≥ 0, and k = min(m, n).

Properties:

  • Always exists for any matrix
  • Compact SVD: for m>n, only the first n columns of U are computed
  • The singular values measure the "importance" of each component
  • The matrix rank equals the number of nonzero singular values
  • Condition number: κ(A) = σ₁/σₖ

The Eckart-Young Theorem: The best rank-r approximation to A in the Frobenius or operator norm is:

Aᵣ = ∑ᵢ₌₁ʳ σᵢuᵢvᵢᵀ

Applications: Dimensionality reduction (PCA), pseudoinverse computation, matrix rank determination, data compression, latent semantic analysis.
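
A quick numeric check of the Eckart-Young theorem, using a matrix whose SVD is known by inspection (plain Java; the class name is illustrative, not part of Smile):

```java
// Eckart-Young sanity check: A = diag(3, 2, 1) has U = V = I and
// singular values (3, 2, 1), so its best rank-1 approximation keeps
// only sigma_1 u_1 v_1^T = diag(3, 0, 0).
class EckartYoung {
    static double frobenius(double[][] a) {
        double s = 0;
        for (double[] row : a)
            for (double x : row) s += x * x;
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        double[][] a  = {{3, 0, 0}, {0, 2, 0}, {0, 0, 1}};
        double[][] a1 = {{3, 0, 0}, {0, 0, 0}, {0, 0, 0}}; // best rank-1
        double[][] diff = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++) diff[i][j] = a[i][j] - a1[i][j];
        // Frobenius error = sqrt(sigma_2^2 + sigma_3^2) = sqrt(5)
        System.out.println(frobenius(diff));
    }
}
```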

Eigenvalue Decomposition (EVD)

For a square matrix A ∈ ℝⁿˣⁿ:

Av=λv

where λ is an eigenvalue and v is the corresponding eigenvector.

Symmetric case: If A = Aᵀ, then A = VΛVᵀ

where V is orthogonal and Λ = diag(λ₁, …, λₙ), with all eigenvalues real.

General case: Eigenvalues may be complex. LAPACK computes the real and imaginary parts separately: λᵢ = wrᵢ + j·wiᵢ, where wr and wi are the output arrays of dgeev. Left eigenvectors satisfy vᵀA = λvᵀ.

Properties:

  • Symmetric matrices have real eigenvalues and orthogonal eigenvectors
  • Smile uses LAPACK dsyevd for symmetric and dgeev for general matrices
  • dsyevd uses divide-and-conquer, which is faster than dsyev for large matrices

Applications: PCA (eigendecomposition of covariance matrix), spectral clustering, stability analysis, Markov chain analysis.
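
Power iteration is the simplest way to see Av = λv numerically. Smile itself uses the LAPACK routines listed above, so this plain-Java sketch (with made-up names) is purely illustrative:

```java
// Power iteration for the dominant eigenpair of a symmetric matrix.
// Converges at rate |lambda_2 / lambda_1| per step; illustrative only.
class PowerIteration {
    static double dominantEigenvalue(double[][] a, int iters) {
        int n = a.length;
        double[] v = new double[n];
        v[0] = 1; // arbitrary nonzero start vector
        for (int t = 0; t < iters; t++) {
            double[] w = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) w[i] += a[i][j] * v[j];
            double norm = 0;
            for (double x : w) norm += x * x;
            norm = Math.sqrt(norm);
            for (int i = 0; i < n; i++) v[i] = w[i] / norm;
        }
        // Rayleigh quotient v^T A v approximates the largest eigenvalue.
        double lambda = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) lambda += v[i] * a[i][j] * v[j];
        return lambda;
    }

    public static void main(String[] args) {
        double[][] a = {{2, 1}, {1, 2}}; // eigenvalues 3 and 1
        System.out.println(dominantEigenvalue(a, 100));
    }
}
```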

Cholesky Decomposition

For a symmetric positive definite (SPD) matrix:

A = LLᵀ

where L is lower triangular with positive diagonal elements.

Properties:

  • Exists if and only if A is SPD
  • Computational cost: (1/3)n³ flops -- half the cost of LU
  • If dpotrf returns a nonzero info code, the matrix is not positive definite
  • The determinant is det(A) = (∏ᵢ₌₁ⁿ lᵢᵢ)²

Applications: Efficient solving of SPD systems, Monte Carlo simulation with correlated variables, Kalman filters (unscented transform).
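
A minimal plain-Java Cholesky-Banachiewicz sketch, including the positive-definiteness check and the determinant identity above. Smile calls LAPACK dpotrf; the class name here is hypothetical:

```java
// Cholesky-Banachiewicz: compute lower-triangular L with A = L L^T.
// Illustrative only; Smile delegates to LAPACK dpotrf.
class CholeskySketch {
    static double[][] cholesky(double[][] a) {
        int n = a.length;
        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                double s = a[i][j];
                for (int k = 0; k < j; k++) s -= l[i][k] * l[j][k];
                if (i == j) {
                    // A is SPD iff every pivot here is positive
                    // (the analogue of dpotrf's nonzero info code).
                    if (s <= 0) throw new ArithmeticException("not positive definite");
                    l[i][i] = Math.sqrt(s);
                } else {
                    l[i][j] = s / l[j][j];
                }
            }
        }
        return l;
    }

    public static void main(String[] args) {
        double[][] a = {{4, 2}, {2, 3}};
        double[][] l = cholesky(a);
        // det(A) = (product of diagonal of L)^2 = (2 * sqrt(2))^2 = 8
        double det = 1;
        for (int i = 0; i < a.length; i++) det *= l[i][i];
        det *= det;
        System.out.println(det);
    }
}
```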

Choosing the Right Decomposition

Decomposition | Matrix Requirements          | Best For                                      | Cost
LU            | Square                       | Solving Ax = b, determinants                  | (2/3)n³
Cholesky      | Symmetric positive definite  | Solving SPD systems (2× faster than LU)       | (1/3)n³
QR            | m ≥ n (overdetermined)       | Least squares, numerical stability            | (4/3)n²m
SVD           | Any                          | Rank, pseudoinverse, dimensionality reduction | O(mn·min(m, n))
EVD           | Square (symmetric preferred) | Spectral analysis, PCA                        | O(n³)

Destructive Nature of Decompositions

An important implementation detail: all decompositions in Smile overwrite the input matrix. The caller must make a copy before decomposing if the original matrix is still needed:

// lu() overwrites the matrix it is called on, so decompose a copy
DenseMatrix A_copy = A.copy();
LU lu = A_copy.lu();
// A_copy now holds the LU factors; A still holds the original data

This design avoids unnecessary memory allocation when the original matrix is no longer needed.

Relationship to the Pipeline

Construction --> Arithmetic --> Decomposition --> Solving --> Result Extraction
                                     ^
                                     |
                              (this principle)

Related

Domains

Linear_Algebra, Numerical_Computing, Matrix_Theory
