
Principle:LaurentMazare Tch rs Adam Optimization

From Leeroopedia


Knowledge Sources
Domains: Deep_Learning, Optimization
Last Updated: 2026-02-08 14:00 GMT

Overview

An adaptive gradient-descent optimizer that maintains a per-parameter learning rate using first and second moment estimates of the gradients.

Description

Adam (Adaptive Moment Estimation) combines the benefits of AdaGrad (per-parameter learning rates) and RMSProp (exponential moving average of squared gradients). It maintains two exponential moving averages: the first moment (mean of gradients, controlled by beta1) and the second moment (mean of squared gradients, controlled by beta2). Bias correction compensates for initialization at zero. Adam is the default optimizer for most deep learning tasks due to its robustness to learning rate selection and fast convergence.

Usage

Use Adam as the default optimizer for most training tasks. It works well with default hyperparameters (lr=1e-3, beta1=0.9, beta2=0.999) and requires minimal tuning compared to SGD. Prefer AdamW for tasks requiring weight decay regularization.
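In tch-rs, an Adam optimizer is typically built from a `nn::VarStore` via `nn::Adam::default().build(&vs, 1e-3)` (check the crate documentation for the exact signature in your version). Independently of the bindings, the update rule with the default hyperparameters can be sketched in dependency-free Rust; the toy objective `(x - 3)^2` and the function name `adam_minimize` below are illustrative, not part of tch-rs:

```rust
/// One-dimensional Adam with the defaults quoted on this page
/// (lr = 1e-3, beta1 = 0.9, beta2 = 0.999, eps = 1e-8).
fn adam_minimize(steps: u32) -> f64 {
    let (lr, beta1, beta2, eps) = (1e-3_f64, 0.9_f64, 0.999_f64, 1e-8_f64);
    let mut theta = 0.0_f64; // parameter, initialized away from the optimum at 3.0
    let (mut m, mut v) = (0.0_f64, 0.0_f64);
    for t in 1..=steps {
        let g = 2.0 * (theta - 3.0);           // gradient of f(x) = (x - 3)^2
        m = beta1 * m + (1.0 - beta1) * g;     // first moment (mean of gradients)
        v = beta2 * v + (1.0 - beta2) * g * g; // second moment (mean of squared gradients)
        let m_hat = m / (1.0 - beta1.powi(t as i32)); // bias correction
        let v_hat = v / (1.0 - beta2.powi(t as i32));
        theta -= lr * m_hat / (v_hat.sqrt() + eps);
    }
    theta
}

fn main() {
    // With lr = 1e-3 each step moves theta by roughly 1e-3, so 10,000
    // steps are ample to cover the distance of 3.0 and settle.
    println!("theta after 10000 steps: {:.4}", adam_minimize(10_000));
}
```

Note that the effective step size stays near `lr` while far from the optimum and shrinks as the first moment decays, which is why Adam is comparatively insensitive to the raw gradient scale.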

Theoretical Basis

$$ m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t $$
$$ v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 $$
$$ \hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t} $$
$$ \theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} $$

Default hyperparameters: beta1=0.9, beta2=0.999, eps=1e-8
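One consequence of the bias correction is worth making concrete: with the moments initialized to zero, at t = 1 the corrections recover exactly g and g², so the first update has magnitude close to α regardless of the raw gradient scale. A minimal sketch (the helper name `first_step` is hypothetical, chosen for illustration):

```rust
/// A single Adam step from zero-initialized moments, following the
/// equations above with the default hyperparameters.
fn first_step(g: f64) -> f64 {
    let (alpha, beta1, beta2, eps) = (1e-3_f64, 0.9_f64, 0.999_f64, 1e-8_f64);
    let m = (1.0 - beta1) * g;           // m_1, since m_0 = 0
    let v = (1.0 - beta2) * g * g;       // v_1, since v_0 = 0
    let m_hat = m / (1.0 - beta1);       // bias-corrected: recovers g
    let v_hat = v / (1.0 - beta2);       // bias-corrected: recovers g^2
    alpha * m_hat / (v_hat.sqrt() + eps) // ~= alpha * sign(g)
}

fn main() {
    // Gradients differing by five orders of magnitude produce nearly
    // identical first-step sizes, both close to alpha = 1e-3.
    println!("{:.6} {:.6}", first_step(0.001), first_step(100.0));
}
```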

Related Pages

Implemented By
