Principle: facebookresearch/habitat-lab Policy Network Initialization
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Reinforcement_Learning, Computer_Vision |
| Last Updated | 2026-02-15 02:00 GMT |
Overview
Construction of a visual RL policy network that processes raw sensor observations through a CNN encoder, maintains temporal state via an RNN, and outputs action distributions and value estimates.
Description
Policy Network Initialization creates the neural network architecture for embodied navigation agents. The standard Habitat architecture follows a three-stage design:
- Visual Encoder: A ResNet backbone (typically ResNet-18 or ResNet-50) processes RGB and/or depth images into feature vectors
- State Encoder: A recurrent network (GRU or LSTM) integrates visual features with previous actions and GPS/compass sensors over time
- Action/Value Heads: Linear layers map the hidden state to an action distribution (policy head) and scalar value estimate (critic head)
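The three stages above can be combined into a single module. Below is a minimal, illustrative PyTorch sketch, not the actual habitat-baselines implementation: a tiny CNN stands in for the ResNet backbone, and the sensor dimensions (2-d GPS, 1-d compass, one-hot previous action) and 84x84 input resolution are assumptions.

```python
import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    """Illustrative three-stage policy: CNN encoder -> GRU -> actor/critic heads."""

    def __init__(self, num_actions: int, hidden_size: int = 512):
        super().__init__()
        # Stage 1: visual encoder (a ResNet backbone in Habitat; a tiny CNN here).
        # 4 input channels = 3 RGB + 1 depth.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 4, 84, 84)).shape[1]
        # Stage 2: recurrent state encoder over visual features + GPS (2) +
        # compass (1) + one-hot previous action (num_actions).
        self.rnn = nn.GRUCell(feat_dim + 2 + 1 + num_actions, hidden_size)
        # Stage 3: actor-critic heads.
        self.policy_head = nn.Linear(hidden_size, num_actions)
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, rgb_depth, gps, compass, prev_action, hidden):
        feats = self.encoder(rgb_depth)
        x = torch.cat([feats, gps, compass, prev_action], dim=1)
        hidden = self.rnn(x, hidden)                 # temporal integration
        return self.policy_head(hidden), self.value_head(hidden), hidden
```

The returned hidden state is carried across timesteps by the rollout loop, which is what lets the agent remember where it has been.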
This architecture was introduced in the DD-PPO paper (Wijmans et al., 2019) and has become the standard baseline for embodied navigation tasks.
Usage
Use this principle when creating an RL agent for visual navigation tasks. The policy is instantiated from config during the training initialization phase, before any rollout collection begins.
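Config-driven instantiation can be sketched as a small dispatch over config fields. The `PolicyConfig` fields below are illustrative assumptions for this sketch, not the actual habitat-baselines config schema.

```python
# Hedged sketch of building a policy component from config at training
# initialization time; field names are illustrative, not Habitat's schema.
from dataclasses import dataclass

import torch.nn as nn

@dataclass
class PolicyConfig:
    backbone: str = "resnet18"      # visual encoder variant (resnet18 / resnet50)
    rnn_type: str = "GRU"           # state encoder variant (GRU / LSTM)
    hidden_size: int = 512
    num_recurrent_layers: int = 1

def build_state_encoder(cfg: PolicyConfig, input_size: int) -> nn.Module:
    """Map the config's rnn_type string onto a concrete recurrent module."""
    rnn_cls = {"GRU": nn.GRU, "LSTM": nn.LSTM}[cfg.rnn_type]
    return rnn_cls(input_size, cfg.hidden_size,
                   num_layers=cfg.num_recurrent_layers)
```

Resolving these choices once, before rollout collection, keeps the per-step forward pass free of string dispatch.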
Theoretical Basis
The policy follows the actor-critic framework: a single recurrent hidden state h_t parameterizes both heads,

    π_θ(a_t | h_t)   (policy head),    V_φ(h_t)   (value head),

where the hidden state is computed recurrently from the encoded observation o_t, the previous action a_{t-1}, and the previous hidden state:

    h_t = RNN([Enc(o_t); a_{t-1}], h_{t-1})
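The recurrent update of the hidden state can be made concrete with one GRU step in NumPy. This is a sketch under one common gate convention (biases omitted for brevity); the weight shapes are assumptions for the example.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One recurrent update h_t = GRU(x_t, h_{t-1}); biases omitted."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)             # update gate
    r = sig(Wr @ x + Ur @ h)             # reset gate
    n = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * n + z * h         # blend of candidate and old state

rng = np.random.default_rng(0)
d_x, d_h = 3, 4                          # input and hidden sizes (toy values)
weights = [rng.standard_normal((d_h, d)) for d in (d_x, d_h) * 3]
h = np.zeros(d_h)                        # h_0: zero-initialized hidden state
x = rng.standard_normal(d_x)             # x_t: encoded observation + prev action
h = gru_step(x, h, *weights)
```

The gates let the network decide, per dimension, how much of the previous hidden state to keep versus overwrite, which is what makes long-horizon memory trainable.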
Architecture pseudo-code:

```python
# Abstract policy architecture (pseudo-code)
visual_features = ResNetEncoder(rgb_observation, depth_observation)
combined = concatenate(visual_features, gps, compass, prev_action)
hidden_state = RNN(combined, prev_hidden_state)
action_distribution = PolicyHead(hidden_state)   # actor
value_estimate = ValueHead(hidden_state)         # critic
```
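The two heads at the end of the pseudo-code can be shown concretely. A minimal PyTorch sketch (layer names and sizes are illustrative) of sampling a discrete action and recovering the quantities PPO needs:

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

hidden_size, num_actions = 512, 4
policy_head = nn.Linear(hidden_size, num_actions)  # actor
value_head = nn.Linear(hidden_size, 1)             # critic

hidden_state = torch.randn(8, hidden_size)         # batch of 8 agents
dist = Categorical(logits=policy_head(hidden_state))
action = dist.sample()                             # one discrete action per agent
log_prob = dist.log_prob(action)                   # used in the PPO surrogate loss
value = value_head(hidden_state).squeeze(-1)       # used for advantage estimation
```

Logits (rather than normalized probabilities) are passed to `Categorical` for numerical stability; the log-probability and value are stored alongside the action during rollout collection.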