Principle: DeepSeek AI Janus JanusFlow Model Loading
| Knowledge Sources | |
|---|---|
| Domains | Multimodal_AI, Model_Loading |
| Last Updated | 2026-02-10 09:30 GMT |
Overview
A procedure for loading the JanusFlow multimodal model, its processor, and the external SDXL VAE decoder required for rectified flow image generation.
Description
JanusFlow model loading is distinct from standard Janus loading because it requires three separate model components:
- MultiModalityCausalLM (JanusFlow variant): Contains the LLM backbone, understanding encoder (CLIPVisionTower), and flow generation components (ShallowUViTEncoder/Decoder, linear aligners, RMSNorm)
- VLChatProcessor (JanusFlow variant): Processor with tokenizer and image processor, includes the image_gen_tag property
- AutoencoderKL (SDXL VAE): External Stable Diffusion XL VAE for decoding continuous latents to pixels
The JanusFlow model differs architecturally from standard Janus: instead of VQ-VAE discrete tokenization, it uses continuous latent representations with ShallowUViT encoder/decoder blocks and linear aligners for dimension matching.
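The three-component loading described above can be sketched as follows. This is a minimal sketch, assuming the DeepSeek `janus` package is installed and using assumed repository ids (`deepseek-ai/JanusFlow-1.3B`, `stabilityai/sdxl-vae`); adjust them to your actual checkpoints.

```python
# Sketch: load the three JanusFlow components (model, processor, SDXL VAE).
# Repo ids below are assumptions; substitute your own paths if they differ.
JANUSFLOW_REPO = "deepseek-ai/JanusFlow-1.3B"  # assumed HF repo id
SDXL_VAE_REPO = "stabilityai/sdxl-vae"         # assumed external SDXL VAE


def load_janusflow(device: str = "cuda"):
    """Load the JanusFlow LM, its chat processor, and the SDXL VAE,
    all in bfloat16 on the same device."""
    import torch
    from diffusers import AutoencoderKL
    from transformers import AutoModelForCausalLM
    from janus.janusflow.models import VLChatProcessor

    # Processor: tokenizer + image processor (exposes image_gen_tag).
    processor = VLChatProcessor.from_pretrained(JANUSFLOW_REPO)

    # Model: MultiModalityCausalLM (JanusFlow variant) via trust_remote_code.
    model = AutoModelForCausalLM.from_pretrained(
        JANUSFLOW_REPO, trust_remote_code=True
    )
    model = model.to(torch.bfloat16).to(device).eval()

    # External SDXL VAE for decoding continuous latents to pixels.
    vae = AutoencoderKL.from_pretrained(SDXL_VAE_REPO)
    vae = vae.to(torch.bfloat16).to(device).eval()

    return model, processor, vae
```

The heavy imports live inside the function so the module can be imported without the `janus` or `diffusers` dependencies present.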
Usage
Use this principle at the beginning of any JanusFlow rectified flow generation pipeline. All three components (model, processor, VAE) must be loaded, and the model and VAE must be placed on the same device in bfloat16 precision; the processor is CPU-side and has no device placement.
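A co-location mismatch between model and VAE typically surfaces only later, as a device or dtype error mid-generation. A hypothetical helper (not part of the JanusFlow API) can fail fast right after loading:

```python
def check_same_device_and_dtype(model, vae):
    """Assert that model and VAE are co-located in bfloat16.

    Hypothetical sanity-check helper; `model` and `vae` are any
    torch.nn.Module instances with parameters.
    """
    import torch

    p = next(model.parameters())
    q = next(vae.parameters())
    if p.device != q.device:
        raise RuntimeError(f"model on {p.device}, but VAE on {q.device}")
    if p.dtype != torch.bfloat16 or q.dtype != torch.bfloat16:
        raise RuntimeError("model and VAE must both be in bfloat16")
```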
Theoretical Basis
JanusFlow introduces a rectified flow generation paradigm:
- Instead of discrete VQ tokens, it operates in continuous latent space
- The ShallowUViTEncoder encodes noisy latents + timestep into LLM-compatible embeddings
- The ShallowUViTDecoder decodes LLM hidden states back to velocity predictions
- Linear aligners bridge between UViT dimension (768) and LLM dimension (2048)
- The final SDXL VAE converts continuous latents to pixel images
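The sampling loop implied by the bullets above can be illustrated with a toy NumPy sketch (not the JanusFlow implementation): Euler integration of dz/dt = v(z, t) from noise at t=0 toward data at t=1, where in the real pipeline `velocity_fn` would be the LLM plus ShallowUViTDecoder producing velocity predictions.

```python
import numpy as np


def euler_rectified_flow(z0, velocity_fn, steps=30):
    """Integrate dz/dt = velocity_fn(z, t) from t=0 to t=1 with Euler steps."""
    z = np.asarray(z0, dtype=np.float64)
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        z = z + dt * velocity_fn(z, t)  # one Euler step along the predicted velocity
    return z


# Toy check: with the ideal rectified-flow velocity v = z1 - z0 (a straight
# path from noise z0 to target z1), Euler integration recovers z1 exactly.
z0 = np.zeros(4)
z1 = np.array([1.0, -2.0, 3.0, 0.5])
out = euler_rectified_flow(z0, lambda z, t: z1 - z0, steps=10)
```

The straight-line velocity field is what rectified flow trains toward; with a learned, imperfect velocity model the step count trades speed against integration error.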