

Implementation:AUTOMATIC1111 Stable diffusion webui Parse prompt attention

From Leeroopedia


Knowledge Sources
Domains Diffusion Models, Natural Language Processing, Prompt Engineering
Last Updated 2026-02-08 00:00 GMT

Overview

A concrete tool for parsing prompt strings with attention-weighting syntax into structured text-weight pairs, provided by the AUTOMATIC1111 stable-diffusion-webui repository.

Description

The parse_prompt_attention function accepts a raw prompt string containing parenthesized attention tokens and returns a list of [text, weight] pairs. It handles:

  • Round brackets (text) -- increases weight by multiplying by 1.1
  • Round brackets with explicit weight (text:1.5) -- sets weight to the specified float value
  • Square brackets [text] -- decreases weight by multiplying by 1/1.1
  • Nested brackets -- multiplicative stacking of weights
  • Escape sequences \(, \), \[, \], \\ -- literal characters
  • BREAK keyword -- inserts a chunk boundary marker with weight -1
  • Merging runs -- adjacent segments with identical weights are merged into a single entry

The function uses the precompiled regex re_attention to tokenize the input, then iterates over the matches while maintaining one stack per bracket type. Each stack records the output position at which a bracket opened, so that closing the bracket (or hitting an explicit :weight) terminator) multiplies the weights of every segment emitted since the opening. Any opening bracket left unclosed at the end of the prompt applies its multiplier to everything after it.
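The stack-based mechanism can be sketched as follows. This is a simplified, illustrative reimplementation, not the webui source: the tokenizer regex and merging loop are reconstructed from the behavior described above, and edge cases of the real re_attention may differ.

```python
import re

# Documented multipliers: (text) scales by 1.1, [text] by 1/1.1.
ROUND_MULT = 1.1
SQUARE_MULT = 1 / 1.1

# Simplified tokenizer: escape sequences, brackets, ":<number>)" weight
# terminators, and runs of plain text.
_re_attention = re.compile(
    r"\\\(|\\\)|\\\[|\\]|\\\\|\\|\(|\[|:\s*([+-]?[.\d]+)\s*\)|\)|]|[^\\()\[\]:]+|:"
)
_re_break = re.compile(r"\s*\bBREAK\b\s*")


def parse_attention_sketch(text):
    res = []
    round_brackets = []   # output positions where each '(' opened
    square_brackets = []  # output positions where each '[' opened

    def multiply_range(start, multiplier):
        for p in range(start, len(res)):
            res[p][1] *= multiplier

    for m in _re_attention.finditer(text):
        tok = m.group(0)
        weight = m.group(1)
        if tok.startswith("\\"):
            res.append([tok[1:], 1.0])            # escaped literal character
        elif tok == "(":
            round_brackets.append(len(res))
        elif tok == "[":
            square_brackets.append(len(res))
        elif weight is not None and round_brackets:
            multiply_range(round_brackets.pop(), float(weight))
        elif tok == ")" and round_brackets:
            multiply_range(round_brackets.pop(), ROUND_MULT)
        elif tok == "]" and square_brackets:
            multiply_range(square_brackets.pop(), SQUARE_MULT)
        else:
            # plain text; split out BREAK keywords as weight -1 markers
            for i, part in enumerate(_re_break.split(tok)):
                if i > 0:
                    res.append(["BREAK", -1])
                if part:
                    res.append([part, 1.0])

    # unbalanced opening brackets apply to everything after them
    for pos in round_brackets:
        multiply_range(pos, ROUND_MULT)
    for pos in square_brackets:
        multiply_range(pos, SQUARE_MULT)

    if not res:
        return [["", 1.0]]

    # merge adjacent segments that ended up with identical weights
    i = 0
    while i + 1 < len(res):
        if res[i][1] == res[i + 1][1]:
            res[i][0] += res[i + 1][0]
            del res[i + 1]
        else:
            i += 1
    return res
```

Under these assumptions the sketch reproduces the documented behavior, e.g. parse_attention_sketch('an (important) word') yields [['an ', 1.0], ['important', 1.1], [' word', 1.0]], and '(unbalanced' yields [['unbalanced', 1.1]].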

Usage

This function is called early in the text-to-image pipeline, before tokenization by CLIP, to decompose the user's prompt into weighted segments. It is used by get_learned_conditioning and get_multicond_learned_conditioning in the prompt parser module.
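Downstream, each segment's weight is eventually applied to the token embeddings produced by CLIP (the webui additionally rescales the result to preserve the original mean). A minimal sketch of that data flow, in which apply_weights and embed are hypothetical names rather than repository functions, and embed stands in for a real tokenizer-plus-embedding lookup:

```python
def apply_weights(pairs, embed):
    """Scale each segment's (stand-in) embedding vector by its weight.

    pairs: [[text, weight], ...] as returned by parse_prompt_attention.
    embed: callable mapping a text segment to a list of floats;
           a placeholder for a real tokenizer + embedding lookup.
    BREAK markers (weight -1) delimit chunks and carry no embedding.
    """
    chunks, current = [], []
    for text, weight in pairs:
        if text == "BREAK" and weight == -1:
            chunks.append(current)   # close the current chunk at the boundary
            current = []
            continue
        current.append([v * weight for v in embed(text)])
    chunks.append(current)
    return chunks
```

For example, with a toy embed = lambda t: [1.0, 2.0], the pairs [['a', 1.0], ['BREAK', -1], ['b', 2.0]] produce two chunks, the second containing the scaled vector [2.0, 4.0].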

Code Reference

Source Location

Signature

def parse_prompt_attention(text):
    """
    Parses a string with attention tokens and returns a list of pairs: text and its associated weight.
    Accepted tokens are:
      (abc) - increases attention to abc by a multiplier of 1.1
      (abc:3.12) - increases attention to abc by a multiplier of 3.12
      [abc] - decreases attention to abc by a multiplier of 1.1
      \( - literal character '('
      \[ - literal character '['
      \) - literal character ')'
      \] - literal character ']'
      \\ - literal character '\'
      anything else - just text
    """

Import

from modules.prompt_parser import parse_prompt_attention

I/O Contract

Inputs

Name Type Required Description
text str Yes Raw prompt string containing optional attention weighting syntax such as (word:1.3) or [word]

Outputs

Name Type Description
return list[list] A list of two-element [text, weight] lists, where text is a str segment and weight is its computed float attention weight. The BREAK keyword produces ['BREAK', -1] entries marking chunk boundaries.
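The contract above can be spot-checked mechanically. A minimal sketch, where check_contract is a hypothetical helper and not part of the repository:

```python
def check_contract(pairs):
    """Check a parse_prompt_attention return value against the documented
    I/O contract: a non-empty list of two-element [text, weight] lists,
    with text a str and weight a number (BREAK markers carry -1)."""
    assert isinstance(pairs, list) and pairs, "non-empty list expected"
    for item in pairs:
        assert isinstance(item, list) and len(item) == 2, "expected [text, weight]"
        text, weight = item
        assert isinstance(text, str), "text segment must be str"
        assert isinstance(weight, (int, float)), "weight must be numeric"
    return True
```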

Usage Examples

Basic Usage

from modules.prompt_parser import parse_prompt_attention

# Simple text with no weighting
result = parse_prompt_attention('normal text')
# Returns: [['normal text', 1.0]]

# Parenthesized emphasis
result = parse_prompt_attention('an (important) word')
# Returns: [['an ', 1.0], ['important', 1.1], [' word', 1.0]]

# Explicit weight specification
result = parse_prompt_attention('a (cat:1.5) on a hill')
# Returns: [['a ', 1.0], ['cat', 1.5], [' on a hill', 1.0]]

# Square bracket de-emphasis
result = parse_prompt_attention('photo of [blurry] landscape')
# Returns: [['photo of ', 1.0], ['blurry', 0.9090909090909091], [' landscape', 1.0]]

# Nested parentheses with mixed syntax; note that one '(' in the house
# group is never closed, so its 1.1 multiplier applies to everything after it
result = parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).')
# Returns:
# [['a ', 1.0],
#  ['house', 1.5730000000000004],
#  [' ', 1.1],
#  ['on', 1.0],
#  [' a ', 1.1],
#  ['hill', 0.55],
#  [', sun, ', 1.1],
#  ['sky', 1.4641000000000006],
#  ['.', 1.1]]
