
Principle:OpenHands Runtime Provisioning

From Leeroopedia
Knowledge Sources
Domains Distributed_Systems, Conversation_Management
Last Updated 2026-02-11 21:00 GMT

Overview

Runtime provisioning is the process of creating an isolated, sandboxed execution environment (typically a remote container) for an agent conversation, configured with the appropriate LLM backends, tool registries, and security boundaries.

Description

Each agent conversation requires a dedicated execution environment where the agent can run code, access files, and interact with external services. Runtime provisioning handles the lifecycle of creating these environments on demand. The key challenges it addresses are:

  • Isolation -- Each conversation runs in its own container so that code execution, file system mutations, and network access from one conversation cannot interfere with another.
  • Configuration injection -- The runtime must be configured with the correct LLM provider credentials, agent class, security policies, and resource limits before the agent loop begins.
  • Resource management -- Provisioned runtimes consume cluster resources (CPU, memory, network), so the provisioning layer must track allocations and enforce quotas.
  • Failure recovery -- If provisioning fails partway through (e.g., container image pull timeout), the system must clean up partial resources and surface a clear error.
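The failure-recovery challenge above can be sketched with Python's `contextlib.ExitStack`, which unwinds registered cleanups in reverse order when any step raises. This is an illustrative pattern, not OpenHands' actual provisioning code; the `(allocate, release)` step shape is an assumption for the sketch.

```python
from contextlib import ExitStack

def provision(steps):
    """Run provisioning steps in order. If any step fails, release
    everything allocated so far in reverse order, then re-raise.

    Each step is an (allocate, release) pair of callables; these
    names are illustrative, not part of any real OpenHands API.
    """
    with ExitStack() as stack:
        acquired = []
        for allocate, release in steps:
            resource = allocate()
            # Register cleanup immediately, so a later failure
            # (e.g. an image pull timeout) releases this resource.
            stack.callback(release, resource)
            acquired.append(resource)
        # All steps succeeded: detach the cleanups so the
        # resources survive past the with-block.
        stack.pop_all()
        return acquired
```

On success, `pop_all()` cancels the pending cleanups; on failure, the `with` block runs them LIFO, which mirrors the "clean up partial resources and surface a clear error" requirement.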

Usage

Use runtime provisioning whenever:

  • A new conversation is being started and needs a fresh execution sandbox.
  • A conversation is being migrated to a different cluster node and requires a new runtime on that node.
  • The runtime configuration (LLM model, agent type, security policy) has changed and the existing runtime must be replaced.
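One way to decide the third case, replacing a runtime when its configuration changes, is to fingerprint the runtime-relevant settings and compare against the fingerprint recorded at provisioning time. A minimal sketch, assuming the settings are a dict and the relevant field names (`llm_model`, `agent_name`, `security_policy`) follow the list above:

```python
import hashlib
import json

# Fields that, when changed, require a replacement runtime
# (illustrative names, taken from the list of triggers above).
RELEVANT_FIELDS = ("llm_model", "agent_name", "security_policy")

def config_fingerprint(settings: dict) -> str:
    """Stable hash of the runtime-relevant settings."""
    relevant = {k: settings[k] for k in RELEVANT_FIELDS}
    blob = json.dumps(relevant, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def needs_replacement(existing_fingerprint: str, settings: dict) -> bool:
    """True if the existing runtime no longer matches the settings."""
    return existing_fingerprint != config_fingerprint(settings)
```

`sort_keys=True` keeps the hash stable regardless of dict insertion order, so only genuine configuration changes trigger a replacement.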

Theoretical Basis

Runtime provisioning follows the factory pattern combined with dependency injection: a factory method creates the runtime object, and configuration is injected into it before it becomes operational.
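The factory-plus-injection shape can be shown in a few lines of Python. This is a structural sketch only; the class and field names are invented for illustration and do not correspond to OpenHands' real types:

```python
from dataclasses import dataclass, field

@dataclass
class Runtime:
    conversation_id: str
    llm_config: dict = field(default_factory=dict)
    agent_class: str = ""
    ready: bool = False

class RuntimeFactory:
    """Factory method: creates a bare, non-operational runtime."""
    def create(self, conversation_id: str) -> Runtime:
        return Runtime(conversation_id=conversation_id)

def make_operational(factory, conversation_id, llm_config, agent_class):
    rt = factory.create(conversation_id)  # factory method
    rt.llm_config = llm_config            # dependency injection...
    rt.agent_class = agent_class          # ...before the runtime
    rt.ready = True                       # becomes operational
    return rt
```

Separating creation from configuration is what lets the same factory serve different LLM providers and agent classes without subclassing.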

Pseudocode:

async function provision_runtime(conversation_id, user_id, settings):
    # Step 1: Resolve and validate provider credentials
    provider_handler = resolve_provider(settings, user_id)
    validate_credentials(provider_handler)

    # Step 2: Determine agent and LLM configuration
    agent_class = lookup_agent(settings.agent_name)
    llm_config = build_llm_config(settings, provider_handler)

    # Step 3: Create the container runtime
    runtime = create_remote_container(
        conversation_id=conversation_id,
        image=settings.sandbox_image,
        resource_limits=settings.resource_limits,
    )

    try:
        # Step 4: Wait for runtime readiness
        await runtime.wait_until_ready(timeout=PROVISION_TIMEOUT)

        # Step 5: Inject configuration
        runtime.set_llm_config(llm_config)
        runtime.set_agent(agent_class)
        runtime.set_security_policy(settings.security_policy)
    except failure:
        # Release the partially provisioned container, then
        # surface a clear error (idempotent cleanup)
        destroy_container(runtime)
        raise

    return runtime

Key invariants:

  • One runtime per conversation -- Each conversation ID maps to exactly one runtime. Attempting to provision a second runtime for the same conversation must either fail or replace the existing one.
  • Credential freshness -- Provider tokens must be validated (and refreshed if necessary) before being injected into the runtime, because the runtime may run for hours.
  • Idempotent cleanup -- If any step fails, all previously allocated resources for that provisioning attempt must be released.
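The one-runtime-per-conversation invariant can be enforced with a small registry keyed by conversation ID. A sketch under the assumption that registration is the single point where runtimes become visible; the class and method names are illustrative, not OpenHands API:

```python
import threading

class RuntimeRegistry:
    """Maps each conversation ID to exactly one runtime.

    A second registration for the same ID either fails or,
    with replace=True, swaps in the new runtime and returns
    the old one for teardown.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._runtimes = {}

    def register(self, conversation_id, runtime, replace=False):
        with self._lock:
            if conversation_id in self._runtimes and not replace:
                raise ValueError(
                    f"runtime already provisioned for {conversation_id}"
                )
            old = self._runtimes.get(conversation_id)
            self._runtimes[conversation_id] = runtime
            # Caller is responsible for tearing down the old runtime.
            return old
```

Holding the lock across check-and-insert closes the race where two concurrent provisioning attempts for the same conversation both see no existing runtime.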

Related Pages

Implemented By
