
Principle:BerriAI Litellm Model Selection

From Leeroopedia
Knowledge Sources BerriAI/litellm repository
Domains LLM Integration, Provider Resolution, Routing
Last Updated 2026-02-15

Overview

Model selection is the process of resolving a human-readable model identifier into the specific provider, endpoint, and credentials needed to service a request.

Description

In a multi-provider LLM ecosystem, a single model name string must encode enough information to determine which provider handles the request, what the provider-native model identifier is, and which API key and base URL to use. Model selection is the principle of parsing a unified model notation (such as "azure/gpt-4" or "anthropic/claude-3-opus-20240229") and resolving it to a concrete provider routing tuple.

This principle addresses the fundamental challenge of provider multiplexing: a caller should be able to switch between OpenAI, Anthropic, Azure, Bedrock, Vertex AI, or any other provider simply by changing the model string, without altering any other part of their code.
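Provider multiplexing can be illustrated with a minimal dispatch sketch. The handler table and `complete` function below are illustrative stand-ins, not litellm's actual API surface; the point is that only the model string changes between providers:

```python
# Illustrative sketch of provider multiplexing: the call shape stays
# identical across providers; only the model string changes.
def complete(model: str, prompt: str) -> str:
    # Parse the optional "provider/" prefix; default to "openai" if absent.
    if "/" in model:
        provider, native_model = model.split("/", 1)
    else:
        provider, native_model = "openai", model
    # Dispatch to a provider-specific handler (stubs for illustration).
    handlers = {
        "openai": lambda m, p: f"[openai:{m}] {p}",
        "anthropic": lambda m, p: f"[anthropic:{m}] {p}",
        "azure": lambda m, p: f"[azure:{m}] {p}",
    }
    return handlers[provider](native_model, prompt)

# Switching providers requires changing only the model string:
complete("anthropic/claude-3-opus-20240229", "hello")
complete("azure/gpt-4", "hello")
```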

Usage

Apply model selection logic whenever:

  • A model string uses a provider/model compound notation.
  • The system must determine the correct provider handler from an ambiguous model name.
  • Custom or self-hosted endpoints must be routed based on an api_base URL.
  • The router or load balancer needs to resolve model aliases to concrete provider identifiers.
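The alias case in the last bullet can be sketched as a simple lookup layer in front of the resolver. The alias names and map below are hypothetical examples, not part of litellm:

```python
# Hypothetical alias map: caller-facing deployment names resolve to
# concrete provider/model strings before normal provider resolution runs.
ALIAS_MAP = {
    "gpt-4-prod": "azure/gpt-4",
    "claude-fast": "anthropic/claude-3-haiku-20240307",
}

def resolve_alias(model: str) -> str:
    """Resolve a router alias to a concrete provider/model string.

    Unknown names pass through unchanged, so non-aliased model strings
    continue down the normal resolution chain.
    """
    return ALIAS_MAP.get(model, model)
```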

Theoretical Basis

Model selection is a form of name resolution analogous to DNS or service discovery. The core algorithm follows a priority chain:

1. Explicit Provider Override

If a caller supplies a custom_llm_provider, this takes highest precedence and bypasses all parsing.
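The override precedence can be sketched as follows. The `custom_llm_provider` parameter name follows the text above; the surrounding function is an illustrative sketch, not litellm's implementation:

```python
from typing import Optional, Tuple

def select_provider(model: str,
                    custom_llm_provider: Optional[str] = None) -> Tuple[str, str]:
    """Return (model_name, provider), honoring an explicit override first."""
    if custom_llm_provider is not None:
        # Highest precedence: trust the caller and bypass all parsing.
        return model, custom_llm_provider
    if "/" in model:
        # Fall through to slash-delimited parsing (step 2).
        prefix, name = model.split("/", 1)
        return name, prefix
    raise ValueError(f"cannot determine a provider for {model!r}")
```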

2. Slash-Delimited Parsing

The model string is split on the first / character. The prefix is tested against the known provider registry.

# Slash-based provider resolution (runnable Python sketch)
def resolve_provider(model_string, provider_registry):
    """Split on the first '/' and test the prefix against known providers."""
    if "/" in model_string:
        prefix, model_name = model_string.split("/", 1)
        if prefix in provider_registry:
            return (model_name, prefix)
    # No recognized prefix: leave the provider undetermined for later steps.
    return (model_string, None)

3. Known-Endpoint Matching

If an api_base URL is provided, it is compared against a list of known provider endpoints. A match overrides any prefix-based resolution.

# Endpoint-based resolution (runnable Python sketch);
# endpoint_registry maps known endpoint substrings to provider names.
def resolve_by_endpoint(api_base, endpoint_registry):
    """Match a caller-supplied api_base against known provider endpoints."""
    for endpoint, provider in endpoint_registry.items():
        # A substring match on the URL overrides any prefix-based result.
        if endpoint in api_base:
            return provider
    return None

4. Model-List Lookup

If no provider is determined, the model string is searched against registered model lists for each provider (e.g., known OpenAI models, known Anthropic models, known Cohere models).
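This lookup can be sketched with per-provider registries. The model lists below are small illustrative samples, not litellm's full registries:

```python
# Illustrative per-provider registries of known model names (samples only).
KNOWN_MODELS = {
    "openai": {"gpt-4", "gpt-4o", "gpt-3.5-turbo"},
    "anthropic": {"claude-3-opus-20240229", "claude-3-haiku-20240307"},
    "cohere": {"command-r", "command-r-plus"},
}

def lookup_provider(model: str):
    """Return the first provider whose known-model list contains `model`."""
    for provider, models in KNOWN_MODELS.items():
        if model in models:
            return provider
    return None  # unresolved: fall through to step 5
```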

5. Fallback

If no resolution succeeds, the system raises an error indicating the model cannot be mapped to any known provider.

Key Property: The resolution is deterministic and idempotent; the same inputs always yield the same provider tuple, enabling caching and predictable routing.
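That determinism is what makes memoizing resolution safe. A minimal sketch (an illustrative resolver, not litellm internals):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def resolve(model_string: str):
    """Pure resolver: identical inputs always yield the same tuple,
    so results can be cached without invalidation concerns."""
    prefix, _, name = model_string.partition("/")
    if not name:
        raise ValueError(f"unrecognized model: {model_string!r}")
    return (name, prefix)

# Repeated calls with the same string are served from the cache.
resolve("azure/gpt-4")
resolve("azure/gpt-4")
```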
