Heuristic:CrewAIInc CrewAI LLM Provider Message Workarounds
| Knowledge Sources | |
|---|---|
| Domains | LLM_Integration, Debugging |
| Last Updated | 2026-02-11 17:00 GMT |
Overview
Provider-specific message formatting workarounds for Anthropic (requires user-first messages), Mistral (requires user/tool-last messages), and Ollama (no assistant-final messages).
Description
Different LLM providers enforce different constraints on the message array structure that are not always documented in their official API references. CrewAI's LLM layer auto-detects the provider from the model string and applies provider-specific message transformations. These workarounds inject placeholder messages to satisfy each provider's constraints without altering the semantic content of the conversation.
Usage
Apply this heuristic when debugging unexpected LLM API errors related to message formatting, or when adding support for a new LLM provider. If you see errors about message role ordering, check whether the provider has specific constraints that require workaround messages.
The Insight (Rule of Thumb)
- Anthropic: Messages must start with `"user"` role. If the first message is `"system"` or the array is empty, inject `{"role": "user", "content": "."}` at the beginning.
- Mistral: The last message must have role `"user"` or `"tool"`. If the last message is `"assistant"`, append `{"role": "user", "content": "Please continue."}`.
- Ollama: The last message must not be `"assistant"`. If it is, append `{"role": "user", "content": ""}` (empty string).
- Trade-off: These placeholder messages add minimal token overhead but prevent API errors that would otherwise halt execution entirely.
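The three rules above can be sketched as a single normalization pass. This is a hypothetical standalone helper for illustration (`apply_provider_workarounds` is not CrewAI's actual method name; CrewAI implements equivalent logic inside its LLM class):

```python
def apply_provider_workarounds(model: str, messages: list[dict]) -> list[dict]:
    """Sketch: inject placeholder messages to satisfy provider ordering rules."""
    model_l = model.lower()

    # Mistral: last message must have role 'user' or 'tool'
    if "mistral" in model_l:
        if messages and messages[-1]["role"] == "assistant":
            return [*messages, {"role": "user", "content": "Please continue."}]
        return messages

    # Ollama: last message must not be 'assistant'
    if "ollama" in model_l and messages and messages[-1]["role"] == "assistant":
        return [*messages, {"role": "user", "content": ""}]

    # Anthropic: first message must have role 'user'
    is_anthropic = model_l.startswith(("anthropic/", "claude-", "claude/"))
    if is_anthropic and (not messages or messages[0]["role"] == "system"):
        return [{"role": "user", "content": "."}, *messages]

    return messages
```

Note that each branch returns immediately, mirroring the early-return structure of the original: a model string matches at most one provider's workaround.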
Reasoning
Each LLM provider's API enforces implicit constraints on message ordering that OpenAI's more permissive API does not. Anthropic's Messages API requires the conversation to begin with a user turn, so a leading system message (or an empty array) gets a placeholder user message prepended. Mistral requires conversations to end with a user or tool turn (likely an artifact of its instruction-following template). Ollama's local inference engine similarly rejects conversations that end on an assistant turn. These constraints were discovered empirically during integration and are not always prominently documented. The Ollama fix carries a TODO referencing an upstream LiteLLM PR (#10917) that would make the workaround unnecessary once merged.
Code Evidence
All workarounds from `lib/crewai/src/crewai/llm.py:2082-2107`:
```python
# Handle Mistral models - they require the last message to have a role of 'user' or 'tool'
if "mistral" in self.model.lower():
    if messages and messages[-1]["role"] == "assistant":
        return [*messages, {"role": "user", "content": "Please continue."}]
    return messages

# TODO: Remove this code after merging PR https://github.com/BerriAI/litellm/pull/10917
# Ollama doesn't supports last message to be 'assistant'
if (
    "ollama" in self.model.lower()
    and messages
    and messages[-1]["role"] == "assistant"
):
    return [*messages, {"role": "user", "content": ""}]

# Handle Anthropic models
if not self.is_anthropic:
    return messages

# Anthropic requires messages to start with 'user' role
if not messages or messages[0]["role"] == "system":
    return [{"role": "user", "content": "."}, *messages]
```
Anthropic provider detection from `lib/crewai/src/crewai/llm.py:187`:
```python
ANTHROPIC_PREFIXES: Final[tuple[str, str, str]] = ("anthropic/", "claude-", "claude/")
```
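A plausible way to use such a prefix tuple is a single `str.startswith` call, which accepts a tuple of prefixes. This is an illustrative sketch (`is_anthropic_model` is a hypothetical name; the exact detection check in CrewAI may differ, e.g. in case handling):

```python
ANTHROPIC_PREFIXES: tuple[str, str, str] = ("anthropic/", "claude-", "claude/")

def is_anthropic_model(model: str) -> bool:
    # str.startswith accepts a tuple: True if any prefix matches
    return model.lower().startswith(ANTHROPIC_PREFIXES)
```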