
Principle:HKUDS AI Trader LLM Invocation

From Leeroopedia


Knowledge Sources
Domains LLM_Agents, Inference
Last Updated 2026-02-09 14:00 GMT

Overview

An inference pattern that sends conversation messages to a LangChain agent and receives a response that includes the assistant's reply and any tool-call results.

Description

LLM Invocation is the core inference step where conversation messages are sent to the LangChain agent runtime via the Runnable interface's ainvoke() method. The agent processes the messages, optionally invokes MCP tools, and returns a response containing the assistant's reply along with any tool call results. The invocation respects a recursion limit to prevent infinite tool-calling loops.

This is the fundamental building block of the agent's reasoning loop: each invocation may produce a direct response or trigger tool calls that are then fed back as additional context.

Usage

Use this principle within the agent's trading-session loop whenever a new message (a user query or a tool-result continuation) needs to be processed by the LLM. Each invocation may result in the agent calling tools, generating analysis, or emitting the finish signal.
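The usage described above can be sketched as a minimal session loop. The stub agent, the FINISH marker, and the helper names below are illustrative assumptions rather than the project's actual API; in a real deployment a LangChain agent would take the stub's place.

```python
import asyncio

class StubAgent:
    """Illustrative stand-in for a LangChain agent's Runnable interface."""
    async def ainvoke(self, state, config):
        # Echo the history back with a canned assistant reply ending in the
        # finish signal, as a real agent eventually would.
        return {"messages": state["messages"] + [("assistant", "Analysis complete. FINISH")]}

async def trading_session(agent, user_query, max_turns=10):
    """Invoke the agent until it outputs the finish signal (or turns run out)."""
    history = [("user", user_query)]
    for _ in range(max_turns):
        response = await agent.ainvoke(
            {"messages": history},
            {"recursion_limit": 100},  # bounds tool-calling depth per invocation
        )
        history = response["messages"]  # the caller owns state between calls
        _, content = history[-1]
        if "FINISH" in content:         # agent signals the session is done
            break
    return history

history = asyncio.run(trading_session(StubAgent(), "Review today's positions"))
```

The loop terminates either on the finish signal or after `max_turns` iterations, mirroring how the recursion limit bounds work inside a single invocation.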

Theoretical Basis

# Pseudocode for one LLM invocation via LangChain's Runnable interface
response = await agent.ainvoke(
    {"messages": conversation_history},  # full message history, managed by the caller
    {"recursion_limit": 100},            # config: caps tool-calling depth per invocation
)
# response["messages"] contains the history plus new entries:
# - AIMessage (the assistant's response)
# - ToolMessage (results from any tool calls made during the invocation)

Key properties:

  • Async: Uses ainvoke for non-blocking execution
  • Recursion limit: Prevents infinite tool-calling chains
  • Stateless per call: Each invocation is independent; state is managed by the caller
  • Tool integration: Agent automatically invokes MCP tools when needed
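The "stateless per call" property can be illustrated with a small sketch. The stub class below is hypothetical; a real LangChain agent behaves the same way in this respect, because the Runnable itself holds no conversation memory and continuity comes entirely from the messages the caller passes in.

```python
import asyncio

class StatelessAgent:
    """Hypothetical stub: keeps no memory between ainvoke calls."""
    async def ainvoke(self, state, config=None):
        n = len(state["messages"])
        # The reply depends only on the messages passed in, never on prior calls.
        return {"messages": state["messages"] + [("assistant", f"reply after {n} messages")]}

async def demo():
    agent = StatelessAgent()
    first = await agent.ainvoke({"messages": [("user", "q1")]})
    # Continuity is the caller's job: carry the returned history forward.
    followup_input = first["messages"] + [("user", "q2")]
    second = await agent.ainvoke({"messages": followup_input})
    return first["messages"], second["messages"]

first_msgs, second_msgs = asyncio.run(demo())
```

Dropping the history between calls would make the second invocation behave exactly like a fresh first one, which is why the session loop must thread the returned messages into each subsequent call.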

Related Pages

Implemented By
