
Principle:Microsoft Semantic Kernel Prompt With Function Calling

From Leeroopedia
Knowledge Sources
Domains AI_Orchestration, Function_Calling, Prompt_Engineering
Last Updated 2026-02-11 19:00 GMT

Overview

Combining natural language prompts with automatic tool calling enables AI models to select and invoke registered functions as part of generating a response.

Description

Prompt With Function Calling is the principle of unifying prompt execution and tool use into a single invocation. Rather than treating prompts and function calls as separate operations, this principle allows a developer to send a natural language prompt to the AI model while simultaneously making registered functions available for the model to call. The model can then autonomously decide whether it needs to call one or more functions to gather information or perform actions before generating its final response.

This principle represents the convergence of two previously separate paradigms in AI orchestration:

  • Prompt execution: Sending a natural language prompt to a model and receiving a text response.
  • Function calling (tool use): Allowing the model to request the execution of specific functions and receive their results.

In Semantic Kernel, these are unified through the InvokePromptAsync method combined with KernelArguments that carry PromptExecutionSettings containing a FunctionChoiceBehavior. When the model determines that it needs additional information to answer the prompt, it generates a function call request. The framework intercepts this request, executes the function, and feeds the result back to the model, which continues its reasoning. This loop may repeat multiple times until the model has gathered enough information to produce a final text response.

The flow for a single prompt with function calling typically follows this sequence:

  1. Developer calls InvokePromptAsync with a prompt string and execution settings that include FunctionChoiceBehavior.Auto().
  2. The framework sends the prompt and the registered function schemas to the AI model.
  3. The model may respond with a function call request (e.g., "call TimeInformation.GetCurrentUtcTime").
  4. The framework executes the function and appends the result to the conversation history.
  5. The framework sends the updated conversation back to the model.
  6. Steps 3-5 repeat until the model produces a final text response.
  7. The final text response is returned as a FunctionResult.
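The sequence above can be sketched as a small, self-contained simulation. The model here is a scripted stub rather than a real LLM, and names such as `invoke_prompt`, `StubModel`, and `FUNCTIONS` are illustrative, not the Semantic Kernel API; the point is the intercept-execute-feed-back loop itself.

```python
from dataclasses import dataclass

@dataclass
class FunctionCallRequest:
    name: str
    args: dict

@dataclass
class TextResponse:
    text: str

# The "registered functions" whose schemas the framework exposes to the model.
FUNCTIONS = {
    "TimeInformation.GetCurrentUtcTime": lambda: "2026-02-11T19:00:00Z",
}

class StubModel:
    """Scripted stand-in for the AI model: it first requests the time
    function, then answers once a tool result appears in the conversation."""
    def complete(self, conversation, tools):
        if not any(msg[0] == "tool" for msg in conversation):
            # Step 3: respond with a function call request.
            return FunctionCallRequest("TimeInformation.GetCurrentUtcTime", {})
        # Step 6: with the result available, produce the final text response.
        tool_result = next(msg[2] for msg in conversation if msg[0] == "tool")
        return TextResponse(f"The current UTC time is {tool_result}.")

def invoke_prompt(prompt, model, functions):
    conversation = [("user", prompt)]             # Steps 1-2
    while True:
        response = model.complete(conversation, tools=list(functions))
        if isinstance(response, TextResponse):    # Step 7
            return response.text
        # Steps 4-5: execute the requested function, append the result.
        result = functions[response.name](**response.args)
        conversation.append(("tool", response.name, result))

print(invoke_prompt("What time is it?", StubModel(), FUNCTIONS))
```

Note that the developer code calls `invoke_prompt` exactly once; the loop, including how many function calls occur, is driven entirely by the model's responses.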

This approach is fundamentally different from templated prompts (e.g., {{TimeInformation.GetCurrentUtcTime}}), where functions are called deterministically during prompt rendering. With function calling, the model decides which functions to call based on the prompt context, making the system more flexible and capable of handling open-ended queries.

Usage

Use prompt with function calling when the AI model should have autonomy in deciding what tools to use. This is the standard approach for building conversational AI agents, chatbots with tool access, and any scenario where the required function calls cannot be predetermined from the prompt template alone. It is especially powerful for multi-step reasoning tasks where the model needs to gather information from multiple sources before formulating a response.

Theoretical Basis

Prompt With Function Calling implements a ReAct (Reasoning and Acting) loop where the AI model alternates between reasoning about what information it needs and taking action (calling functions) to obtain that information.

Formal model:

Given a prompt P, a set of available functions F, and an AI model M:

InvokePromptWithFunctionCalling(P, F, M):
  conversation = [UserMessage(P)]
  tools = SchemaOf(F)

  loop:
    response = M(conversation, tools)
    if response is TextResponse:
      return response
    if response is FunctionCallRequest(fn, args):
      result = Execute(fn, args)
      conversation.append(ToolResult(fn, result))
      goto loop

The key distinction from template-based function calling:

Template-based:  Render("Today is {{Time.GetDate}}. What day is it?")
                 -> deterministic function call at render time
                 -> prompt sent: "Today is 2026-02-11. What day is it?"

Function-calling: InvokePrompt("What day is it?", FunctionChoiceBehavior.Auto())
                  -> model receives prompt + tool schemas
                  -> model decides: "I need to call Time.GetDate"
                  -> framework calls function, feeds result back
                  -> model produces final answer
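The contrast can be made concrete with stub components. In this sketch, `render_template`, `invoke_with_tools`, and the scripted model decision are all illustrative assumptions, not Semantic Kernel APIs: the first resolves placeholders deterministically before the model ever sees the prompt, while the second defers the decision to (a stand-in for) the model at inference time.

```python
import re

FUNCTIONS = {"Time.GetDate": lambda: "2026-02-11"}

def render_template(template):
    """Template-based: functions run deterministically at render time."""
    return re.sub(r"\{\{(.+?)\}\}", lambda m: FUNCTIONS[m.group(1)](), template)

# The model never sees a placeholder -- the call has already happened:
prompt = render_template("Today is {{Time.GetDate}}. What day is it?")
assert prompt == "Today is 2026-02-11. What day is it?"

def invoke_with_tools(prompt, tools):
    """Function-calling: the model decides at inference time whether to call."""
    # Scripted stand-in for the model's autonomous decision:
    wanted = "Time.GetDate" if "day" in prompt else None
    if wanted and wanted in tools:
        result = tools[wanted]()       # framework executes, feeds result back
        return f"It is {result}."      # model's final answer
    return "I don't know."

print(invoke_with_tools("What day is it?", FUNCTIONS))
```

The template path always calls the function, whether or not the answer needs it; the function-calling path calls it only when the (stub) model judges it necessary.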

Key invariants:

  • Model autonomy: The model, not the developer, decides which functions to call. The developer controls which functions are available, not which ones are invoked.
  • Execution settings travel with the request: The FunctionChoiceBehavior is set per-invocation via KernelArguments, allowing different prompts to have different tool-use policies.
  • Convergence: The loop terminates when the model produces a text response. Well-behaved models converge after a finite number of function calls.
  • Composability: The same InvokePromptAsync method works with or without function calling; the presence or absence of FunctionChoiceBehavior in the settings determines whether tool use is enabled.
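The second and fourth invariants together can be sketched as a single entry point whose behavior is switched by per-call settings. All names here (`invoke_prompt`, `stub_model`, the `"function_choice_behavior"` settings key) are illustrative assumptions, not the Semantic Kernel API; the point is that tool use is enabled by what travels in the settings, not by a separate method.

```python
def invoke_prompt(prompt, model, functions, settings=None):
    # Tool schemas are advertised only when the per-invocation settings ask for it.
    auto_tools = bool(settings and settings.get("function_choice_behavior") == "auto")
    tools = list(functions) if auto_tools else []
    return model(prompt, tools)

def stub_model(prompt, tools):
    # Scripted stand-in: with tools advertised it "uses" them, otherwise it
    # answers as a plain completion.
    return f"used tools: {tools}" if tools else "plain completion"

funcs = {"Time.GetDate": lambda: "2026-02-11"}

# Same entry point, no function-choice setting: plain prompt execution.
print(invoke_prompt("What day is it?", stub_model, funcs))

# Same entry point, settings enable automatic tool use for this call only.
print(invoke_prompt("What day is it?", stub_model, funcs,
                    settings={"function_choice_behavior": "auto"}))
```

Because the policy rides along with each invocation's settings, two prompts issued against the same kernel can have entirely different tool-use behavior.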

Related Pages

Implemented By
