Implementation: Microsoft Semantic Kernel InvokePromptAsync
| Knowledge Sources | |
|---|---|
| Domains | AI_Orchestration, Natural_Language_Processing |
| Last Updated | 2026-02-11 19:00 GMT |
Overview
A concrete tool, provided by the Microsoft Semantic Kernel library, for sending a natural language prompt to an AI service and receiving the complete response.
Description
Kernel.InvokePromptAsync is an extension method that accepts a prompt string, dynamically wraps it in a KernelFunction, executes it against the kernel's registered AI service, and returns the result as a FunctionResult. This is the simplest and most direct way to interact with an AI model through Semantic Kernel.
Internally, the method creates a temporary KernelFunction from the prompt using the specified (or default) template format and prompt template factory. It then invokes this function through the standard kernel execution pipeline, which includes filter execution, prompt rendering, AI service resolution, and result packaging. The method is fully asynchronous and returns a Task<FunctionResult> that resolves when the AI service completes its response.
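The two-step equivalent of this internal behavior can be sketched with the public CreateFunctionFromPrompt extension; this is an illustrative approximation of what InvokePromptAsync does for you, not its literal implementation:

```csharp
using Microsoft.SemanticKernel;

// Roughly equivalent to: await kernel.InvokePromptAsync("What color is the sky?")
// 1. Wrap the prompt string in a temporary KernelFunction.
KernelFunction fn = kernel.CreateFunctionFromPrompt("What color is the sky?");

// 2. Invoke it through the standard kernel execution pipeline
//    (filters, prompt rendering, AI service resolution, result packaging).
FunctionResult result = await kernel.InvokeAsync(fn);
Console.WriteLine(result);
```

Creating the function explicitly is useful when the same prompt will be invoked repeatedly, since the template is only parsed once.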
Usage
Use InvokePromptAsync for straightforward prompt-to-response interactions where you need the complete AI response before proceeding. It is ideal for question-answering, content generation, classification, and any scenario where streaming is not required.
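Per-call behavior such as sampling temperature can ride along in the KernelArguments. A sketch assuming the OpenAI connector's OpenAIPromptExecutionSettings type (the temperature value and prompt text are illustrative):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Attach execution settings to the arguments; template variables
// ({{$review}}) are filled from the same KernelArguments instance.
var arguments = new KernelArguments(
    new OpenAIPromptExecutionSettings { Temperature = 0.2 })
{
    ["review"] = "Great product, arrived on time!"
};

FunctionResult result = await kernel.InvokePromptAsync(
    "Classify this review as positive or negative: {{$review}}",
    arguments);
Console.WriteLine(result);
```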
Code Reference
Source Location
- Repository: semantic-kernel
- File:
dotnet/src/SemanticKernel.Core/KernelExtensions.cs:L1238-1251
Signature
```csharp
public static Task<FunctionResult> InvokePromptAsync(
    this Kernel kernel,
    string promptTemplate,
    KernelArguments? arguments = null,
    string? templateFormat = null,
    IPromptTemplateFactory? promptTemplateFactory = null,
    CancellationToken cancellationToken = default)
```
Import
```csharp
using Microsoft.SemanticKernel;
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| kernel | Kernel | Yes | The kernel instance to invoke the prompt on (implicit via extension method). |
| promptTemplate | string | Yes | The prompt string to send to the AI service. May contain template variables in {{$variable}} syntax. |
| arguments | KernelArguments? | No | Optional arguments for template variable substitution and execution settings. |
| templateFormat | string? | No | Optional template format identifier (e.g., "semantic-kernel" for the default {{$variable}} format). Defaults to the Semantic Kernel prompt template format. |
| promptTemplateFactory | IPromptTemplateFactory? | No | Optional factory for creating the prompt template renderer. Defaults to the built-in KernelPromptTemplateFactory. |
| cancellationToken | CancellationToken | No | Optional cancellation token for aborting the request. |
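As a sketch of the arguments parameter in action, template variables in the prompt are substituted from the KernelArguments before the prompt is sent (the kernel is assumed to be configured as in the examples below; the prompt text is illustrative):

```csharp
using Microsoft.SemanticKernel;

// {{$topic}} is replaced with the value from the arguments
// during prompt rendering, before the request reaches the AI service.
FunctionResult result = await kernel.InvokePromptAsync(
    "Write a one-sentence summary of {{$topic}}.",
    new KernelArguments { ["topic"] = "quantum computing" });
Console.WriteLine(result);
```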
Outputs
| Name | Type | Description |
|---|---|---|
| return | Task&lt;FunctionResult&gt; | An asynchronous task that resolves to a FunctionResult containing the AI response. The result can be rendered as a string via ToString() or read as a typed value via GetValue&lt;T&gt;(). |
Usage Examples
Simple Prompt
```csharp
using Microsoft.SemanticKernel;

Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatClient(
        modelId: TestConfiguration.OpenAI.ChatModelId,
        apiKey: TestConfiguration.OpenAI.ApiKey)
    .Build();

// Simple prompt invocation
Console.WriteLine(await kernel.InvokePromptAsync("What color is the sky?"));
```
Prompt with Cancellation
```csharp
using Microsoft.SemanticKernel;

// Cancel the request automatically if it takes longer than 30 seconds.
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

FunctionResult result = await kernel.InvokePromptAsync(
    "Summarize the history of computing.",
    cancellationToken: cts.Token);
Console.WriteLine(result);
```
Accessing Result Metadata
```csharp
using Microsoft.SemanticKernel;

FunctionResult result = await kernel.InvokePromptAsync("What is 2 + 2?");

// Get the string response
string response = result.ToString();

// Access metadata if available
Console.WriteLine($"Response: {response}");
Console.WriteLine($"Metadata: {result.Metadata}");
```