
Workflow:Microsoft Semantic Kernel Kernel Setup And Chat Completion

From Leeroopedia
Domains AI_Orchestration, LLMs, Chat_Completion
Last Updated 2026-02-11 18:00 GMT

Overview

End-to-end process for initializing a Semantic Kernel instance, configuring an AI chat completion service, and executing prompts with templating, streaming, and execution settings.

Description

This workflow covers the foundational setup of the Semantic Kernel framework in a .NET application. It begins with creating a Kernel instance using the builder pattern, registering an AI chat completion service (such as OpenAI or Azure OpenAI), and invoking prompts. The workflow demonstrates four modes of interaction: simple prompt invocation, templated prompts with variable substitution, streaming responses for long outputs, and fine-tuned execution settings (temperature, max tokens, response format). This is the prerequisite workflow for all other Semantic Kernel operations.

Usage

Execute this workflow when starting a new project that uses Semantic Kernel for AI capabilities. This is the first step in any Semantic Kernel application: you need a configured Kernel before you can use plugins, agents, processes, or vector stores. Use this when you have access to an AI service endpoint (OpenAI API key or Azure OpenAI deployment) and need to invoke language model completions from .NET code.

Execution Steps

Step 1: Create Kernel Builder

Initialize a Kernel builder using the static factory method. The builder provides a fluent API for configuring services, plugins, and other dependencies before constructing the immutable Kernel instance.

Key considerations:

  • The builder pattern allows incremental configuration
  • Multiple AI services can be registered on the same kernel
  • The builder integrates with .NET dependency injection
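A minimal sketch of this step in C#, assuming the Microsoft.SemanticKernel NuGet package is installed:

```csharp
using Microsoft.SemanticKernel;

// Create the fluent builder. Nothing is constructed yet: services and
// plugins are accumulated incrementally until Build() is called.
IKernelBuilder builder = Kernel.CreateBuilder();
```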

Step 2: Register AI Chat Completion Service

Add a chat completion service to the kernel builder by specifying the AI provider, model identifier, and authentication credentials. Semantic Kernel supports multiple providers including OpenAI, Azure OpenAI, Google Gemini, HuggingFace, Mistral, and Ollama.

Key considerations:

  • Choose the appropriate connector method for your provider
  • Model ID and API key are required parameters
  • Azure OpenAI additionally requires an endpoint URL
  • Multiple providers can coexist on the same kernel
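A sketch of registering both providers on one builder, assuming the Microsoft.SemanticKernel package with its OpenAI and Azure OpenAI connectors; the model, deployment, and endpoint names are placeholders:

```csharp
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// OpenAI: model ID plus API key are the required parameters.
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",                                        // assumed model name
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

// Azure OpenAI additionally requires the resource endpoint URL,
// and identifies the model by deployment name rather than model ID.
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "my-gpt4-deployment",                          // hypothetical deployment
    endpoint: "https://my-resource.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")!);
```

Both services coexist on the kernel; a service ID parameter can disambiguate them when invoking.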

Step 3: Build the Kernel

Compile the builder configuration into an immutable Kernel instance. The kernel is the central orchestration object that routes prompts to registered AI services and manages plugin invocations.

Key considerations:

  • The Build operation is terminal; further configuration requires a new builder
  • The resulting Kernel is thread-safe for concurrent use
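Continuing the sketch above, building the kernel is a single terminal call (API key shown is a placeholder):

```csharp
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<your-key>");

// Build() compiles the accumulated configuration into an immutable,
// thread-safe Kernel. Further configuration requires a new builder.
Kernel kernel = builder.Build();
```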

Step 4: Invoke a Simple Prompt

Send a natural language prompt to the registered chat completion service and receive a text response. This is the most basic interaction pattern: a single prompt in, a single completion out.

Pseudocode:

result = kernel.InvokePromptAsync("What color is the sky?")
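The same call in C#, assuming a kernel built as in the earlier steps with a valid API key:

```csharp
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<your-key>")
    .Build();

// InvokePromptAsync routes the prompt to the registered chat service;
// the FunctionResult's ToString() yields the completion text.
FunctionResult result = await kernel.InvokePromptAsync("What color is the sky?");
Console.WriteLine(result);
```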

Step 5: Invoke a Templated Prompt

Use Semantic Kernel's template syntax to create parameterized prompts. Variables are injected at runtime through a KernelArguments dictionary, enabling prompt reuse with different inputs.

Pseudocode:

arguments = new KernelArguments { { "topic", "sea" } }
result = kernel.InvokePromptAsync("Tell me a joke about {{$topic}}", arguments)

Key considerations:

  • Template variables use the {{$variableName}} syntax
  • KernelArguments is a dictionary-like container
  • Multiple template engines are supported (default, Handlebars, Liquid)
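A C# sketch of the templated call, assuming a kernel built as in the earlier steps:

```csharp
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<your-key>")
    .Build();

// KernelArguments is a dictionary-like container; the "topic" value
// is substituted into the {{$topic}} placeholder at render time.
var arguments = new KernelArguments { ["topic"] = "sea" };
var result = await kernel.InvokePromptAsync(
    "Tell me a joke about {{$topic}}", arguments);
Console.WriteLine(result);
```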

Step 6: Stream a Response

For long or incremental outputs, use streaming invocation to receive response chunks as they are generated. This enables real-time display of AI output in user interfaces.

Key considerations:

  • Streaming returns an async enumerable of content chunks
  • Each chunk contains a partial text segment
  • Useful for chat interfaces and real-time feedback
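The streaming pattern can be sketched as follows, assuming the same kernel setup:

```csharp
using Microsoft.SemanticKernel;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<your-key>")
    .Build();

// InvokePromptStreamingAsync returns an async enumerable; each
// StreamingKernelContent chunk carries a partial text segment.
await foreach (StreamingKernelContent chunk in
    kernel.InvokePromptStreamingAsync("Write a short story about a lighthouse."))
{
    Console.Write(chunk);   // display incrementally as tokens arrive
}
```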

Step 7: Configure Execution Settings

Fine-tune the AI service behavior by specifying execution settings such as maximum token count, temperature (creativity), and response format (plain text or structured JSON).

Key considerations:

  • MaxTokens limits the response length
  • Temperature controls randomness (0.0 = deterministic, 2.0 = highly creative)
  • ResponseFormat can request structured JSON output
  • Settings are passed via KernelArguments with a PromptExecutionSettings object
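A sketch of passing execution settings, assuming the OpenAI connector; the `OpenAIPromptExecutionSettings` values shown (and the `"json_object"` response format string) are illustrative:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<your-key>")
    .Build();

var settings = new OpenAIPromptExecutionSettings
{
    MaxTokens = 200,                // cap the response length
    Temperature = 0.7,              // 0.0 = deterministic, 2.0 = highly creative
    ResponseFormat = "json_object", // request structured JSON output
};

// Settings travel inside a KernelArguments instance.
var result = await kernel.InvokePromptAsync(
    "List three colors the sky can be, as JSON.",
    new KernelArguments(settings));
Console.WriteLine(result);
```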

Execution Diagram

GitHub URL

Workflow Repository