Understanding and leveraging Semantic Kernel - Building a very simple agent in a few lines of C# code

In this post, we'll set up Semantic Kernel and see it in action through a simple example. Specifically, we'll connect it to Azure OpenAI and try querying it with a few basic prompts.

Our goal here is to demonstrate just how straightforward it is to implement a simple AI agent. The following code snippets aren’t particularly novel—they can be found in almost any Semantic Kernel tutorial. However, they represent the essential steps that every developer should know, especially if they want to stand out in a job interview.

Configuring Visual Studio

We will illustrate our point by implementing a Console application that simulates a simple AI conversational agent, similar to ChatGPT.

  • Create a new solution and add a new Console Application.
  • Add the Semantic Kernel NuGet package to the project.
dotnet add package Microsoft.SemanticKernel

Configuring Azure

We will rely on Azure OpenAI to query a language model. Azure OpenAI offers built-in models that can be configured and used directly (in the next post, we’ll explore how to leverage open-source solutions like Ollama).

  • Go to the Azure portal, create a new resource (Azure OpenAI) and configure it.

  • Once deployed, this is what the Azure OpenAI home page looks like.

    Information

Azure OpenAI is now integrated into Azure AI Foundry. To explore and deploy models, we must now use the Azure AI Foundry portal.

  • So go to the Azure AI Foundry portal.

  • We can now easily deploy a new model that will be available for querying. To do so, click on Deploy model and follow the on-screen instructions.

  • In our case, we will choose the gpt-4o model, as it is a highly versatile language model capable of answering a wide range of questions.

Information

In a real-world deployment, this setup would typically be automated using Bicep or Terraform scripts. More importantly, instead of relying on API keys as shown in our example, the Azure resource (such as Azure App Service or AKS) that connects to Azure OpenAI would use managed identities for secure and scalable authentication.
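As a sketch of that managed-identity approach, the Azure OpenAI connector also accepts a TokenCredential instead of an API key. Assuming the Azure.Identity NuGet package is referenced (the deployment name and endpoint below are placeholders), the key-based setup could be replaced with something like:

```csharp
using Azure.Identity;
using Microsoft.SemanticKernel;

// Placeholder values: replace with your own deployment name and endpoint.
var deploymentName = "<your_deployment_name>";
var endPoint = "<your_endpoint>";

// DefaultAzureCredential picks up the managed identity when running on Azure
// (and falls back to developer credentials locally), so no API key is stored.
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endPoint, new DefaultAzureCredential());
var kernel = builder.Build();
```

The rest of the code is unchanged: only the way credentials are obtained differs.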

Follow the link below for more details on what managed identities are and how to implement them on Azure.
Securing credentials in Azure-hosted applications using managed identities and Azure Key Vault

Once the deployment is complete, we still need to retrieve the endpoint and API key in order to call our model from the application. These credentials can be found in the properties section of Azure OpenAI.
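Rather than hard-coding these values in the source, a common pattern is to read them from environment variables. As a sketch (the variable names below are just a convention, not anything imposed by Semantic Kernel):

```csharp
// Read the Azure OpenAI settings from environment variables
// (hypothetical names; set them to the values copied from the portal).
var endPoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set");
var apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")
    ?? throw new InvalidOperationException("AZURE_OPENAI_API_KEY is not set");
```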

Enough theory, code now!

As previously mentioned, we’ll see that with just a few lines of code, it’s entirely possible to build an AI agent with chat capabilities.

Information

The code shown here is written in C#, but Semantic Kernel also supports development in Python and Java.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Configure Semantic Kernel
var deploymentName = "<your_deployment_name>"; // see above
var endPoint = "<your_endpoint>"; // see above
var apiKey = "<your_apikey>"; // see above

// Add Azure OpenAI chat completion service
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endPoint, apiKey);
var kernel = builder.Build();

// Get the chat completion service
var chatService = kernel.GetRequiredService<IChatCompletionService>();
// Initialize chat history with a system prompt
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant.");

while (true)
{
    Console.Write("You: ");
    var userMessage = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(userMessage))
    {
        break;
    }

    history.AddUserMessage(userMessage);
    var response = await chatService.GetChatMessageContentsAsync(history);
    Console.WriteLine($"\nBot: {response[0].Content}\n");
    history.AddMessage(response[0].Role, response[0].Content ?? string.Empty);
}

A few remarks are in order regarding this snippet:

  • The Kernel object is created and configured in just a few lines, and then serves as the sole intermediary for all operations (hence the name Kernel).
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endPoint, apiKey);
var kernel = builder.Build();
  • In particular, we use this Kernel object to initialize a chat and configure it with the appropriate parameters.
var chatService = kernel.GetRequiredService<IChatCompletionService>();
// ...
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant.");
// ...
var response = await chatService.GetChatMessageContentsAsync(history);
  • Semantic Kernel completely abstracts the connection to Azure OpenAI: we simply provide the credentials, and Semantic Kernel handles the rest. If we were to switch to OpenAI (instead of Azure OpenAI), the code would remain almost identical.
// connect to Azure OpenAI
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endPoint, apiKey);
var kernel = builder.Build();

// connect to OpenAI (no endpoint needed: only a model id and an API key)
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(modelId, apiKey);
var kernel = builder.Build();

// The remaining part of the code stays the same.
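Incidentally, both connectors also accept optional execution settings when querying the model. As a sketch (the parameter values below are illustrative), the OpenAI connectors expose an OpenAIPromptExecutionSettings class for tuning generation:

```csharp
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Illustrative settings: a lower temperature yields more deterministic answers.
var settings = new OpenAIPromptExecutionSettings
{
    Temperature = 0.2,
    MaxTokens = 500
};
var response = await chatService.GetChatMessageContentsAsync(history, settings, kernel);
```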
Information

This ability to shift seamlessly from one provider to another is made possible by the concept of connectors.

More formally, a connector acts as a bridge between Semantic Kernel and external tools or services, and it's responsible for:

  • sending prompts to language models (e.g., OpenAI, Azure OpenAI, Ollama, Hugging Face)
  • receiving and processing responses from those models
  • integrating other capabilities such as calling APIs, running plugins, or performing RAG (see later)

Why it matters
Connectors make Semantic Kernel modular and extensible, allowing us to easily swap one LLM provider for another, use local or cloud-hosted models, or add external tools or APIs without changing core logic.

In short, connectors in Semantic Kernel enable plug-and-play integration with AI capabilities, helping developers build flexible, scalable, and production-ready AI agents.

Important

Note that in this code, we are implementing a conversational agent—an agent capable of interacting with users and answering questions. However, not all large language models (LLMs) support such interactive features. For more basic LLMs, interactions are limited to simple queries with minimal context. In such cases, Semantic Kernel offers alternative methods like InvokePromptAsync to accommodate these simpler models.
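As a minimal sketch of that simpler path (the prompt text is just an example), a one-shot query goes directly through the kernel without any chat history:

```csharp
// One-shot prompt without chat history: the kernel routes the prompt
// to the configured completion service and returns the result.
var result = await kernel.InvokePromptAsync("Summarize what Semantic Kernel is in one sentence.");
Console.WriteLine(result);
```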

Running the program

Thus, we’ve implemented a simple AI agent in just a few lines of code. However, it's important to note that we relied on predefined resources like Azure OpenAI, which means we’ll need an active Azure subscription to conduct our tests (not to mention the cost associated with using Azure OpenAI).

That’s why, in the next post, we’ll explore how to use Semantic Kernel with local language models using tools like Ollama.

Understanding and leveraging Semantic Kernel - Using Semantic Kernel with Ollama