Understanding and leveraging Semantic Kernel - Integrating Semantic Kernel with the MCP protocol
In the previous post, we saw that calling a separate API for each plugin can quickly become a cumbersome and error-prone task. In this post, we’ll explore how to overcome this limitation by using a dedicated protocol—MCP.
At the end of the last post, we briefly highlighted how tedious it can be to connect Semantic Kernel to each individual API. The industry’s response to this challenge is a new protocol known as MCP. It’s now time to dive deeper into what MCP is and how it addresses this issue in greater detail.
What is MCP?
It’s best to refer directly to the official website dedicated to this protocol.
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
Model Context Protocol
So MCP is just that: it allows language models—like those used with Semantic Kernel—to interact uniformly with multiple tools or plugins without needing custom code or API adjustments for each one.
How does it work?
We will only briefly outline the overall concept and refer interested readers to the official website for further details. Here's how we understand the protocol (disclaimer: this is our own interpretation and open to discussion): the responsibility of establishing a connection between an AI agent and external services is shifted to the data provider. Whereas previously the burden was on the AI developer to adapt to each individual API, it is now the provider’s role to conform to a standardized specification.
As is often the case with protocols, there is a server that provides a service and a client that consumes it (just as in HTTP, where the server hosts web pages and the browser requests them). The same principle applies to MCP. Let’s now take a closer look at how it works under the hood.
MCP is a very recent protocol and may still undergo changes. Moreover, certain aspects related to security remain to be clarified.
The MCP server
An MCP server is a lightweight program that exposes specific capabilities through a standardized protocol.
Model Context Protocol
An MCP server thus acts like an API provider, and the MCP protocol ensures consistency. This way, an AI agent can automatically find and invoke functions simply by reading a standardized interface:
- Functions are described in a way AI models can understand, often via JSON/YAML specs (standardized interface; see the sketch after this list).
- Agents can discover what functions the server offers without human intervention (discoverability).
- AI agents can call those functions directly based on the user's prompt (auto-calling).
- The API provider (the MCP server) handles compliance with the protocol, not the AI agent developer (decoupled integration).
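To make this concrete, here is a sketch of what a single tool description might look like when a client lists a server's tools. The overall shape (a name, a description, and a JSON Schema for the input) follows the MCP specification, but the GetWeatherForCity tool itself is a hypothetical example of our own.

{
  "name": "GetWeatherForCity",
  "description": "Returns the current weather for a given city.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string", "description": "Name of the city" }
    },
    "required": ["city"]
  }
}

Because this description is machine-readable, an agent can construct a valid call without any hand-written glue code.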
MCP servers are offered by data providers or any service that can add value to the questions an LLM may need to address. A list of available providers can be found here.
The MCP client
An MCP client is a protocol client that maintains 1:1 connections with servers.
Model Context Protocol
An MCP client is an AI agent or application that consumes services exposed by an MCP server using the MCP protocol.
In simple terms:
- The MCP client is the "user" or "consumer" of external capabilities.
- It sends structured requests to the MCP server, asking for specific functions or data (e.g., "GetWeatherForCity").
- The client relies on the standardized MCP protocol to discover, understand, and invoke these capabilities without needing custom code for each provider.
For example, a Semantic Kernel-powered AI agent could act as an MCP client. When it needs current weather data, it would look up available MCP servers exposing weather-related functions, automatically understand how to call those functions (thanks to standard metadata and schemas), call them in real time and integrate the response into its output.
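Under the hood, such an invocation travels as a JSON-RPC 2.0 message, the wire format MCP builds on. A request to call the hypothetical GetWeatherForCity tool could look roughly like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "GetWeatherForCity",
    "arguments": { "city": "Paris" }
  }
}

The server executes the function and returns the result in a similarly structured response, which the client then feeds back to the language model.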
How to integrate MCP into Semantic Kernel?
Now that the MCP protocol has been described, let’s see it in action and explore how to implement it within Semantic Kernel. Once again, we will use a Console application to conduct our tests. The goal this time is to connect to an MCP server and interact with it. For our scenario, we will use the MCP server provided by Microsoft for Clarity.
Microsoft Clarity is a free behavioral analytics tool from Microsoft designed to help website owners understand how users interact with their sites. It goes beyond traditional analytics by providing features like session recordings, heatmaps, and AI-driven insights to visualize user behavior and identify usability issues.
Microsoft Clarity provides a simple MCP server (see here) that allows AI agents to access analytics data from the last three days.
- As usual, we start by setting up the kernel and selecting a language model.
// Configure Semantic Kernel
var deploymentName = "<your_deployment_name>";
var endPoint = "<your_endpoint>";
var apiKey = "<your_apikey>";

// Add Azure OpenAI chat completion service
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endPoint, apiKey);
var kernel = builder.Build();
- Now we need to set up an MCP client. This is the purpose of the following code.
// Create an MCP client for the Microsoft Clarity server
await using IMcpClient mcpClient = await McpClientFactory.CreateAsync(new StdioClientTransport(new()
{
    Name = "Microsoft Clarity",
    Command = "npx",
    Arguments = ["@microsoft/clarity-mcp-server", "--clarity_api_token=api-token-here"],
}));
The code above is particularly interesting because it demonstrates how, with just a few lines, we can connect to a specific MCP server. The requirements are minimal: we simply execute a command-line instruction with arguments—typically outlined in the MCP server's specifications.
We are using the official .NET NuGet package for MCP (ModelContextProtocol). Please note that, at the time of writing, it is still in prerelease. Therefore, make sure to enable the option to include prerelease versions in the package search.
Note that we're using npx in this setup, which means that Node.js must be installed on the machine where the application is running.
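Note also that stdio is only one of the transports supported by the protocol: servers exposed over HTTP can be reached through the package's SSE-based transport instead, which removes the Node.js requirement on the client machine. As a rough sketch (the prerelease API may still change, and the endpoint URL below is made up):

// Hypothetical example: connect to a remote MCP server over HTTP/SSE instead of stdio
await using IMcpClient remoteClient = await McpClientFactory.CreateAsync(new SseClientTransport(new()
{
    Name = "Remote server",
    Endpoint = new Uri("https://example.com/sse"),
}));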
We can observe that an API token is required to connect to the server and retrieve data. In our specific case, this token can be obtained directly from the Clarity dashboard.
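Since this token grants access to your analytics data, it is safer not to hardcode it in the source. A minimal sketch, assuming the token is stored in an environment variable that we chose to name CLARITY_API_TOKEN:

// Read the Clarity API token from an environment variable (CLARITY_API_TOKEN is our own naming)
var clarityToken = Environment.GetEnvironmentVariable("CLARITY_API_TOKEN")
    ?? throw new InvalidOperationException("CLARITY_API_TOKEN is not set.");

await using IMcpClient mcpClient = await McpClientFactory.CreateAsync(new StdioClientTransport(new()
{
    Name = "Microsoft Clarity",
    Command = "npx",
    Arguments = ["@microsoft/clarity-mcp-server", $"--clarity_api_token={clarityToken}"],
}));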
- Once the client is properly configured (with the token correctly set), it becomes remarkably straightforward to integrate it into the Semantic Kernel ecosystem. In the code below, we can observe how Microsoft significantly simplifies the process by allowing the MCP client to be registered directly as a plugin.
// Retrieve the list of tools available on the Clarity server
var tools = await mcpClient.ListToolsAsync().ConfigureAwait(false);
foreach (var tool in tools)
{
    Console.WriteLine($"{tool.Name}: {tool.Description}");
}

#pragma warning disable SKEXP0001
kernel.Plugins.AddFromFunctions("Clarity", tools.Select(x => x.AsKernelFunction()));
#pragma warning restore SKEXP0001
We had to include a pragma directive to suppress a warning that would otherwise block compilation, which is necessary given that the current package is still in preview.
- The rest of the code is straightforward and follows the same pattern as in the previous posts. This seamless integration is possible because MCP operations are internally converted into plugins.
var settings = new OpenAIPromptExecutionSettings()
{
#pragma warning disable SKEXP0001
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(options: new() { RetainArgumentTypes = true })
#pragma warning restore SKEXP0001
};

var prompt = "Fetch scroll depth for the past 2 days filtered by device type.";
var result = await kernel.InvokePromptAsync(prompt, new(settings)).ConfigureAwait(false);
Console.WriteLine($"\n\n{prompt}\n{result}");
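The same wiring also works in a conversational setting. As an illustrative variation of our own (not part of the original sample), the chat completion service can be combined with a ChatHistory, and the Clarity tools are still invoked automatically:

// Illustrative variation: drive the same tools from a chat conversation
// (requires the Microsoft.SemanticKernel.ChatCompletion namespace)
var chatService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("Which device type scrolled the furthest over the past 2 days?");

// FunctionChoiceBehavior.Auto lets the model call the Clarity tools on its own
var reply = await chatService.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(reply.Content);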
Running the program
Here is what is obtained when we run the program.
At the time of writing, the MCP ecosystem is growing rapidly. Many IDEs now integrate it natively into their interfaces, making it a concept well worth exploring further. Its potential impact on productivity could be significant.
We will dedicate the final post to an often overlooked but crucial aspect of Semantic Kernel in production environments: logging and tracing. This is what truly makes Semantic Kernel an enterprise-grade, production-ready tool.
Understanding and leveraging Semantic Kernel - Logging and monitoring