Understanding and leveraging Semantic Kernel - Logging and monitoring

In this final post, we will explore how to instrument an AI agent built with Semantic Kernel. As useful as an application may be, it is nearly useless in production without proper monitoring capabilities.

Let’s recall how Microsoft defines Semantic Kernel.

Semantic Kernel is a lightweight, open-source development kit that lets you easily build AI agents and integrate the latest AI models into your C#, Python, or Java codebase. It serves as an efficient middleware that enables rapid delivery of enterprise-grade solutions.
Introduction to Semantic Kernel

Microsoft explains that this framework enables the development of enterprise-grade solutions because it offers built-in support for essential features such as security and observability. Let’s briefly explore how these capabilities are integrated.

What is observability?

Observability refers to the ability to measure and understand the internal state of a system based on the data it produces—such as logs, metrics, and traces. In software systems, observability helps teams:

  • Detect issues (e.g., performance bottlenecks, failures).
  • Diagnose root causes quickly.
  • Understand system behavior both in real time and over time.

Observability is based on three pillars (a short .NET sketch illustrating each of them follows the list below).

  • Logs give detailed, time-stamped records of events (e.g., error messages, status updates).
  • Metrics are numerical data points collected over time (e.g., CPU usage, request latency).
  • Traces record the path a request takes through a system, often used in distributed systems to see how components interact.
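
To make these pillars concrete, here is a minimal .NET sketch showing one example of each, based on the standard ILogger, Meter, and ActivitySource APIs. The Demo.Shop and ProcessOrder names are purely illustrative, and the Microsoft.Extensions.Logging.Console package is assumed for the console logging provider.

using System.Diagnostics;
using System.Diagnostics.Metrics;
using Microsoft.Extensions.Logging;

// Logs: time-stamped records of events
using var loggerFactory = LoggerFactory.Create(b => b.AddConsole());
var logger = loggerFactory.CreateLogger("Demo.Shop");
logger.LogInformation("Order {OrderId} processed", 42);

// Metrics: numerical data points collected over time
var meter = new Meter("Demo.Shop");
var ordersProcessed = meter.CreateCounter<long>("orders_processed");
ordersProcessed.Add(1);

// Traces: the path a request takes through the system
var activitySource = new ActivitySource("Demo.Shop");
using (var activity = activitySource.StartActivity("ProcessOrder"))
{
    activity?.SetTag("order.id", 42);
    // Calls made here to downstream services would appear as child spans of this trace
}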
Information

Observability has become a major focus in today's software landscape and is a critical concern for many companies. Entire businesses have been built around it—well-known tools like Datadog, Application Insights, and others are dedicated to providing visibility into system performance and behavior.

Semantic Kernel follows the OpenTelemetry Semantic Convention for observability. This means that the logs, metrics, and traces emitted by Semantic Kernel are structured and follow a common schema. This ensures that you can more effectively analyze the telemetry data emitted by Semantic Kernel.
Observability in Semantic Kernel

This last point is particularly important, as it confirms that Semantic Kernel is designed to integrate seamlessly with industry-standard tools like Datadog. It can emit telemetry and logging data in a standardized and consistent format, making it easy to visualize and monitor within existing dashboards. As a result, Semantic Kernel can be effortlessly incorporated into an enterprise-grade architecture.
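
To give a flavor of what this standardized format means in practice, the OpenTelemetry GenAI semantic conventions define well-known attribute names for spans that describe model calls. The sketch below produces such a span by hand; the attribute names are indicative of the convention at the time of writing, and the exact set emitted by Semantic Kernel may differ.

using System.Diagnostics;

var source = new ActivitySource("Demo.GenAI");
using var span = source.StartActivity("chat gpt-4o");
// Attribute names taken from the OpenTelemetry GenAI semantic conventions (indicative only)
span?.SetTag("gen_ai.operation.name", "chat");
span?.SetTag("gen_ai.system", "openai");
span?.SetTag("gen_ai.request.model", "gpt-4o");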

Implementing observability in Semantic Kernel in a simple scenario

As a straightforward scenario, we’ll revisit our previous example where we fetched data from Clarity using an MCP server. By changing just a single line of code, we’ll intentionally introduce a glitch into the system.

The current code is as follows.

var settings = new OpenAIPromptExecutionSettings()
{
#pragma warning disable SKEXP0001
    // Newer function-calling API: auto-invoke functions and keep the original argument types
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(options: new() { RetainArgumentTypes = true })
#pragma warning restore SKEXP0001
};

We will modify the code as shown below.

var settings = new OpenAIPromptExecutionSettings()
{
    // Legacy tool-calling API: functions are still auto-invoked, but there is no RetainArgumentTypes option
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

This seemingly minor modification, switching from the newer FunctionChoiceBehavior API to the legacy ToolCallBehavior setting, might not appear significant at first glance. However, upon running the program, we encounter the following unexpected result:

This message is quite cryptic, making it hard to pinpoint the exact issue. Is the problem with the MCP server, or is it on the Semantic Kernel side? It’s difficult to say. In situations like this, a developer really needs more detailed information to diagnose the problem effectively.

While Semantic Kernel can easily send data to industry-standard tools, for simplicity in our scenario, we will limit ourselves to exporting some logging data directly to the console. This setup can easily be adapted later to integrate with more advanced monitoring tools (a short sketch at the end of this post illustrates this).

We will therefore take the code from the previous post and modify it to include helpful traces and logs.

Adding monitoring in the console

Semantic Kernel relies on exporters to properly send telemetry data. An exporter is simply a dedicated library designed to transmit data to a specific endpoint or monitoring system.

  • Add the OpenTelemetry.Exporter.Console package to the solution.
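
For example, from the project directory, using the .NET CLI:

dotnet add package OpenTelemetry.Exporter.Console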

  • Instantiate the kernel as before.

// Configure Semantic Kernel
var deploymentName = "<your_deployment_name>";
var endPoint = "<your_endpoint>";
var apiKey = "<your_apikey>";

// Add Azure OpenAI chat completion service
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endPoint, apiKey);
var kernel = builder.Build();

  • Add the following code to instruct the kernel to send telemetry data.

var resourceBuilder = ResourceBuilder
    .CreateDefault()
    .AddService("TelemetryConsoleQuickstart");

// Enable model diagnostics with sensitive data.
AppContext.SetSwitch("Microsoft.SemanticKernel.Experimental.GenAI.EnableOTelDiagnosticsSensitive", true);

using var traceProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(resourceBuilder)
    .AddSource("Microsoft.SemanticKernel*")
    .AddConsoleExporter()
    .Build();

using var meterProvider = Sdk.CreateMeterProviderBuilder()
    .SetResourceBuilder(resourceBuilder)
    .AddMeter("Microsoft.SemanticKernel*")
    .AddConsoleExporter()
    .Build();

using var loggerFactory = LoggerFactory.Create(builder =>
{
    // Add OpenTelemetry as a logging provider
    builder.AddOpenTelemetry(options =>
    {
        options.SetResourceBuilder(resourceBuilder);
        options.AddConsoleExporter();
        // Format log messages. This defaults to false.
        options.IncludeFormattedMessage = true;
        options.IncludeScopes = true;
    });
    builder.SetMinimumLevel(LogLevel.Debug);
});
Information

To be honest, this code closely follows the example recommended by Microsoft on its official site.

In short, this code initializes metrics, traces, and logs, and makes them available in the console.
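
One detail is worth calling out: for Semantic Kernel’s own log messages to flow through this OpenTelemetry pipeline, the logger factory must be registered with the kernel before it is built. Following Microsoft’s telemetry sample, and assuming the logger factory is created before the kernel in our program, the kernel instantiation would be adjusted roughly as follows:

// AddSingleton comes from Microsoft.Extensions.DependencyInjection
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(deploymentName, endPoint, apiKey);
builder.Services.AddSingleton(loggerFactory);
var kernel = builder.Build();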

Running the program

We can observe here that we now have access to much more detailed information. Yes, the error is still present (as expected), but it's now clear that the issue lies in the format of the parameters—such as a string being sent instead of a number, and so on.

Thus, with just a few lines of code, we now gain clear visibility into what is happening within our application. In this regard, Semantic Kernel truly stands out as an enterprise-grade solution.
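
And because everything here is standard OpenTelemetry, pointing the same pipeline at a real monitoring backend instead of the console mostly comes down to swapping the exporter. As a rough sketch, assuming the OpenTelemetry.Exporter.OpenTelemetryProtocol package is installed and an OTLP-capable collector is listening on the conventional gRPC port, the tracing setup would become:

using var traceProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(resourceBuilder)
    .AddSource("Microsoft.SemanticKernel*")
    .AddOtlpExporter(options =>
    {
        // Illustrative endpoint: an OpenTelemetry Collector (or a vendor agent) exposing OTLP over gRPC
        options.Endpoint = new Uri("http://localhost:4317");
    })
    .Build();

The same substitution applies to the meter provider and to the OpenTelemetry logging options.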

Final thoughts

If you wish to delve deeper into this topic, I recommend the following book, which covers all the concepts highlighted in this series and delves into more advanced ones.

Building AI Applications with Microsoft Semantic Kernel (Meyer)

Do not hesitate to contact me should you require further information.