Friday, February 16, 2024

Changing log level for .net apps on the fly

Logging is very important to understand the behavior of an application. Logs can be used to analyze application behavior over an extended time period to understand trends or anomalies, but they're also critical to diagnose issues in production environments when the application is not behaving as expected.

How much logging an application should emit is a matter of tradeoffs. Writing too many logs may negatively impact application performance and increase data transfer and storage costs without adding value, while too few logs make it very difficult to troubleshoot issues. This is why most logging frameworks allow configuring log levels, so that application developers can add as much logging as needed while only logs at or above the configured minimum level are actually written to the destination.

The challenge is that you don't need all the logs all the time. You can certainly redeploy or reconfigure the application and restart it to change the log level, but that would be a bit disruptive. The good thing is that the .net configuration system allows updating configuration values on the fly. Consider this simple web API:


var builder = WebApplication.CreateBuilder(args);

builder.Logging.AddConsole();

var app = builder.Build();

app.MapGet("/numbers", () =>
{
    app.Logger.LogDebug("Debug");
    app.Logger.LogInformation("Info");
    app.Logger.LogWarning("Warning");
    app.Logger.LogError("Error");

    return Enumerable.Range(0, 10);
});

app.Run();

With the following logging configuration file:

{
  "Logging": {
    "LogLevel": {
      "Default": "Error",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}
When the /numbers endpoint is called, these logs are written to the console:
fail: ConfigReload[0]
      Error

This is clearly because the configured default log level is "Error". You can add a simple endpoint that changes the log level on the fly, like this:


app.MapGet("/config", (string level) => 
{
    if (app.Services.GetRequiredService<IConfiguration>() is not IConfigurationRoot configRoot)
        return;

    configRoot["Logging:LogLevel:Default"] = level;
    configRoot.Reload();
});

When you issue the GET request /config?level=Information and then invoke the /numbers endpoint again, the log output will look like this:

info: ConfigReload[0]
      Info
warn: ConfigReload[0]
      Warning
fail: ConfigReload[0]
      Error

Similarly, to configure the log level to Debug, invoke /config?level=Debug. Very simple.

There are a few gotchas to consider:

  1. The /config endpoint should be secured; only a privileged user should be able to invoke it, as it changes the application's behavior. I've intentionally ignored this in my example for simplicity (see the sketch after this list).
  2. If there are many instances serving the same API, the /config invocation will be directed by the load balancer to only one instance of your application, which most probably won't be sufficient. In that case you will need another way to communicate to your application that the log level should be modified. One approach could be a pub/sub system that allows multiple consumers. This may be the subject of another blog post.
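
For the first gotcha, one option is to attach an authorization policy to the endpoint. This is a minimal sketch, assuming authentication and authorization are already configured and that a policy named "LogAdmin" exists (the policy name is hypothetical):

app.MapGet("/config", (string level) =>
{
    if (app.Services.GetRequiredService<IConfiguration>() is not IConfigurationRoot configRoot)
        return;

    configRoot["Logging:LogLevel:Default"] = level;
    configRoot.Reload();
})
.RequireAuthorization("LogAdmin"); // only callers satisfying the policy can change the log level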

Another common approach for reconfiguring .net applications on the fly is to use a configuration source that refreshes automatically at a fixed time interval or based on config file change detection.
However, the time-based approach means that you have to wait until a certain time elapses for the application to reconfigure itself, which may not be desirable when you want to change the log level as quickly as possible. A file change detection approach is not great for immutable deployments like container-based applications or serverless functions.
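
For reference, the file-watching variant is typically wired up by adding a JSON source with reloadOnChange enabled; a one-line sketch (with the default builder, an appsettings.json source like this is usually registered for you already):

builder.Configuration.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true);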

Logging and monitoring are quality attributes that should be taken into consideration during application design. If you're not using more advanced observability tooling that allows profiling, for example, then the technique proposed in this blog post may be of help.

Friday, January 12, 2024

Assertions of Equality and Equivalence

I remember that I encountered an interesting bug that was not detected by unit tests because the behaviour of the test framework did not match my expectations.
The test was supposed to verify that the contents of an array (or a list) returned by the code under test match an expected array of elements, in the specific order of that expected array. The unit test was passing; however, the team later discovered a bug, and its root cause was that the array was not in the correct order! This is exactly why we write automated tests, but this test failed us.

The test, which uses the FluentAssertions library, basically looked like this:

[Test]
public void FluentAssertions_Unordered_Pass()
{
	var actual = new List<int>  {1, 2, 3}; // SUT invocation here
	var expected = new [] {3, 2, 1};

	actual.Should().BeEquivalentTo(expected);
}
Although the order of the elements in the actual array doesn't match the expected one, the test passes. This is not a bug in FluentAssertions. It's by design, and the solution is simple:
actual.Should().BeEquivalentTo(expected, config => config.WithStrictOrdering());

 

The config parameter enforces a specific order of the collection. It's also possible to configure this globally, when initializing the test assembly for example:

AssertionOptions.AssertEquivalencyUsing(config => config.WithStrictOrdering());

 

The default behavior of this method annoyed me. In my opinion, the assertion should be strict by default. That is, it should assume that the collections must match in order, and it could be made more lenient by overriding this behavior. Not the opposite.

Probably I got into the habit of using BeEquivalentTo(), while an Equal() assertion exists, which "Expects the current collection to contain all the same elements in the same order" as its default behavior. There are other differences between BeEquivalentTo() and Equal() that don't matter in this context.
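
For comparison, the same data asserted with Equal() would have caught the ordering problem. A minimal example in the style of the test above (the test name is mine):

[Test]
public void FluentAssertions_Unordered_Equal_Fails()
{
	var actual = new List<int> {1, 2, 3}; // SUT invocation here
	var expected = new [] {3, 2, 1};

	actual.Should().Equal(expected); // fails: same elements, but not in the same order
}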

Similar behavior applies to NUnit assertions, although there is no way to override the equivalence behavior:

[Test]
public void NUnit_Unordered_Pass()
{
	var actual = new [] {1, 2, 3};
	var expected = new List<int> {3, 2, 1};

	Assert.That(actual, Is.EquivalentTo(expected)); // pass
	CollectionAssert.AreEquivalent(expected, actual); // pass
}
[Test]
public void NUnit_Unordered_Fail()
{
	var actual = new [] {1, 2, 3};
	var expected = new List<int> {3, 2, 1};

	Assert.That(actual, Is.EqualTo(expected)); // fail
	CollectionAssert.AreEqual(expected, actual); // fail
}

 

It's important to understand the behavior of the testing library to avoid similar mistakes. We rely on tests as our safety net, and they had better be reliable!

Friday, September 22, 2023

Handling special content with Handlebars.net Helpers

Generating formatted reports based on application data is a very common need. For example, you may want to create an HTML page with content from a receipt. This content may be sent in an HTML formatted email or converted to PDF or any other use case. To achieve this, a flexible and capable templating engine is needed to transform the application data to a human readable format.
.net has a very powerful templating engine that's used in its asp.net web framework: Razor templates. But what if you want a templating engine that is simpler and doesn't require a web stack, as in the case of building background jobs, desktop, or mobile applications?

 


Handlebars.net is a .net implementation of the famous HandlebarsJS templating framework. From the Handlebars.Net GitHub repository:

"Handlebars.Net doesn't use a scripting engine to run a Javascript library - it compiles Handlebars templates directly to IL bytecode. It also mimics the JS library's API as closely as possible."
For example, consider this collection of data that should be rendered as an HTML table:

var employees = new [] 
{
    new Employee
    {
        BirthDate= DateTime.Now.AddYears(-20),
        Name = "John Smith",
        Photo = new Uri("https://upload.wikimedia.org/wikipedia/commons/thumb/2/29/Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg/800px-Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg")
    },
    new Employee
    {
        BirthDate= DateTime.Now.AddYears(-25),
        Name = "Jack",
        Photo = new Uri("https://upload.wikimedia.org/wikipedia/commons/e/ec/Jack_Nicholson_2001.jpg")
    },
    new Employee
    {
        BirthDate= DateTime.Now.AddYears(-40),
        Name = "Iron Man",
        Photo = new Uri("https://upload.wikimedia.org/wikipedia/en/4/47/Iron_Man_%28circa_2018%29.png")
    },
};
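
The Employee type is not shown above; a minimal class matching the properties used in this snippet could look like this:

public class Employee
{
    public string Name { get; set; }
    public DateTime BirthDate { get; set; }
    public Uri Photo { get; set; }
}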

A Handlebars template may look like:

<html>
<body>
	<table border="1">
		<thead>
			<tr>
				<th>Name</th>
				<th>Age</th>
				<th>Photo</th>
			</tr>
		</thead>
		<tbody>
			{{#each this}}
			<tr>
				<td>{{Name}}</td>
				<td>{{BirthDate}}</td>
				<td><img src="{{Photo}}" width="200px" height="200px" /></td>
			</tr>
			{{/each}}
		</tbody>
	</table>

</body>
</html>

The template is fairly simple. Explaining the syntax of Handlebars templates is beyond the scope of this article; check the HandlebarsJS Language Guide for information about its syntax.

Passing the data to Handlebars.net and rendering the template is easy:

var template = File.ReadAllText("List.handlebars");
var compiledTemplate = Handlebars.Compile(template);
var output = compiledTemplate(employees);

Console.WriteLine(output);

Line 1 reads the List.handlebars template, which is stored in the same application folder. Alternatively, the template could be stored as an embedded resource, retrieved from a database, or even created on the fly.
Line 2 compiles the template, generating a function that can be invoked later. 

Note: For good performance, the compiled template should be generated once and used multiple times during the lifetime of the application.

Line 3 invokes the function, passing the employees collection, and receives the rendered output in a string variable.
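
As the note above suggests, the compile step should run once and the resulting function should be reused for every render. A minimal sketch (the RenderEmployees wrapper is purely illustrative):

// Compile once, for example at application startup...
var compiledTemplate = Handlebars.Compile(File.ReadAllText("List.handlebars"));

// ...then reuse the compiled function for every render.
string RenderEmployees(IEnumerable<Employee> employees) => compiledTemplate(employees);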

This is the generated HTML:

<html>
<body>
	<table border="1">
		<thead>
			<tr>
				<th>Name</th>
				<th>Age</th>
				<th>Photo</th>
			</tr>
		</thead>
		<tbody>
			<tr>
				<td>John Smith</td>
				<td>2003-09-09T22:08:23.3541971+10:00</td>
				<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/29/Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg/800px-Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg" width="200px" height="200px" /></td>
			</tr>
			<tr>
				<td>Jack</td>
				<td>1998-09-09T22:08:23.3839317+10:00</td>
				<td><img src="https://upload.wikimedia.org/wikipedia/commons/e/ec/Jack_Nicholson_2001.jpg" width="200px" height="200px" /></td>
			</tr>
			<tr>
				<td>Iron Man</td>
				<td>1983-09-09T22:08:23.3839479+10:00</td>
				<td><img src="https://upload.wikimedia.org/wikipedia/en/4/47/Iron_Man_%28circa_2018%29.png" width="200px" height="200px" /></td>
			</tr>
		</tbody>
	</table>

</body>
</html>

And this is how the output is rendered by a browser:


Putting aside the lack of styling, which has nothing to do with Handlebars, the output seems good but suffers from two issues:

  1. The Age column shows the raw BirthDate value, which is not a great format.
  2. The image tags rendered by the template reference the full URL of the images. Every time the generated HTML is consumed and rendered, it has to fetch the images from their sources, which may be inconvenient. Additionally, the generated HTML is not self-contained, and other services that consume it (like an HTML to PDF conversion service) will have to download the images.

Although Handlebars has a powerful templating language, it's impossible for it to cover every need that may arise. This is why Handlebars.net provides the ability to define custom helpers.
 

Custom Helpers: 

Helpers provide an extensibility mechanism to customize the rendered output. Once created and registered with Handlebars.net, they can be invoked from templates as if they were part of Handlebars' templating language.
Let's use helpers to solve the date format issue:
Handlebars.RegisterHelper("formatDate", (output, context, arguments)
                => { output.Write(((DateTime)arguments[0]).ToString(arguments[1].ToString())); });

This one-liner registers a formatDate helper that formats its first argument using the format string passed as the second argument. To call this helper in the template:

<td>{{formatDate BirthDate "dd/MM/yyyy"}}</td>

The rendered output is much better now:


Embedding images in the HTML output

To solve the second issue mentioned above, we can write a custom helper to embed image content using the data URI scheme.
This is a basic implementation of this "embeddedImage" helper:

Handlebars.RegisterHelper("embeddedImage", (output, context, arguments) =>
{
    var url = arguments[0] as Uri;
    using var httpClient = new HttpClient();

    // add a user-agent header required by Wikipedia. You can safely omit the following line for other sources
    httpClient.DefaultRequestHeaders.UserAgent.Add(new ProductInfoHeaderValue("example.com-bot", "1.0"));

    var content = httpClient.GetByteArrayAsync(url).Result;
    var encodedContent = Convert.ToBase64String(content);
    output.Write("data:image/png;base64," + encodedContent);
});

The code uses an HttpClient to download the image as a byte array, encodes it using Base64, and writes the output as a data URI in the standard format. The usage is very simple:

<img width="200px" height="200px" src="{{embeddedImage Photo}}"  />

And the HTML output looks like this (trimmed for brevity):

<img width="200px" height="200px" src="data:image/png;base64,/9j/4gIcSUNDX1BST0ZJTEUAAQEAAAIMbGNtcwIQAABtbnRyUkdCIFhZWiAH3AABABkAAwApAD.....

 

Conclusion

One of the most important design principles is the Open-Closed Principle: software entities should be open for extension but closed for modification. Handlebars and Handlebars.net apply this principle by allowing users to extend the functionality of the library without having to modify its source code, which is a good design.
With a plethora of free and commercial libraries available for developers, the level of extensibility should be one of the evaluation criteria used during the selection process.
What about you? What other templating libraries have you used in .net applications, and how extensible are they?

Friday, June 30, 2023

Mind games of measurements and estimates: Hidden meanings behind numbers and units


I'm a fan of science and nature documentaries. A few years ago, National Geographic Abu Dhabi was my favorite channel. It primarily featured original NatGeo content, which was dubbed in Arabic.
The variety of content and interesting topics, from construction to wildlife, air crash investigations, and even UFOs, provided me with a stream of knowledge and enjoyment. But at times, also confusion!

One source of confusion was the highly accurate numbers used to describe things that normally could not be measured to that level of accuracy!
In one instance, a wild animal was described to have a weight reaching something like 952 kilograms. Not 900, not 1000 or even 950, but exactly 952.
In another instance, a man was describing a flying object, and he mentioned that the altitude of that object was 91 meters. That man must have laser distance meters in his eyes!

When I thought about this, I figured out that probably while translating these episodes, units of measurements were converted from pounds to kilograms, from feet and yards to meters, and from miles to kilometers, and so on. This is because the metric system is used in the Arab world and is more understandable by the audience.
Converting the above numbers back to the original units made them sound more logical. The wild animal weighed approximately 2,100 pounds, and the man was describing an object flying about 100 yards or 300 feet high. That made much more sense.

But why did these round figures seem more logical and more acceptable when talking about things that cannot be accurately measured? After all, 2,100 pounds are equal to 952 kilograms, and 100 yards are 91.44 meters. Right?

Apparently, the way we perceive numbers in casual conversations implicitly associates an accuracy level.
This Wikipedia note gives an example of this:
"Sometimes, the extra zeros are used for indicating the accuracy of a measurement. For example, "15.00 m" may indicate that the measurement error is less than one centimetre (0.01 m), while "15 m" may mean that the length is roughly fifteen metres and that the error may exceed 10 centimetres."

Similarly, smaller units can be used to give a deceiving impression of accuracy. A few years ago, I was working with a colleague on high-level estimates for a software project. We used weeks as our unit of estimation because, as expected, we knew very little about the project, and we expressed this in terms of coarse-grained estimates.
From experience, we knew that this level of accuracy wouldn't be welcomed by those who requested the estimates, and that they might want more accurate ones. I laughingly told my colleague: "If they want the estimates in hours, they can multiply these numbers by 40!" I feel I was mean saying that. Of course, the point was the accuracy, not the unit conversion.

One nice thing about using Fibonacci numbers in relative estimates is that they detach the numeric estimates from any perceived accuracy. When the estimate is 13 story points, it's totally clear that the only reason it's 13, and not 12 or 14 for example, is that we don't have those other numbers on the estimation cards. It's simply a best guess.

Beware of the effects of units and numbers you use. They may communicate more than what you originally intended.

Wednesday, May 10, 2023

Setting exit code of a .net worker application

When building a .net worker application with a hosted service based on the BackgroundService class, it's sometimes required to set the application exit code based on the outcome of the hosted service's execution.

One trivial way to do this is to set the Environment.ExitCode property from the hosted service:


public class Worker : BackgroundService
{
    public Worker()
    {

    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            throw new Exception("Something bad happened");
        }
        catch
        {
            Environment.ExitCode = 1;
        }
    }
}

This works. However, consider these unit tests:


[Test]
public async Task Test1()
{
    Worker sut = new Worker();
    await sut.StartAsync(new CancellationToken());

    Assert.That(Environment.ExitCode, Is.EqualTo(1));
}

[Test]
public void Test2()
{
    // another test
    Assert.That(Environment.ExitCode, Is.EqualTo(0));
}

Test1 passes; however, Test2 fails because Environment.ExitCode is a static property. You could reset it back to zero after the test, but this is error-prone. So what is the alternative?

One simple solution is to use a status-holding class registered as a singleton and inject it into the background service:


public interface IStatusHolder
{
    public int Status { get; set; }
}

public class StatusHolder : IStatusHolder
{
    public int Status { get; set; }
}

public class Worker : BackgroundService
{
    private readonly IStatusHolder _statusHolder;

    public Worker(IStatusHolder statusHolder)
    {
        _statusHolder = statusHolder;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            throw new Exception("Something bad happened");
        }
        catch
        {
            _statusHolder.Status = 1;
        }
    }
}

A simple Program.cs would look like this:


using EnvironmentExit;

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddHostedService<Worker>();
        services.AddSingleton<IStatusHolder, StatusHolder>();
    })
    .Build();

host.Start();

var statusHolder = host.Services.GetRequiredService<IStatusHolder>();
Environment.ExitCode = statusHolder.Status;

Note that the AddSingleton call registers IStatusHolder as a singleton, which is important for it to maintain its state.

Now all tests pass. Additionally, when the application runs, the exit code is 1.
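
For completeness, the refactored worker can now be tested without touching any static state. A minimal sketch in the style of the earlier tests (the test name is mine):

[Test]
public async Task Worker_SetsStatus_OnFailure()
{
    var statusHolder = new StatusHolder();
    var sut = new Worker(statusHolder);

    await sut.StartAsync(new CancellationToken());

    Assert.That(statusHolder.Status, Is.EqualTo(1));
}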

Friday, January 27, 2023

PowerShell core compatibility: A lesson learned the hard way

PowerShell core is my preferred scripting language. I've been excited about it since its early days. Here's a tweet from back in 2016 when PowerShell core was still in beta:

 

I've used PowerShell to automate build steps, deployments, and other tasks on both dev environments and CI/CD pipelines. It's great to write a script on my Windows machine, test it using PowerShell core, and run it in my Docker Linux-based build environments with 100% compatibility. Or so I thought, until I learned otherwise!

A few years ago, I was automating a process which required creating a folder if it didn't exist. Out of laziness, this is how I implemented this functionality: 

mkdir $folder -f

When the folder exists and the -f (or -Force) flag is passed, the command returns the existing directory object without errors. I know this is not the cleanest way (more on this later), but it works on my Windows machine, so it should also work in the Docker Linux container. Except that it didn't. When the script ran, it resulted in this error:

/bin/mkdir: invalid option -- 'f'
Try '/bin/mkdir --help' for more information.

Why did the behavior differ? It turns out that mkdir means different things depending on whether you're running PowerShell on Windows or Linux. This can be observed using the Get-Command cmdlet:

# Windows:
Get-Command mkdir

The output is:

CommandType     Name                                               Version
-----------     ----                                               -------
Function        mkdir

Under Windows, mkdir is a function, and the definition of this function can be obtained using

(Get-Command mkdir).Definition

And the output is:

<#
.FORWARDHELPTARGETNAME New-Item
.FORWARDHELPCATEGORY Cmdlet
#>

[CmdletBinding(DefaultParameterSetName='pathSet',
    SupportsShouldProcess=$true,
    SupportsTransactions=$true,
    ConfirmImpact='Medium')]
    [OutputType([System.IO.DirectoryInfo])]
param(
    [Parameter(ParameterSetName='nameSet', Position=0, ValueFromPipelineByPropertyName=$true)]
    [Parameter(ParameterSetName='pathSet', Mandatory=$true, Position=0, ValueFromPipelineByPropertyName=$true)]
    [System.String[]]
    ${Path},

    [Parameter(ParameterSetName='nameSet', Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [AllowNull()]
    [AllowEmptyString()]
    [System.String]
    ${Name},

    [Parameter(ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
    [System.Object]
    ${Value},

    [Switch]
    ${Force},

    [Parameter(ValueFromPipelineByPropertyName=$true)]
    [System.Management.Automation.PSCredential]
    ${Credential}
)

begin {
    $wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('New-Item', [System.Management.Automation.CommandTypes]::Cmdlet)
    $scriptCmd = {& $wrappedCmd -Type Directory @PSBoundParameters }

    $steppablePipeline = $scriptCmd.GetSteppablePipeline()
    $steppablePipeline.Begin($PSCmdlet)
}

process {
    $steppablePipeline.Process($_)
}

end {
    $steppablePipeline.End()
}

As you can see, it wraps the New-Item cmdlet. Under Linux, however, it's a different story:

# Linux:
Get-Command mkdir

Output:

CommandType     Name                                               Version
-----------     ----                                               -------
Application     mkdir                                              0.0.0.0

It's an application, and its source can be retrieved with:

(Get-Command mkdir).Source
/bin/mkdir

Now that I know the problem, the solution is easy:

New-Item -ItemType Directory $folder -Force

It's generally recommended to use cmdlets instead of aliases or any kind of shortcuts, to improve readability and portability. Unfortunately, PSScriptAnalyzer - which integrates well with VS Code - highlights this issue in scripts only for aliases (like ls), not for functions: see the AvoidUsingCmdletAliases rule.
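
If you want to check scripts locally before they reach a pipeline, the analyzer can be run directly (a minimal example, assuming the PSScriptAnalyzer module is installed; the script path is a placeholder):

Install-Module PSScriptAnalyzer -Scope CurrentUser   # one-time setup
Invoke-ScriptAnalyzer -Path ./myscript.ps1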

I learned my lesson. However, I did it the hard way.

Sunday, June 5, 2022

Reading a file from a Docker container in .net core

In many situations, it might be necessary to read files from a Docker container using .net code.
The Docker.DotNet library is very useful for interacting with Docker from .net, and it provides a handy method (GetArchiveFromContainerAsync) to read files from a container.
When I tried to use this method to read a small csv/text file, the file content looked a bit weird. It seemed like there was an encoding issue!

When I checked the code on GitHub, I found that the returned data is a tarball stream, which makes sense, as the Docker documentation mentions that the returned stream is a tar stream.

To read the tar stream, I tried to use the SharpZipLib library's TarInputStream class. However, that didn't work, as the library apparently requires a seekable stream, while the stream contained in the GetArchiveFromContainerResponse returned from the method is not seekable.
The workaround, which works well for relatively small files, is to copy the stream to a memory stream and use that instead.

This is a sample code:

using System.Text;
using Docker.DotNet;
using Docker.DotNet.Models;
using ICSharpCode.SharpZipLib.Tar;

DockerClientConfiguration config = new();
using var client = config.CreateClient();

GetArchiveFromContainerParameters parameters = new()
{ 
	Path = "/root/eula.1028.txt"
};
var file = await client.Containers.GetArchiveFromContainerAsync("example", parameters, false);

using var memoryStream = new MemoryStream();
file.Stream.CopyTo(memoryStream);
file.Stream.Close();

memoryStream.Seek(0, SeekOrigin.Begin);

using var tarInput = new TarInputStream(memoryStream, Encoding.ASCII);
tarInput.GetNextEntry();

using var reader = new StreamReader(tarInput);

var content = reader.ReadToEnd();

Console.WriteLine(content);

I hope this helps!