tag:blogger.com,1999:blog-24964158916652630002024-03-10T04:46:31.398+02:00For the Love of SoftwareHesham A. Amin's blog about his love..SoftwareHesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.comBlogger109125tag:blogger.com,1999:blog-2496415891665263000.post-13816807952514950532024-02-16T12:41:00.002+02:002024-02-16T12:41:09.552+02:00Changing log level for .net apps on the fly<p>Logging is very important to understand the behavior of an application. Logs can be used to analyze application behavior over an extended time period to understand trends or anomalies, but they're also critical for diagnosing issues in production environments when the application is not behaving as expected.<br /></p><p>How much an application should log is a matter of tradeoffs. Writing too many logs may negatively impact application performance and increase data transfer and storage costs without adding value. Too few logs make it very difficult to troubleshoot issues. This is why most logging frameworks allow configuring log levels, so that application developers can add as much logging as needed, but only logs at or above a configured level will actually be written to the destination.</p><p>The challenge is that you don't need all the logs all the time. You certainly can redeploy or reconfigure the application and restart it to change the log level, but this would be a bit disruptive. The good thing is that the .net configuration system allows updating configuration values on the fly. Consider this simple web API:</p><p><br /></p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);
builder.Logging.AddConsole();
var app = builder.Build();
app.MapGet("/numbers", () =>
{
    app.Logger.LogDebug("Debug");
    app.Logger.LogInformation("Info");
    app.Logger.LogWarning("Warning");
    app.Logger.LogError("Error");
    return Enumerable.Range(0, 10);
});
app.Run();
</code></pre><p>With this logging configuration file:</p>
<pre><code class="language-json">{
  "Logging": {
    "LogLevel": {
      "Default": "Error",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}
</code></pre>When the <code>/numbers</code> endpoint is called, these logs are written to the console:
<pre><code class="language-http">fail: ConfigReload[0]
Error
</code></pre><p>This is clearly because the configured default log level is "Error". You can add a simple endpoint that changes the log level on the fly, like this:</p><p><br /></p>
<pre><code class="language-csharp">app.MapGet("/config", (string level) =>
{
    if (app.Services.GetRequiredService<IConfiguration>() is not IConfigurationRoot configRoot)
        return;
    configRoot["Logging:LogLevel:Default"] = level;
    configRoot.Reload();
});</code></pre><p>When you issue the GET request <code>/config?level=Information</code> and then invoke the <code>/numbers</code> endpoint again, the log output will look like:</p>
<pre><code class="language-http">info: ConfigReload[0]
Info
warn: ConfigReload[0]
Warning
fail: ConfigReload[0]
Error
</code></pre><p>
Similarly, to configure the log level to Debug, invoke <code>/config?level=Debug</code>. Very simple.</p><p>There are a few gotchas to consider:</p><ol style="text-align: left;"><li>The /config endpoint should be secured; only a privileged user should be able to invoke it, as it changes the application behavior. I've intentionally ignored this in my example for simplicity.</li><li>If there are many instances serving the same API, the /config invocation will be directed by the load balancer to only one instance of your application, which most probably won't be sufficient. In this case you will need another approach to communicate to your application that the log level should be modified. One approach could be a pub-sub system that allows multiple consumers. This may be the subject of another blog post.</li></ol><p>Another common approach for reconfiguring .net applications on the fly is using a configuration source that refreshes automatically at a specific time interval or based on config file change detection. <br />However, the time-based approach means that you have to wait until a certain interval elapses for the application to reconfigure itself, which may not be desirable when you want to change the log level as quickly as possible. A file change detection approach is not great for immutable deployments like container-based applications or serverless functions.</p><p>Logging and monitoring are quality attributes that should be taken into consideration during application design. If you're not using more advanced observability tooling that allows profiling, for example, then the technique proposed in this blog post may be of help.<br /></p>Hesham A. 
Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-71017937624993152412024-01-12T12:05:00.000+02:002024-01-12T12:05:21.601+02:00Assertions of Equality and Equivalence<p>I remember encountering an interesting bug that was not detected by unit tests because the behaviour of the test framework did not match my expectations.<br />The test was supposed to verify that the contents of an array (or a list) returned by the code under test match an expected array of elements, in the specific order of that expected array. The unit test was passing; however, the team later discovered a bug, and the root cause was that the array was not in the correct order! This is exactly why we write automated tests, but the test failed us.<br /></p><p>The test, which uses the <a href="https://fluentassertions.com/" target="_blank">FluentAssertions</a> library, basically looked like:</p>
<pre><code class="language-csharp">[Test]
public void FluentAssertions_Unordered_Pass()
{
    var actual = new List<int> {1, 2, 3}; // SUT invocation here
    var expected = new [] {3, 2, 1};
    actual.Should().BeEquivalentTo(expected);
}
</code></pre>
Although the order of the elements of the actual array doesn't match the expected one, the test passes. This is not a bug in FluentAssertions. It's by design, and the solution is simple:
<pre><code class="language-csharp">actual.Should().BeEquivalentTo(expected, config => config.WithStrictOrdering());
</code></pre><p> </p><p>The config parameter enforces a specific order for the collection. It's also possible to configure this globally, for example when initializing the test assembly:
</p><pre><code class="language-csharp">AssertionOptions.AssertEquivalencyUsing(config => config.WithStrictOrdering());
</code></pre><p> </p>
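<p>The difference between the two assertion styles can be illustrated with plain LINQ, independent of any test framework. This is a small self-contained sketch (the class name is made up for illustration):</p>

```csharp
using System;
using System.Linq;

class EquivalenceDemo
{
    static void Main()
    {
        int[] actual = { 1, 2, 3 };
        int[] expected = { 3, 2, 1 };

        // Equality: same elements in the same order
        // (what Equal() or BeEquivalentTo(..., WithStrictOrdering()) verifies)
        bool equal = actual.SequenceEqual(expected);

        // Equivalence: same elements in any order
        // (what BeEquivalentTo() verifies by default)
        bool equivalent = actual.OrderBy(x => x).SequenceEqual(expected.OrderBy(x => x));

        Console.WriteLine($"equal={equal}, equivalent={equivalent}");
        // prints: equal=False, equivalent=True
    }
}
```

<p>Sorting both sequences before comparing also handles duplicate elements correctly, which is what makes it a fair stand-in for multiset equivalence.</p>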
<p>The default behavior of this method annoyed me. In my opinion, the test method should be strict by default. That is, it should assume that the collection order matters, and it can be made more lenient by overriding this behavior. Not the opposite.</p><p>Probably I got into the habit of using <code>BeEquivalentTo()</code>, while an <code>Equal()</code> assertion exists, which "Expects the current collection to contain all the same elements in the same order" as its default behavior. There are other differences between <code>BeEquivalentTo()</code> and <code>Equal()</code> that don't matter in this context. </p><p>
</p><p>Similar behavior applies to NUnit assertions, although there is no way to override the equivalence behavior:</p>
<pre><code class="language-csharp">[Test]
public void NUnit_Unordered_Pass()
{
    var actual = new [] {1, 2, 3};
    var expected = new List<int> {3, 2, 1};
    Assert.That(actual, Is.EquivalentTo(expected)); // pass
    CollectionAssert.AreEquivalent(expected, actual); // pass
}
</code></pre>
<pre><code class="language-csharp">[Test]
public void NUnit_Unordered_Fail()
{
    var actual = new [] {1, 2, 3};
    var expected = new List<int> {3, 2, 1};
    Assert.That(actual, Is.EqualTo(expected)); // fail
    CollectionAssert.AreEqual(expected, actual); // fail
}
</code></pre><p> </p><p>It's important to understand the behavior of the testing library to avoid similar mistakes. We rely on tests as our safety net, and they'd better be reliable!
</p>Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-43676139498347614912023-09-22T14:12:00.004+02:002023-09-22T14:18:49.997+02:00Handling special content with Handlebars.net Helpers<p>Generating formatted reports based on application data is a very common need. For example, you may want to create an HTML page with content from a receipt. This content may be sent in an HTML formatted email or converted to PDF or any other use case. To achieve this, a flexible and capable templating engine is needed to transform the application data to a human readable format.<br />.net has a very powerful templating engine that's used in its asp.net web framework which is Razor templates. But what if you want to use a templating engine that is simpler, and doesn't require a web stack as in the case of building background jobs, desktop or mobile applications?</p><p> </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcGswphhxCStflt5FBVBgKsyv1MHHrLXhvFONloWbXDGGl6y8QVJaEFninlpnoHVojn0rjmR69qM8O1HnmCJZZFlxAT-ZcvAj_k2HHao8l8zTdqkyLfDUl3QS9NK-MfrUGbC-yrbV8w4MxWroJEA9jvbfyq9MzSzHaoqq5pINYRz-w8eKlYKvrQdQzZuM/s308/Screenshot%202023-09-22%20221733.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="222" data-original-width="308" height="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcGswphhxCStflt5FBVBgKsyv1MHHrLXhvFONloWbXDGGl6y8QVJaEFninlpnoHVojn0rjmR69qM8O1HnmCJZZFlxAT-ZcvAj_k2HHao8l8zTdqkyLfDUl3QS9NK-MfrUGbC-yrbV8w4MxWroJEA9jvbfyq9MzSzHaoqq5pINYRz-w8eKlYKvrQdQzZuM/s1600/Screenshot%202023-09-22%20221733.jpg" width="308" /></a></div><br /><p></p><p><a href="http://Handlebars.net" target="_blank">Handlebars.net</a> is a .net implementation of the famous <a href="https://handlebarsjs.com/" target="_blank">HandlebarsJS</a> templating 
framework. From Handlebars.net Github repository:<br /></p><blockquote>"Handlebars.Net doesn't use a scripting engine to run a Javascript library - it compiles Handlebars templates directly to IL bytecode. It also mimics the JS library's API as closely as possible." </blockquote>For
example: consider this collection of data that should be rendered as an HTML
table:<p></p>
<pre><code class="language-csharp">var employees = new []
{
    new Employee
    {
        BirthDate = DateTime.Now.AddYears(-20),
        Name = "John Smith",
        Photo = new Uri("https://upload.wikimedia.org/wikipedia/commons/thumb/2/29/Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg/800px-Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg")
    },
    new Employee
    {
        BirthDate = DateTime.Now.AddYears(-25),
        Name = "Jack",
        Photo = new Uri("https://upload.wikimedia.org/wikipedia/commons/e/ec/Jack_Nicholson_2001.jpg")
    },
    new Employee
    {
        BirthDate = DateTime.Now.AddYears(-40),
        Name = "Iron Man",
        Photo = new Uri("https://upload.wikimedia.org/wikipedia/en/4/47/Iron_Man_%28circa_2018%29.png")
    },
};
</code></pre>
<p>A Handlebars template may look like:</p>
<pre><code class="language-html"><html>
<body>
    <table border="1">
        <thead>
            <tr>
                <th>Name</th>
                <th>Age</th>
                <th>Photo</th>
            </tr>
        </thead>
        <tbody>
            {{#each this}}
            <tr>
                <td>{{Name}}</td>
                <td>{{BirthDate}}</td>
                <td><img src="{{Photo}}" width="200px" height="200px" /></td>
            </tr>
            {{/each}}
        </tbody>
    </table>
</body>
</html>
</code></pre>
<p>The template is fairly simple. Explaining the syntax of Handlebars templates is
beyond the scope of this article. Check <a href="https://handlebarsjs.com/guide/">Handlebarjs Language Guide</a> for
information regarding its syntax.</p><p>Passing
the data to Handlebars.net and rendering the template is easy:</p>
<pre><code class="language-csharp line-numbers">var template = File.ReadAllText("List.handlebars");
var compiledTemplate = Handlebars.Compile(template);
var output = compiledTemplate(employees);
Console.WriteLine(output);
</code></pre>
<p>Line 1 reads the List.handlebars template, which is stored in the same application folder; alternatively, the template can be stored as an embedded resource, retrieved from a database, or even created on the fly.<br />Line 2 compiles the template, generating a function that can be invoked later. </p><p><i><b>Note</b>: For good performance, the compiled template should be generated once and used multiple times during the lifetime of the application.</i><br /></p><p>Line 3 invokes the function, passing the employees collection, and receives the rendered output in a string variable.<br /></p><p>This is the generated HTML:</p><p></p><p>
</p>
<pre><code class="language-html"><html>
<body>
    <table border="1">
        <thead>
            <tr>
                <th>Name</th>
                <th>Age</th>
                <th>Photo</th>
            </tr>
        </thead>
        <tbody>
            <tr>
                <td>John Smith</td>
                <td>2003-09-09T22:08:23.3541971+10:00</td>
                <td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/29/Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg/800px-Houghton_STC_22790_-_Generall_Historie_of_Virginia%2C_New_England%2C_and_the_Summer_Isles%2C_John_Smith.jpg" width="200px" height="200px" /></td>
            </tr>
            <tr>
                <td>Jack</td>
                <td>1998-09-09T22:08:23.3839317+10:00</td>
                <td><img src="https://upload.wikimedia.org/wikipedia/commons/e/ec/Jack_Nicholson_2001.jpg" width="200px" height="200px" /></td>
            </tr>
            <tr>
                <td>Iron Man</td>
                <td>1983-09-09T22:08:23.3839479+10:00</td>
                <td><img src="https://upload.wikimedia.org/wikipedia/en/4/47/Iron_Man_%28circa_2018%29.png" width="200px" height="200px" /></td>
            </tr>
        </tbody>
    </table>
</body>
</html>
</code></pre>
<p>And this is how the output is rendered by a browser:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha2QtxJaeQDEmKzP11iFUDPSMdWti20LLfNjUdeOxVFG2VHXxPqoA-Vjtff9GYsMlfTz3w1T9D5ILggk-kYZ0oNo3Sf0MI9eXweirQjN5xqTnEVyZwuuGZESZ0RSczYvEhfvnxf8vGas8GOLUyq1bBzIigOtwMYlAX5A5257c6wFnClpIbU0AmGwBEFAA/s646/1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="646" data-original-width="541" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha2QtxJaeQDEmKzP11iFUDPSMdWti20LLfNjUdeOxVFG2VHXxPqoA-Vjtff9GYsMlfTz3w1T9D5ILggk-kYZ0oNo3Sf0MI9eXweirQjN5xqTnEVyZwuuGZESZ0RSczYvEhfvnxf8vGas8GOLUyq1bBzIigOtwMYlAX5A5257c6wFnClpIbU0AmGwBEFAA/s16000/1.png" /></a></div><br />Putting aside the lack of styling, which has nothing to do with Handlebars, the output seems good but suffers from two issues:<br /><p></p><ol style="text-align: left;"><li>The format of the Age property is not great.</li><li>The image tags rendered by the template reference the full URL of the images. Every time the generated HTML is consumed and rendered, it will have to fetch the images from their sources, which may be inconvenient. Additionally, the generated template is not self-contained, and other services that consume the generated HTML (like an HTML-to-PDF conversion service) will have to download the images.</li></ol><p>Although Handlebars has a powerful templating language, it's impossible to cover all needs that may arise; this is why Handlebars.net provides the ability to define custom helpers.<br /> </p><h4 style="text-align: left;">Custom Helpers: </h4><div style="text-align: left;">Helpers provide an extensibility mechanism to customize the rendered output. 
Once created and registered with Handlebars.net, they can be invoked from templates as if they were part of Handlebar's templating language.<br />Let's use helpers to solve the date format issue:</div><div style="text-align: left;">
<pre><code class="language-csharp">Handlebars.RegisterHelper("formatDate", (output, context, arguments)
=> { output.Write(((DateTime)arguments[0]).ToString(arguments[1].ToString())); });
</code></pre>
<br /></div><div style="text-align: left;">This one-liner registers a formatDate helper that formats its first argument using the format string passed as the second argument. To call this helper in the template:<br /><br /></div>
<pre><code class="language-html"><td>{{formatDate BirthDate "dd/MM/yyyy"}}</td></code></pre><p>The rendered output is much better now:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYOiTiDNk3xywfdwcSdIAVbIaS82lze2CYhFW_jupZ_23DSkDc8WqjK2Y9ZLMeVQPdWzMcuzEgBzz8zuQniiV0TbTGqcy0Td9komdODrxs2Gb-wIX-ZyEogXyR-d1rnmRsNeqGTWDNROasZIuaDFwiKCyRriRcCdpTkBc5o2xpGZ1yR7nYmZXSQ8Oabgo/s646/2.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="646" data-original-width="367" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYOiTiDNk3xywfdwcSdIAVbIaS82lze2CYhFW_jupZ_23DSkDc8WqjK2Y9ZLMeVQPdWzMcuzEgBzz8zuQniiV0TbTGqcy0Td9komdODrxs2Gb-wIX-ZyEogXyR-d1rnmRsNeqGTWDNROasZIuaDFwiKCyRriRcCdpTkBc5o2xpGZ1yR7nYmZXSQ8Oabgo/s16000/2.png" /></a></div><br /><p></p><h4 style="text-align: left;">Embedding images in the HTML output</h4><div style="text-align: left;">To solve the second issue mentioned above, we can write a custom helper to embed image content using the <a href="https://en.wikipedia.org/wiki/Data_URI_scheme" target="_blank">data URI scheme</a>.<br />This is a basic implementation of this "embeddedImage" helper:</div><div style="text-align: left;"><br /></div>
<pre><code class="language-csharp">Handlebars.RegisterHelper("embeddedImage", (output, context, arguments) =>
{
    var url = arguments[0] as Uri;
    using var httpClient = new HttpClient();
    // add the user-agent header required by Wikipedia; you can safely omit the following line for other sources
    httpClient.DefaultRequestHeaders.UserAgent.Add(new ProductInfoHeaderValue("example.com-bot", "1.0"));
    var content = httpClient.GetByteArrayAsync(url).Result;
    var encodedContent = Convert.ToBase64String(content);
    output.Write("data:image/png;base64," + encodedContent);
});
</code></pre><p>The
code uses an HttpClient to download the image as a byte array, encodes it
using Base64, then writes the output as a data URI in the standard
format. And the usage is very simple:</p>
<pre><code class="language-html"><img width="200px" height="200px" src="{{embeddedImage Photo}}" /></code></pre>
<p>And the HTML output looks like: (trimmed for brevity)</p>
<pre><code class="language-html"><img width="200px" height="200px" src="data:image/png;base64,/9j/4gIcSUNDX1BST0ZJTEUAAQEAAAIMbGNtcwIQAABtbnRyUkdCIFhZWiAH3AABABkAAwApAD.....</code></pre>
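<p>One refinement worth noting: the helper above always writes an <code>image/png</code> prefix, even when the source image is a JPEG. A small helper (a sketch, not part of the original post; the class name is made up) could infer the MIME type from the URL's extension before building the data URI:</p>

```csharp
using System;
using System.IO;

static class MimeHelper
{
    // Maps a few common image extensions to MIME types; falls back to PNG.
    public static string MimeFromUrl(Uri url) =>
        Path.GetExtension(url.AbsolutePath).ToLowerInvariant() switch
        {
            ".jpg" or ".jpeg" => "image/jpeg",
            ".gif" => "image/gif",
            ".svg" => "image/svg+xml",
            _ => "image/png",
        };
}
```

<p>The helper would then write <code>output.Write($"data:{MimeHelper.MimeFromUrl(url)};base64," + encodedContent);</code> instead of hardcoding the PNG prefix.</p>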
<h4 style="text-align: left;"> </h4><h4 style="text-align: left;">Conclusion <br /></h4><p>One of the most important design principles is the <a href="https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle" target="_blank">Open-Closed Principle</a>: software entities should be open for extension but closed for modification. Handlebars and Handlebars.net apply this principle by allowing users to extend the functionality of the library without having to modify its source code, which is a good design. <br />With a plethora of free and commercial libraries available for developers, the level of extensibility should be one of the evaluation criteria used during the selection process.<br />And you, what other templating libraries have you used in .net applications? How extensible are these libraries? <br /></p>Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-25860592874315067652023-06-30T14:01:00.000+02:002023-06-30T14:01:18.314+02:00Mind games of measurements and estimates: Hidden meanings behind numbers and units<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQN4qIsoyLT6hPqU8rO9szCgjsJWvhSFMU3UQBjdeh9c0Ne8NoATUX51Qi3662qscrajoY_J12vNLNHqXUENomW9q0Mos2aOZiw7c-sltd9I0Bz-ogYpIQL-Jep-q7GmZb8lonwTOHZ34fF6hK8w7U-Oia2gtl49H3YxcswWu3tAjI5YHRH0PM0rUBGek/s960/measurement-1476913_960_720.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="640" data-original-width="960" height="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQN4qIsoyLT6hPqU8rO9szCgjsJWvhSFMU3UQBjdeh9c0Ne8NoATUX51Qi3662qscrajoY_J12vNLNHqXUENomW9q0Mos2aOZiw7c-sltd9I0Bz-ogYpIQL-Jep-q7GmZb8lonwTOHZ34fF6hK8w7U-Oia2gtl49H3YxcswWu3tAjI5YHRH0PM0rUBGek/w433-h288/measurement-1476913_960_720.jpg" width="433" /></a></div><br />I'm a fan of science and nature documentaries. 
A few years ago, National Geographic Abu Dhabi was my favorite channel. It primarily featured original NatGeo content dubbed in Arabic.<br />The variety of content and interesting topics, from construction to wildlife, air crash investigations, and even UFOs, provided me with a stream of knowledge and enjoyment. But at times, also confusion!<br /><br />One source of confusion was the highly accurate numbers used to describe things that normally could not be measured to that level of accuracy!<br />In one instance, a wild animal was described as having a weight reaching something like 952 kilograms. Not 900, not 1000, or even 950, but exactly 952.<br />In another instance, a man was describing a flying object, and he mentioned that the altitude of that object was 91 meters. That man must have laser distance meters in his eyes!<br /><br />When I thought about this, I figured out that, probably, while translating these episodes, units of measurement were converted from pounds to kilograms, from feet and yards to meters, from miles to kilometers, and so on. This is because the metric system is used in the Arab world and is more understandable by the audience.<br />Converting the above numbers back to the original units made them sound more logical. The wild animal weighed approximately 2200 pounds, and the man was describing an object flying about 100 yards or 300 feet high. That made much more sense.<br /><br />But why did these round-figure numbers seem more logical and more acceptable when talking about things that cannot be accurately measured? After all, 2200 pounds are equal to 952 kilograms, and 100 yards are 91.44 meters. 
Right?<br /><br />Apparently, the way we perceive numbers in casual conversations implicitly associates an accuracy level with them.<br />This <a href="https://en.wikipedia.org/w/index.php?title=Decimal&oldid=1162494076#cite_note-8" target="_blank">Wikipedia note</a> gives an example of this:<br />"Sometimes, the extra zeros are used for indicating the accuracy of a measurement. For example, "15.00 m" may indicate that the measurement error is less than one centimetre (0.01 m), while "15 m" may mean that the length is roughly fifteen metres and that the error may exceed 10 centimetres."<br /><br />Similarly, smaller units can be used to give a deceiving indication of accuracy. A few years ago, I was working with a colleague on high-level estimates for a software project. We used weeks as our unit of estimation because, as expected, we knew very little about the project, and we expressed this in terms of coarse-grained estimates.<br />From experience, we knew that this level of accuracy wouldn't be welcomed by those who requested the estimates, and they might want more accurate ones. I laughingly told my colleague: "If they want the estimates in hours, they can multiply these numbers by 40!". I feel I was mean saying that. Of course the point was the accuracy, not the unit conversion.<br /><br />One nice thing about using Fibonacci numbers in relative estimates is that they detach the numeric estimates from any perceived accuracy. When the estimate is 13 story points, it's totally clear that the only reason it's 13 (not 12 or 14, for example) is not because we believe it to be accurately 13. It's just because we don't have the other numbers on the estimation cards. It's simply a best guess.<br /><br />Beware of the effects of the units and numbers you use. They may communicate more than what you originally intended.<br /><br /><p></p>Hesham A. 
Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-26603754766645490202023-05-10T13:37:00.004+02:002023-05-10T13:38:26.389+02:00Setting exit code of a .net worker application<p>When building a .net worker application with a hosted service based on the <code>BackgroundService</code> class, it's sometimes required to set the application exit code based on the outcome of the execution of the hosted service.</p><p>One trivial way to do this is to set the <code>Environment.ExitCode</code> property from the hosted service:</p><p></p>
<pre><code class="language-csharp">
public class Worker : BackgroundService
{
    public Worker()
    {
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            throw new Exception("Something bad happened");
        }
        catch
        {
            Environment.ExitCode = 1;
        }
    }
}
</code></pre>
<p>This works, however consider these unit tests:</p><p></p>
<pre><code class="language-csharp">
[Test]
public async Task Test1()
{
    Worker sut = new Worker();
    await sut.StartAsync(new CancellationToken());
    Assert.That(Environment.ExitCode, Is.EqualTo(1));
}

[Test]
public void Test2()
{
    // another test
    Assert.That(Environment.ExitCode, Is.EqualTo(0));
}
</code></pre>
<p>
<code>Test1</code> passes; however, <code>Test2</code> fails, as <code>Environment.ExitCode</code> is a static variable. You can reset it back to zero after the test, but this is error-prone. So what is the alternative?</p><p>One simple solution is to use a status code-holding class as a singleton and inject it into the background service: <br /></p>
<pre><code class="language-csharp">
public interface IStatusHolder
{
    public int Status { get; set; }
}

public class StatusHolder : IStatusHolder
{
    public int Status { get; set; }
}
</code></pre>
<pre><code class="language-csharp">
public class Worker : BackgroundService
{
    private readonly IStatusHolder _statusHolder;

    public Worker(IStatusHolder statusHolder)
    {
        _statusHolder = statusHolder;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            throw new Exception("Something bad happened");
        }
        catch
        {
            _statusHolder.Status = 1;
        }
    }
}
</code></pre>
<p>As simple <code>Program.cs</code> would look like:</p>
<pre><code class="line-numbers language-csharp">
using EnvironmentExit;
IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddHostedService<Worker>();
        services.AddSingleton<IStatusHolder, StatusHolder>();
    })
    .Build();
host.Start();
var statusHolder = host.Services.GetRequiredService<IStatusHolder>();
Environment.ExitCode = statusHolder.Status;
</code></pre>
<p>Note that <code>IStatusHolder</code> is registered as a singleton, which is important so that it maintains its state. <br /></p><p>Now all tests pass. Additionally, when the application runs, the exit code is 1. </p>Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-62756538662112421342023-01-27T14:12:00.004+02:002023-01-27T14:13:41.545+02:00PowerShell core compatibility: A lesson learned the hard way<div style="text-align: left;"><p style="text-align: left;">PowerShell
core is my preferred scripting language. I've been excited about it since its
early days. Here's a tweet from back in 2016 when PowerShell core was still in
beta:</p></div><p> </p>
<blockquote class="twitter-tweet" data-theme="light"><p dir="ltr" lang="en">Running <a href="https://twitter.com/hashtag/PowerShell?src=hash&ref_src=twsrc%5Etfw">#PowerShell</a> on <a href="https://twitter.com/hashtag/bash?src=hash&ref_src=twsrc%5Etfw">#bash</a> on <a href="https://twitter.com/hashtag/Ubuntu?src=hash&ref_src=twsrc%5Etfw">#Ubuntu</a> on <a href="https://twitter.com/hashtag/Windows10?src=hash&ref_src=twsrc%5Etfw">#Windows10</a> . Just because I can :) <a href="https://t.co/VlBppczZ6i">pic.twitter.com/VlBppczZ6i</a></p>— Hesham A. Amin (@HeshamAmin) <a href="https://twitter.com/HeshamAmin/status/766614271149109249?ref_src=twsrc%5Etfw">August 19, 2016</a></blockquote> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
<p> I've
used PowerShell to automate build steps, deployments, and other tasks on both
dev environments and CICD pipelines. It's great to write a script on my Windows
machine, test it using PowerShell core, and run it on my docker Linux-based
build environments with 100% compatibility. Or so I thought until I learned
otherwise!</p>
<p>A few years ago, I was automating a process which required creating a folder if it didn't exist. Out of laziness, this is how I implemented this functionality: </p>
<pre><code class="line-numbers language-powershell">mkdir $folder -f</code></pre>
<p>When the folder exists and the -f (short for -Force) flag is passed, the command returns the existing directory object without errors. I
know this is not the cleanest way (more on this later), but it works on my
Windows machine, so it should also work in the docker Linux container, except
that it didn't. When the script ran, it resulted in this error:</p>
<pre><code class="language-powershell">/bin/mkdir: invalid option -- 'f'
Try '/bin/mkdir --help' for more information.</code></pre>
<p>Why did the behavior differ? It turns out that mkdir means different things depending on whether you're running PowerShell on Windows or Linux. This can be observed using the Get-Command Cmdlet:</p>
<pre><code class="line-numbers language-powershell"># Windows:
Get-Command mkdir</code></pre><p>The output is: <br /></p>
<pre><code class="language-powershell">CommandType     Name      Version
-----------     ----      -------
Function        mkdir</code></pre>
<p>Under Windows, mkdir is a function, and the definition of this function can be obtained using</p>
<pre><code class="language-powershell">(Get-Command mkdir).Definition</code></pre>
<p>And the output is:</p>
<pre><code class="line-numbers language-powershell"><#
.FORWARDHELPTARGETNAME New-Item
.FORWARDHELPCATEGORY Cmdlet
#>
[CmdletBinding(DefaultParameterSetName='pathSet',
    SupportsShouldProcess=$true,
    SupportsTransactions=$true,
    ConfirmImpact='Medium')]
[OutputType([System.IO.DirectoryInfo])]
param(
    [Parameter(ParameterSetName='nameSet', Position=0, ValueFromPipelineByPropertyName=$true)]
    [Parameter(ParameterSetName='pathSet', Mandatory=$true, Position=0, ValueFromPipelineByPropertyName=$true)]
    [System.String[]]
    ${Path},

    [Parameter(ParameterSetName='nameSet', Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [AllowNull()]
    [AllowEmptyString()]
    [System.String]
    ${Name},

    [Parameter(ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
    [System.Object]
    ${Value},

    [Switch]
    ${Force},

    [Parameter(ValueFromPipelineByPropertyName=$true)]
    [System.Management.Automation.PSCredential]
    ${Credential}
)

begin {
    $wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('New-Item', [System.Management.Automation.CommandTypes]::Cmdlet)
    $scriptCmd = {& $wrappedCmd -Type Directory @PSBoundParameters }
    $steppablePipeline = $scriptCmd.GetSteppablePipeline()
    $steppablePipeline.Begin($PSCmdlet)
}

process {
    $steppablePipeline.Process($_)
}

end {
    $steppablePipeline.End()
}
</code></pre><p>Which,
as you can see, wraps the New-Item Cmdlet. However,
under Linux, it's a different story:</p>
<pre><code class="language-powershell"># Linux:
Get-Command mkdir
</code></pre>
<p>Output:</p>
<pre><code class="language-powershell">CommandType     Name      Version
-----------     ----      -------
Application     mkdir     0.0.0.0
</code></pre>
<p>It's an application, and the source of this application can be retrieved as:</p>
<pre><code class="language-powershell">(Get-Command mkdir).Source
</code></pre>
<pre><code class="language-powershell">/bin/mkdir</code></pre>
<p>Now that I know the problem, the solution is easy:</p>
<pre><code class="language-powershell">New-Item -ItemType Directory $folder -Force</code></pre>
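<p>An equivalent portable pattern (a sketch; the folder path here is illustrative) guards the call with <code>Test-Path</code>, which behaves the same on Windows and Linux:</p>

```powershell
# Create the folder only if it doesn't already exist; works in both
# Windows PowerShell and PowerShell core on Linux.
$folder = Join-Path ([System.IO.Path]::GetTempPath()) 'demo-folder'
if (-not (Test-Path -Path $folder)) {
    New-Item -ItemType Directory -Path $folder | Out-Null
}
```

<p>Unlike <code>mkdir -f</code>, nothing here resolves to a native binary, so there is no hidden platform dependency.</p>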
<p>It's generally recommended to use Cmdlets instead of aliases or any kind of
shortcuts to improve readability and portability. Unfortunately,
<a href="https://learn.microsoft.com/en-us/powershell/module/psscriptanalyzer/?view=ps-modules" target="_blank">PSScriptAnalyzer</a> (which integrates well with VSCode) highlights this issue
in scripts only for aliases (like ls), not for functions: see <a href="https://learn.microsoft.com/en-us/powershell/utility-modules/psscriptanalyzer/rules/avoidusingcmdletaliases?view=ps-modules">AvoidUsingCmdletAliases</a>.</p>
<p>I learned my lesson. However, I did it the hard way.</p>Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-43965459428890593422022-06-05T12:34:00.006+02:002022-06-05T12:35:39.424+02:00Reading a file from a Docker container in .net core<p>In many situations you may need to read files from a Docker container using .NET code.<br />The <a href="https://www.nuget.org/packages/Docker.DotNet/" target="_blank">Docker.DotNet</a> library is very useful for interacting with Docker from .NET, and it provides a handy method (<code>GetArchiveFromContainerAsync</code>) to read files from a container.<br />When I tried to use this method to read a small csv/text file, the file content looked a bit odd. It seemed like there was an encoding issue!</p><p>When I checked the <a href="https://github.com/dotnet/Docker.DotNet/blob/f58748616cc5b679b25496926c5688294c94d850/src/Docker.DotNet/Endpoints/IContainerOperations.cs" target="_blank">code on GitHub</a>, I found that the returned data is a tarball stream. This makes sense, as the <a href="https://docs.docker.com/engine/api/v1.21/ " target="_blank">Docker documentation</a> mentions that the returned stream is a Tar stream.<br /></p><p>To read the Tar stream, I tried to use the <a href="https://www.nuget.org/packages/SharpZipLib/" target="_blank">SharpZipLib</a> library's <code>TarInputStream</code> class. However, that didn't work: the library apparently requires a seekable stream, while the stream contained in the <code>GetArchiveFromContainerResponse</code> returned from the method is not.<br />The workaround, which works well for relatively small files, is to copy the stream to a memory stream and use that instead.</p><p></p><p>This is a sample:</p>
<pre><code class="line-numbers language-csharp">DockerClientConfiguration config = new();
using var client = config.CreateClient();
GetArchiveFromContainerParameters parameters = new()
{
Path = "/root/eula.1028.txt"
};
var file = await client.Containers.GetArchiveFromContainerAsync("example", parameters, false);
using var memoryStream = new MemoryStream();
file.Stream.CopyTo(memoryStream);
file.Stream.Close();
memoryStream.Seek(0, SeekOrigin.Begin);
using var tarInput = new TarInputStream(memoryStream, Encoding.ASCII);
tarInput.GetNextEntry();
using var reader = new StreamReader(tarInput);
var content = reader.ReadToEnd();
Console.WriteLine(content);
</code></pre>
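<p>The same round trip can be sketched on the command line. This is only a local simulation of the single-file tar stream the Docker API returns, not a call to Docker itself:</p>

<pre><code class="language-bash"># build a one-file tarball, mimicking the stream GetArchiveFromContainerAsync returns
echo "hello from container" > /tmp/eula.txt
tar -C /tmp -cf /tmp/archive.tar eula.txt
# extract the single entry to stdout, as TarInputStream + StreamReader do in the C# sample
tar -xOf /tmp/archive.tar eula.txt
</code></pre>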
<p>I hope this helps!</p>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-38361504985751804912020-09-19T07:39:00.004+02:002020-09-19T20:33:36.837+02:00Burnout<p>
</p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMO4zKtuBaT5xgRz76lPIV73s9ZC_iv2gJEx39c5nvZGZZTmrPi7__ZaheG7BBgMB2tr8xpJIF6GMHDw69RB1LP0C_Qo8nK-cRt3HYKh-ww3GU8Z6URGWxEw_DDb4Y-zeXE1TsTyEFVAo/s858/match-wood-matches-red-sulfur-wallpaper.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="483" data-original-width="858" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMO4zKtuBaT5xgRz76lPIV73s9ZC_iv2gJEx39c5nvZGZZTmrPi7__ZaheG7BBgMB2tr8xpJIF6GMHDw69RB1LP0C_Qo8nK-cRt3HYKh-ww3GU8Z6URGWxEw_DDb4Y-zeXE1TsTyEFVAo/w640-h360/match-wood-matches-red-sulfur-wallpaper.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">image via <a href="https://www.peakpx.com" target="_blank">Peakpx</a><br /></td></tr></tbody></table> I
recently listened to an interesting <a href="https://hanselminutes.com/697/managing-the-burnout-burndown-with-dr-aneika-simmons" target="_blank">podcast</a> about burnout that stimulated some
thoughts about this silent killer, which can easily run rampant, especially
in the software industry, known to be very mentally demanding.
<p>This
industry attracts very passionate people who, given an interesting enough
problem, will voluntarily give up much of their time, energy, and other aspects
of their social lives and health.</p>
<p>While
seeking the satisfaction of solving complex problems or under tight delivery
pressure, developers "get into the zone" and spend extended hours
without even noticing.</p>
<p>Commonly,
developers take pride in this aspect of their work. Other developers see
this as a model for how a dedicated developer should be. Managers celebrate their developers' heroic efforts, or even take them for granted
until they become a normal expectation.</p>
<p>But
what's wrong with this? If the developer is really passionate about his/her
work, so what?</p><p>One of
the light bulb moments in this podcast is when Dr Aneika (PhD in Organizational Behavior and Human Resources) said:
</p><p> <span style="font-style: italic;">"…you would think that some research or previous
research said, well, maybe engagement is the antonym to burnout. But no, what
we really found out is that people that are really, really engaged are the ones
that are most susceptible to burnout"</span></p>
<p><span style="font-style: italic;">"…to be a great developer, to be a great
programmer, or to be a great coder, you have to really be involved. And that
involvement that takes you in and sucks you in could be the same thing that can
lead you down the road of burnout."</span></p>
<p></p>
<p>No
surprise, then, that developers can go through waves of extreme productivity
followed by low performance if they are not conscious of how their minds and
emotions work.</p>
<p></p>
<p>Another
important aspect to consider, especially if you're a leader in tech, is the
impact of your burnout on how you interact with those you lead.</p>
<p></p>
<p>One component of
burnout is depersonalization: when you're burnt out, you get detached
from the team members around you and focus only on what you get out of them.
To you, they become more like functions with inputs and outputs, and your
relationship becomes merely transactional, which is very dangerous.</p>
<p></p>
<p>To me,
one of the most important leadership traits is empathy. When you're drained to
the extent that you have no emotional capacity for empathy, you lose the
ability to connect with and support your team members. And especially if you're
normally understanding and supportive, your fluctuating behaviour might hurt
the trust you've earned.</p>
<p>Watch for the
signs of burnout. And remember not to deplete all your energy before taking the
time to recharge.</p>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-6610626392617643812020-01-04T10:42:00.003+02:002020-01-04T10:44:00.122+02:00Which language should I speak?<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
<div>
<div>
<div>
<div>
Working
in a diverse environment with team members of many nationalities is a great
experience. You get to know new cultures and recognize how similar people are
across the world, despite the seemingly extreme differences.</div>
<div>
In such an environment, you hear different languages all
the time! And although there is usually a de facto business language (English
in my case, since I'm currently working in Australia), some people prefer to
have conversations in their native tongue
with colleagues who share the same language, even in a business context.</div>
<div>
<br /></div>
<div>
Well,
is that OK?</div>
<div>
There
are many angles from which I see this matter.</div>
<div>
<br /></div>
<div>
<h3 style="text-align: left;">
It's
good to feel natural</h3>
</div>
<div>
As a
non-native English speaker myself, I feel very weird speaking with my Arabic-speaking
colleagues (especially Egyptians) in a secondary language; it just
doesn't feel natural! Why speak in a language that we wouldn't normally use for
a casual chat? Not to mention losing access to the huge stock of
vocabulary and expressions that we share. This leads to the second point:</div>
<div>
<br /></div>
<div>
<h3 style="text-align: left;">
It's
about effective communication</h3>
</div>
<div>
We
need to get the job done, right? So why put a barrier in front of effective
communication? Undoubtedly using my native language makes conveying my thoughts
much easier. Besides, it gives better control over the tone of the
conversation. I suppose the same goes for other nationalities as well.</div>
<div>
<br /></div>
<div>
<h3 style="text-align: left;">
But
what are we missing?</h3>
</div>
<div>
Some
people might feel excluded when others around them speak in a language they
don't understand. However, I haven't seen this causing real issues.</div>
<div>
<br /></div>
<div>
<h3 style="text-align: left;">
A
virtual wall?</h3>
</div>
<div>
I've
been working in Agile teams for years, and I believe in the value of
co-located teams in facilitating communication. </div>
<div>
Many
times I've overheard a discussion between colleagues in
my team area and jumped in to help solve an issue, give guidance on a
topic, or throw in a piece of information that was necessary to solve a
problem. Even if you're not intentionally paying attention, it's possible to
save the team a lot of time going in circles.</div>
<div>
Speaking
in a different language defeats the purpose of co-location and creates virtual
walls. It's the same reason some Agile practitioners recommend against wearing
headphones: they isolate the team member from the surrounding team
interactions.</div>
<div>
<br /></div>
<div>
What
about you? Do you prefer speaking in your first language if it differs from the
common one used at work? On the flip side, how do you feel about
colleagues speaking in a language that you don't understand?</div>
</div>
</div>
</div>
</div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-89613274801348277892019-04-26T08:40:00.004+02:002019-04-26T08:51:02.542+02:00Using Git hooks to alter commit messages<div dir="ltr" style="text-align: left;" trbidi="on">
As developers we try to get the repetitive boring stuff out of our way. We use tools that automate some of our workflows; or, if no tool is available for our specific needs, no problem, we automate them ourselves. We're developers after all!<br />
<br />
In one of the projects I worked on, there was a convention to add the task id as part of each commit message, because some tools were used to generate reports based on it. I'm not sure why this was required in that situation, but I had to follow the convention anyway. Since I tend to make many small commits every day, I was sure I'd forget to add the task id most of the time. So I started investigating Git hooks.<br />
<br />
Git provides many hooks that can be used to automate repetitive actions at the different life cycle steps of Git usage. For example:<br />
• pre-commit<br />
• pre-push<br />
• prepare-commit-msg<br />
• commit-msg<br />
<br />
The ".git/hooks" folder within the Git repository contains many sample hook files, which are good starting points. The one of interest in this case was the <b>commit-msg</b> hook.<br />
<br />
In my scenario, we had a convention to name our branches using the patterns "feature/&lt;task-id&gt;" or "bug/&lt;task-id&gt;".<br /><br />So I decided to deduce the task id from the branch name and prepend it to the commit message.<br />I created a file named <b>commit-msg</b> in the <b>.git/hooks</b> folder, with code similar to:<br />
<pre><code class="line-numbers language-bash">#!/bin/sh
message=$(cat "$1")
branch=$(git branch | grep \* | cut -d ' ' -f2-)
task=$(echo "$branch" | cut -d / -f2-)
echo "$task - $message" > "$1"</code></pre>
<ul style="text-align: left;">
<li>Line 2: reads the original commit message from the temp file, whose name is passed as the first parameter to the script.</li>
<li>Line 3: reads the current branch name. Thanks to <a href="https://stackoverflow.com/questions/6245570/how-to-get-the-current-branch-name-in-git/11868440" target="_blank">StackOverflow</a>.</li>
<li>Line 4: extracts the task id from the branch name by splitting the string by the "/" character and taking the second part.</li>
<li>Line 5: overwrites the commit message with the required format.</li>
</ul>
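<p>The extraction logic from line 4 can be checked quickly outside Git. A small sketch using the two branch-naming conventions from this post:</p>

<pre><code class="language-bash"># same cut-based extraction as in the hook
for branch in "feature/1234" "bug/987"; do
  task=$(echo "$branch" | cut -d / -f2-)
  echo "$task"
done
</code></pre>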
<br />
Now when I commit code using:<br />
<pre><code class="language-bash">git commit -m"test message"</code></pre>
And then inspect the log using the git log command, the commit message is modified as needed:
<br />
<pre data-line="5"><code class="language-bash">commit f1fe8918c754ca89649a2a86ef4ab0a9a53c0496 (HEAD -> feature/1234)
Author: Hesham A. Amin
Date: Fri Apr 26 08:24:40 2019 +0200
1234 - test message
commit 4e3e180d3a27772a32230bf6dbbd039b949dc30e
...</code></pre>
<br />
Investing a few minutes to automate daunting repetitive tasks pays off in the long term.</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-20084544433045192202018-12-27T14:30:00.000+02:002018-12-27T14:30:10.218+02:00Removing the Server header from Kestrel hosted ASP.NET core apps<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
In the continuous battle between software builders and attackers, the less information an application discloses about its infrastructure, the better.<br />
One of the issues I've repeatedly seen in penetration testing reports for web applications is the presence of the Server header, which, as mentioned on <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Server" target="_blank">MDN</a>:<br />
<br />
<q>
The Server header contains information about the software used by the origin server to handle the request.</q><br />
<br />
Also as mentioned by MDN:<br />
<br />
<q>
Overly long and detailed Server values should be avoided as they potentially reveal internal implementation details that might make it (slightly) easier for attackers to find and exploit known security holes. </q>
<br />
<br />
By default, when using Kestrel web server to host an ASP.NET core application, Kestrel returns the Server header with the value Kestrel as shown in this screenshot from Postman:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOQlJpoJ3opcDWw3Yr_OGSg1aP7RV4PBtZmE8MURo3PL-XI2_GS9rl1Z1F_Hrzf6ZVROCzaxaWdajCxGo1QQjXJii-3JYk6v_S5KZlZ9cpk2GsHoGvN5EEhxCPyazYKkdxF85sD9HowUE/s1600/kestrel.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="117" data-original-width="348" height="133" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOQlJpoJ3opcDWw3Yr_OGSg1aP7RV4PBtZmE8MURo3PL-XI2_GS9rl1Z1F_Hrzf6ZVROCzaxaWdajCxGo1QQjXJii-3JYk6v_S5KZlZ9cpk2GsHoGvN5EEhxCPyazYKkdxF85sD9HowUE/s400/kestrel.PNG" width="400" /></a></div>
Even though it doesn't sound like a big security risk, I just prefer to remove this header. This could be achieved by adding this line to the <b>ConfigureServices</b> method in the application <b>Startup</b> class:</div>
<pre><code class="language-csharp">services.PostConfigure&lt;KestrelServerOptions&gt;(k => k.AddServerHeader = false);</code></pre>
<br />
The <b>PostConfigure</b> configurations <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options" target="_blank">run after all</a> <b>Configure&lt;T&gt;</b> methods. So it's a good place to override the default behavior.<br />
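<p>To verify the change, you can inspect the response headers from the command line. The URL below is an assumption (a locally running instance on port 5000; adjust to your app); the check simply reports whether a <code>Server</code> header came back:</p>

<pre><code class="language-bash"># hypothetical local endpoint; adjust the port to your app's
curl -sI http://localhost:5000/ | grep -i "^server:" || echo "no Server header"
</code></pre>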
<b></b></div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-69427807392067374782017-09-24T13:50:00.002+02:002017-09-24T13:54:41.641+02:00Azure Event Grid WebHooks - Retries (Part 3)<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
Building distributed systems is challenging. If not carefully designed and implemented, a failure in one component can cause cascading failures that affect the whole system. That's why patterns like <a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/retry" target="_blank">Retry</a> and <a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker" target="_blank">Circuit Breaker</a> should be considered to improve system resilience. When sending WebHooks, the situation can be even worse: your system is calling a totally external system with no availability guarantees, over the internet, which is less reliable than your internal network.<br />
Continuing on the previous parts of this series (<a href="http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-part-1.html" target="_blank">Part 1</a>, <a href="http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-filtering.html" target="_blank">Part 2</a>) I'll show how to use Azure Event Grid to overcome this challenge.<br />
<br />
<h3 style="text-align: left;">
Azure Event Grid Retry Policy</h3>
Azure Event Grid provides a built-in capability to retry failed requests with exponential backoff, which means that in case the WebHook request fails, it will be retried with increased delays.<br />
As per the <a href="https://docs.microsoft.com/en-us/azure/event-grid/delivery-and-retry" target="_blank">documentation</a>, failed requests are retried after 10 seconds; if a request fails again, delivery is retried after 30 seconds, 1 minute, 5 minutes, 10 minutes, 30 minutes, and 1 hour. These aren't exact intervals, though, as Azure Event Grid adds some randomization to them.<br />
Events that aren't delivered within 2 hours expire. This duration should be increased to 24 hours after the preview phase.<br />
This behavior is not trivial to implement, which adds to the reasons why a service like Azure Event Grid should be considered as an alternative to implementing its capabilities from scratch.<br />
<br />
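<p>To get a feel for the schedule, here's an illustrative sketch only: it prints a backoff sequence built from the documented base intervals plus a little jitter. This is in no way Event Grid's actual implementation:</p>

<pre><code class="language-bash"># base retry intervals (seconds) from the Event Grid docs: 10s, 30s, 1m, 5m, 10m, 30m, 1h
for base in 10 30 60 300 600 1800 3600; do
  jitter=$(( $(date +%s) % 10 ))   # crude stand-in for the service's randomization
  echo "retry after $(( base + jitter ))s"
done
</code></pre>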
<h3 style="text-align: left;">
Testing Azure Event Grid Retry</h3>
To try this capability, building on the example used in <a href="http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-part-1.html" target="_blank">Part 1</a>, I made a change to the AWS Lambda function that receives the WebHook to introduce random failures:<br />
<br /></div>
<pre class="line-numbers" data-line="9-15"><code class="language-csharp">public object Handle(Event[] request)
{
Event data = request[0];
if(data.Data.validationCode!=null)
{
return new {validationResponse = data.Data.validationCode};
}
var random = new Random(Guid.NewGuid().GetHashCode());
var value = random.Next(1 ,11);
if(value > 5)
{
throw new Exception("Failure!");
}
return "";
}
</code></pre>
<br />
Lines 9-15 produce roughly a 50% failure rate. When I pushed an event (as shown in the previous posts) to 1,000 WebHook subscribers, the result was the chart below, depicting the number of API calls per minute and the number of 500 errors per minute:<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4vzmbN6Wv8ikoq3CyGPYBUAUHYBzPAHT4ka98gUAFWCapoJRRc6YkgrmI2vIg0sGJ4TWRP2H9eHqYtCQZ2cSFH7TkAY2zQboCjSxOiax8lADhYcFgrr8LAc1AJdPAOX22WnX5maxo32I/s1600/Azure-Event-Grid-Retry.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="421" data-original-width="1600" height="168" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4vzmbN6Wv8ikoq3CyGPYBUAUHYBzPAHT4ka98gUAFWCapoJRRc6YkgrmI2vIg0sGJ4TWRP2H9eHqYtCQZ2cSFH7TkAY2zQboCjSxOiax8lADhYcFgrr8LAc1AJdPAOX22WnX5maxo32I/s640/Azure-Event-Grid-Retry.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Number of requests per minute (Blue) - Number of 500 Errors per minute (Orange)</td></tr>
</tbody></table>
<br />
We can observe the following:<br />
<ul style="text-align: left;">
<li>The number of errors (orange) is almost half the number of requests (blue)</li>
<li>The number of requests is around 1500 in the first minute. My explanation is that, since we have 1000 listeners and a 50% failure rate, Azure made an extra 500 retry requests.</li>
<li>After a bit less than 2 hours (not shown in the chart for size constraints), the number of errors dropped to 5 and no more requests were made. This is due to the expiration period during the preview.</li>
</ul>
<br />
<h3 style="text-align: left;">
Summary</h3>
Azure Event Grid is a scalable and resilient service that can be used to handle thousands (maybe more) of WebHook receivers. Whether your solution is hosted on premises or on Azure, you can use this service to offload a lot of work and effort.<br />
I wish Azure Event Grid offered some insight into how events are pushed and received, which would help a lot in troubleshooting, as the subscriber is usually not under your control. I hope this becomes an integrated part of the Azure portal.<br />
It's worth mentioning that other cloud providers offer functionality similar to Event Grid that is worth checking out, specifically <a href="https://aws.amazon.com/sns/" target="_blank">Amazon Simple Notification Service (SNS)</a> and <a href="https://cloud.google.com/pubsub/docs/overview" target="_blank">Google Cloud Pub/Sub</a>. Both have overlapping functionality with Azure Event Grid.<br />
<br /></div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com1tag:blogger.com,1999:blog-2496415891665263000.post-54975250615653420222017-08-27T04:27:00.002+02:002017-08-27T04:41:48.036+02:00Azure Event Grid WebHooks - Filtering (Part 2)<div dir="ltr" style="text-align: left;" trbidi="on">
In my <a href="http://blog.heshamamin.com/2017/08/azure-event-grid-webhooks-part-1.html" target="_blank">previous post</a> I introduced <a href="https://docs.microsoft.com/en-us/azure/event-grid/" target="_blank">Azure Event Grid</a> and demonstrated how simple it is to use Event Grid to push hundreds of events to subscribers using WebHooks.<br />
In today's post I'll show a powerful capability of Event Grid: filters.<br />
<br />
<h3 style="text-align: left;">
What are Filters?</h3>
Subscribing to a topic means that all events pushed to this topic will be pushed to the subscriber. But what if the subscriber is interested in only a subset of the events? For example, in my previous post I created a blog topic, and all subscribers to this topic will receive notifications about new and updated blog posts, new comments, etc. But some subscribers might be interested only in posts and want to ignore comments. Instead of creating a separate topic for each type of event, which would require separate subscriptions, Event Grid has the concept of filters. Filters are applied to the event content, and events will only be pushed to subscribers with matching filters.<br />
The below diagram demonstrates this capability:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFVKDHsncmGsZo8x6K9KHtNwSfg3_v08mwkLsrqd8RgEnXWe_n3bdg9SrB5ZcidtMELn9jnXogO1Ss8kICFI2gm_eYqqhXJ2EnzY3_CYbJ418V9LJOR6jBzyTeMujvevw0G1jovF_QgzA/s1600/event-grid-topic-filters.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="680" data-original-width="1140" height="380" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFVKDHsncmGsZo8x6K9KHtNwSfg3_v08mwkLsrqd8RgEnXWe_n3bdg9SrB5ZcidtMELn9jnXogO1Ss8kICFI2gm_eYqqhXJ2EnzY3_CYbJ418V9LJOR6jBzyTeMujvevw0G1jovF_QgzA/s640/event-grid-topic-filters.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Filtering based on Subject prefix/suffix</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Azure Event Grid supports two types of filters:</div>
<ul style="text-align: left;">
<li>Subject prefix and suffix filters.</li>
<li>Event type filters. </li>
</ul>
<h3 class="separator" style="clear: both; text-align: left;">
Subject prefix and suffix filters </h3>
<h4 class="separator" style="clear: both; text-align: left;">
</h4>
<div class="separator" style="clear: both; text-align: left;">
In this example I'll use a prefix filter to receive only events with subject starting with "post" using the <b>--subject-begins-with post</b> parameter.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<pre><code class="language-powershell">az eventgrid topic event-subscription create --name postsreceiver --subject-begins-with post --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/post -g rg --topic-name blog</code></pre>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Similarly:</div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<pre><code class="language-powershell">az eventgrid topic event-subscription create --name commenstreceiver --subject-begins-with comment --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/comment -g rg --topic-name blog</code></pre>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
An event that looks like:</div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<pre><code class="language-json">[
{
"id": "2134",
"eventType": "new",
"subject": "comments",
"eventTime": "2017-08-20T23:14:22+1000",
"data":{
"content": "Azure Event Grid",
"postId": "123"
}
}
]
</code></pre>
<br />
<div class="separator" style="clear: both; text-align: left;">
Will only be pushed to the second subscriber because it matches the filter.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
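<p>Conceptually, the prefix filter is just a string comparison on the subject. A rough sketch of the matching (my own illustration, not the service's implementation):</p>

<pre><code class="language-bash"># deliver only events whose subject starts with the configured prefix
prefix="comment"
for subject in "comments" "posts"; do
  case "$subject" in
    "$prefix"*) echo "$subject: deliver" ;;
    *)          echo "$subject: skip" ;;
  esac
done
</code></pre>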
<h3 class="separator" style="clear: both; text-align: left;">
Filtering based on event type</h3>
<div class="separator" style="clear: both; text-align: left;">
Another way for the subscriber to filter pushed messages is specifying event types. By default, when a new subscription is added, the subscriber's filter data looks like:</div>
<pre><code class="language-json">"filter": {
"includedEventTypes": [
"All"
],
"isSubjectCaseSensitive": null,
"subjectBeginsWith": "",
"subjectEndsWith": ""
}
</code></pre>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The <b>includedEventTypes</b> attribute equals "All", which means that the subscriber will get all events regardless of type.</div>
<div class="separator" style="clear: both; text-align: left;">
You can filter on multiple event types, passed as space-separated values to the <b>--included-event-types</b> parameter: </div>
<pre><code class="language-powershell">az eventgrid topic event-subscription create --name newupdatedreceiver --included-event-types new updated --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/newupdated -g rg --topic-name blog
</code></pre>
<br />
<div class="separator" style="clear: both; text-align: left;">
which results in:</div>
<pre><code class="language-json"> "filter": {
"includedEventTypes": [
"new",
"updated"
],
"isSubjectCaseSensitive": null,
"subjectBeginsWith": "",
"subjectEndsWith": ""
}
</code></pre>
<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
</div>
<div class="separator" style="clear: both; text-align: left;">
Which means that only events with type "new" or "updated" will be pushed to this subscriber. This event won't be pushed:</div>
<pre><code class="language-json">[
{
"id": "123456",
"eventType": "deleted",
"subject": "posts",
"eventTime": "2017-08-20T23:14:22+1000",
"data":{
"postId": "123"
}
}
]
</code></pre>
<br />
<h3 class="separator" style="clear: both; text-align: left;">
Summary</h3>
Enabling the subscriber to have control over which events it receives based on subject prefix, suffix, or event type (or a mix of these options) is a powerful capability of Azure Event Grid. Routing events declaratively, without writing any logic on the event source side, significantly simplifies this scenario.</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com1tag:blogger.com,1999:blog-2496415891665263000.post-42301477333848732112017-08-22T23:20:00.000+02:002017-09-21T12:56:35.657+02:00Azure Event Grid WebHooks (Part 1)<div dir="ltr" style="text-align: left;" trbidi="on">
A few days ago, Microsoft <a href="https://azure.microsoft.com/en-us/blog/introducing-azure-event-grid-an-event-service-for-modern-applications/" target="_blank">announced</a> the new <a href="https://docs.microsoft.com/en-us/azure/event-grid/" target="_blank">Event Grid</a> service. The service is described as:<br />
"<i>... a fully-managed intelligent event routing service
that allows for uniform event consumption using a publish-subscribe
model.</i>"<br />
Although not directly related, I see this service as a complement to the serverless offerings provided by Microsoft after Azure Functions and Logic Apps.<br />
<br />
Event Grid has many capabilities and scenarios. In brief, it's a service that can listen to multiple event sources using topics and publish their events to subscribers or handlers that are interested in them.<br />
Event sources can be Blob storage events, Event Hub events, custom events, etc., and subscribers can be Azure Functions, Logic Apps, or WebHooks.<br />
In this post I'll focus on pushing WebHooks in a scalable, reliable, pay as you go, and easy manner using Event Grid.<br />
<br />
<h3 style="text-align: left;">
Topics, and WebHooks</h3>
<div style="text-align: left;">
Topics are a way to categorize events. A publisher defines topics and sends specific events to these topics. Subscribers can subscribe to topics to listen and respond to events published by event sources.</div>
The concept of WebHooks is not new. WebHooks are HTTP callbacks that respond to events that originated in other systems. For example you can create HTTP endpoints that listen to WebHooks published by GitHub when code is pushed to a specific repository. This creates an almost endless number of integration possibilities.<br />
In this post we'll simulate a blogging engine that pushes events when new posts are published. And we'll create a subscriber that listens to these events.<br />
<br />
<h3 style="text-align: left;">
Creating a topic</h3>
<div style="text-align: left;">
The first step to publishing a custom event is to create a topic. Like other Azure resources, Event Grid topics are created in resource groups. To create a new resource group named "rg", we can execute this command using <a href="https://docs.microsoft.com/en-us/cli/azure/overview" target="_blank">Azure CLI v2.0.</a></div>
<div style="text-align: left;">
</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-powershell">az group create --name rg --location westus2</code></pre></div>
<div style="text-align: left;">
I chose the westus2 region because Event Grid currently has limited region availability, but this changes <a href="https://azure.microsoft.com/en-us/regions/services/" target="_blank">all the time</a>.</div>
<div style="text-align: left;">
The next step is to create a topic in the resource group. We'll name our topic "blog":</div>
<div style="text-align: left;">
<pre><code class="language-powershell">az eventgrid topic create --name blog -l westus2 -g rg</code></pre></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
When you run the above command, the response should look like:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-json">{
"endpoint": "https://blog.westus2-1.eventgrid.azure.net/api/events",
"id": "/subscriptions/5f1ef4e8-6358-4a75-b171-58904114fb57/resourceGroups/rg/providers/Microsoft.EventGrid/topics/blog",
"location": "westus2",
"name": "blog",
"provisioningState": "Succeeded",
"resourceGroup": "rg",
"tags": null,
"type": "Microsoft.EventGrid/topics"
}</code></pre></div>
<div style="text-align: left;">
Observe the endpoint attribute. Now we have the URL to be used to push events: <i>https://blog.westus2-1.eventgrid.azure.net/api/events</i>.</div>
<div style="text-align: left;">
<br />
<br /></div>
<h3 style="text-align: left;">
Subscribing to a topic</h3>
To show the capabilities of Event Grid, I need to create hundreds of subscribers. You can create your subscribers in any HTTP-capable framework. I chose to use AWS Lambda functions + API Gateway hosted in the Sydney region. This proves that there is no Azure magic by any means: just pure HTTP WebHooks sent from Azure's data centers in west US to AWS data centers in Sydney.<br />
The details of creating Lambda functions and exposing them using API Gateway aren't relevant to this post; the important thing is to understand that I have an endpoint that listens to HTTP requests on: <i>https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/{id}</i> and forwards them to an AWS Lambda implemented in C#.<br />
The command to create a subscription looks like:<br />
<br />
<pre><code class="language-powershell">az eventgrid topic event-subscription create --name blogreceiver --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/ -g rg --topic-name blog </code></pre><br />
I created 100 subscriptions using this simple PowerShell script:<br />
<br />
<pre><code class="language-powershell">while($val -ne 100) { $val++ ; az eventgrid topic event-subscription create --name blogreceiver$val --endpoint https://twzm3c5ry2.execute-api.ap-southeast-2.amazonaws.com/prod/$val -g rg --topic-name blog}</code></pre><br />
An important thing to notice is the security implication of this model: if I were able to specify any URL as a subscriber to my topic, I'd be able to use Azure Event Grid as a DDoS attack tool. That's why subscription verification is very important.<br />
<br />
<h3 style="text-align: left;">
Subscription verification</h3>
<div style="text-align: left;">
To verify that the subscription endpoint is a real URL and is really willing to subscribe to the topic, a verification request is sent to the subscription endpoint when the subscription is created. This request looks like:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-json">[
{
"Id": "dbb80f11-6fbb-4fc3-9c1f-034f00da3b5f",
"Topic": "/subscriptions/5f1ef4e8-6358-4a75-b171-58904114fb57/resourceGroups/rg/providers/microsoft.eventgrid/topics/blog",
"Subject": "",
"Data": {
"validationCode": "4fc3f59c-2d03-41f4-b466-da65a81f8ba5"
},
"EventType": "Microsoft.EventGrid/SubscriptionValidationEvent",
"EventTime": "2017-08-20T11:11:00.0101361Z"
}
]</code></pre></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
The validationCode attribute has a unique key to identify the subscription request. The endpoint should respond to the verification request with the same code:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-json">{"validationResponse":"4fc3f59c-2d03-41f4-b466-da65a81f8ba5"}</code></pre></div>
<div style="text-align: left;">
<br /></div>
<h3 style="text-align: left;">
The subscriber </h3>
<div style="text-align: left;">
The subscriber is very simple. It checks whether the request has a validation code. If so, it responds with the validation response. Otherwise it just returns 200 or 202. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-csharp">
public class Event
{
    public Data Data { get; set; }
}

public class Data
{
    public string validationCode { get; set; }
}

public class Receiver
{
    public object Handle(Event[] request)
    {
        Event data = request[0];
        if (data.Data.validationCode != null)
        {
            return new { validationResponse = data.Data.validationCode };
        }
        return "";
    }
}
</code></pre></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Note that the AWS API Gateway is responsible for setting the status code to 200.</div>
<div style="text-align: left;">
<br /></div>
<h3 style="text-align: left;">
</h3>
<h3 style="text-align: left;">
Pushing events</h3>
<div style="text-align: left;">
As I showed above, I created 100 subscribers. Now it's time to start pushing events. Pushing an event is a simple POST request, but of course this request must be authenticated. The supported authentication methods are Shared Access Signature "SAS" and keys. I'll use the latter for simplicity.</div>
<div style="text-align: left;">
To retrieve the key, you can use the management portal or this command:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-powershell">az eventgrid topic key list --name blog --resource-group rg</code></pre></div>
<div style="text-align: left;">
To configure my .NET Core console application that will push the events, I created two environment variables using PowerShell:</div>
<div style="text-align: left;">
<pre><code class="language-powershell">$env:EventGrid:EndPoint = "https://blog.westus2-1.eventgrid.azure.net/api/events"
$env:EventGrid:Key = "HQI2Ff7MoqlV8RFc/U........."</code></pre></div>
<div style="text-align: left;">
I created a class to read the configuration variables into an instance of it:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-csharp">class EventGridConfig
{
public string EndPoint { get; set; }
public string Key { get; set; }
}</code></pre></div>
<div style="text-align: left;">
The rest is simple: reading the configuration variables and posting an event to the endpoint.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<pre><code class="language-csharp">// Read settings from environment variables.
var builder = new ConfigurationBuilder().AddEnvironmentVariables();
var Configuration = builder.Build();
var config = new EventGridConfig();
Configuration.GetSection("EventGrid").Bind(config);
var http = new HttpClient();
string content = @"
[
{
""id"": ""123"",
""eventType"": ""NewPost"",
""subject"": ""blog/posts"",
""eventTime"": ""2017-08-20T23:14:22+1000"",
""data"":{
""title"": ""Azure Event Grid"",
""author"": ""Hesham A. Amin""
}
}
]";
http.DefaultRequestHeaders.Add("aeg-sas-key", config.Key);
var result = http.PostAsync(config.EndPoint, new StringContent(content)).Result;
</code></pre></div>
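<div style="text-align: left;">
The hand-written JSON string above can also be produced by serializing plain objects, which avoids escaping mistakes. A minimal sketch (using System.Text.Json is my assumption here; a 2017-era project would more likely use Newtonsoft.Json):</div>
<div style="text-align: left;">
<pre><code class="language-csharp">// Build the event payload as an array of anonymous objects.
var events = new[]
{
    new
    {
        id = "123",
        eventType = "NewPost",
        subject = "blog/posts",
        eventTime = System.DateTimeOffset.UtcNow.ToString("o"),
        data = new { title = "Azure Event Grid", author = "Hesham A. Amin" }
    }
};
// Serialize to the JSON body expected by the topic endpoint.
string content = System.Text.Json.JsonSerializer.Serialize(events);</code></pre></div>
<div style="text-align: left;">
The resulting string can then be posted with the same HttpClient call shown above.</div>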
<div style="text-align: left;">
Now it's Azure Event Grid's turn to push this event to the 100 subscribers.</div>
<br />
<h3 style="text-align: left;">
The result</h3>
Running the above console application sends a request to Azure Event Grid. In turn, it sends the event to the 100 subscribers I've created.<br />
To see the result, I used the AWS API Gateway CloudWatch graphs, which show the number of requests to my endpoint. I ran the application a few times and the result was this graph:<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAGsz7XLBUX_mclssnzYKqra7mB1B1x5yiFiCICL2ViTlK5JCRqRFdC4NmIWnkyO8jxzCeWnl32vMaFZi8vsOFCtw1jrjwIGm8cpoGfwmBC5wVxzCYajtkC6ywecjP9rBES_EIDOreLkw/s1600/EventHub-CloudWatch.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="301" data-original-width="1600" height="120" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAGsz7XLBUX_mclssnzYKqra7mB1B1x5yiFiCICL2ViTlK5JCRqRFdC4NmIWnkyO8jxzCeWnl32vMaFZi8vsOFCtw1jrjwIGm8cpoGfwmBC5wVxzCYajtkC6ywecjP9rBES_EIDOreLkw/s640/EventHub-CloudWatch.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Requests per minute</td></tr>
</tbody></table>
<br />
<br />
<h3 style="text-align: left;">
Summary</h3>
In this post I've shown how to use Azure Event Grid to push WebHooks to HTTP endpoints and how to subscribe to these WebHooks.<br />
In upcoming posts I'll explore more capabilities of Azure Event Grid.<br />
<br /></div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-72388723948828175142017-08-17T23:27:00.004+02:002017-08-17T23:27:56.170+02:00My AWS IaaS playlist for Arabic speakers<div dir="ltr" style="text-align: left;" trbidi="on">
If you're an Arabic speaker and interested in learning about AWS IaaS, check my <a href="https://www.youtube.com/playlist?list=PLIv0fHmhJRMJlzDiAWjaaZrFCvzqrhZIi" target="_blank">AWS IaaS [Arabic]</a> YouTube playlist. In this series of videos, I go step by step through creating a scalable, secure web application using AWS's infrastructure-as-a-service offering.<br />
I follow a problem-solution approach: I start with a very basic but functional solution, identify the challenges it has, then move to the next step in a logical progression towards the end goal.<br />
<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/videoseries?list=PLIv0fHmhJRMJlzDiAWjaaZrFCvzqrhZIi" width="560"></iframe><br />
<br />
And if you have no idea what capabilities AWS has, you can check my <a href="https://www.youtube.com/watch?v=QQ7gmr6RPlI&t=60s" target="_blank">introductory video</a>. It's a bit dated but still relevant.</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-5936763945254170492017-07-23T11:39:00.001+02:002017-07-23T11:39:52.294+02:00My talk at DDDSydney 2017<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhV9jPJhpBY0_8Pg101LPgkft_GBMh34Y5BSmuHwpEtC1KPl1PgeqTaYWXiV9btgUue0b1LdU3JGmvHyqBvlkmaMDNf6TauHdtsLkdTO_zJHyDkB2R3ASeI6Gut1t5McZot-y_30MNahCI/s1600/dddsydney.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="971" data-original-width="975" height="316" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhV9jPJhpBY0_8Pg101LPgkft_GBMh34Y5BSmuHwpEtC1KPl1PgeqTaYWXiV9btgUue0b1LdU3JGmvHyqBvlkmaMDNf6TauHdtsLkdTO_zJHyDkB2R3ASeI6Gut1t5McZot-y_30MNahCI/s320/dddsydney.png" width="320" /></a>It was very exciting to attend and speak at <a href="http://2017.dddsydney.com.au/" target="_blank">DDDSydney 2017</a>. A lot of interesting topics were presented, and the organizers did a good job classifying the sessions into tracks that one can follow to get a complete picture of a certain area of interest. For example, my session "<i>Avoiding death by a thousand containers. Kubernetes to the rescue!</i>" was the last in a track that had sessions about microservices and Docker. That made it a logical conclusion on how to host containerized microservices in a highly available and easy-to-manage environment.<br />
<br />
In my demos I used AWS. This choice was intentional, since AWS doesn't support Kubernetes out of the box as both Google Container Engine (GKE) and Azure Container Service (ACS) do. I wanted to show that Kubernetes can be deployed to other environments as well, thanks to Kops (Kubernetes Operations), which made it relatively easy to deploy the Kubernetes cluster on AWS.<br />
In this session I showed how to expose services using an external load balancer and how deployments make it easy to declare the desired state of the Pods deployed to Kubernetes. I also demonstrated the very powerful concept of Labels and Selectors, which are a loosely coupled way to connect services to the Pods that contain the service logic.<br />
<br />
<br /></div>
<script src="https://gist.github.com/heshamamin/4ba2d8c0781eb909e59a14b0bb7522c1.js"></script>
I also demonstrated how easy it is to perform an update to the deployment by switching from Nginx to Apache (httpd).</div>
In another demo I wanted to demonstrate how to connect services inside the cluster. I made a simple .net core web application that counts the number of hits each frontend gets. The hit count is stored in a Redis instance that's exposed through a service.<br />
<br />
<script src="https://gist.github.com/heshamamin/2e499261fb7e7c16c05855379b83584e.js"></script>
<br />
The interesting part is how the web application determines the address of the Redis instance. As the docker image should be immutable once created, configurations should be <a href="https://12factor.net/config" target="_blank">stored in the environment.</a></div>
<br />
<script src="https://gist.github.com/heshamamin/bc2ecf3784e4292a4d59a4056d5e84bc.js"></script>
As in the above code snippet, the environment variable REDIS_SERVICE_HOST is used to get the address of the Redis service. This environment variable is automatically populated by Kubernetes since the Redis service is created before the web application deployment. Otherwise, DNS service discovery could be used.
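<br />
In C#, reading that variable comes down to a single call. A minimal sketch (the localhost fallback is my assumption for running outside the cluster):<br />
<pre><code class="language-csharp">// Kubernetes injects REDIS_SERVICE_HOST for a service named "redis".
// Fall back to localhost when running outside the cluster (assumption).
string redisHost =
    System.Environment.GetEnvironmentVariable("REDIS_SERVICE_HOST") ?? "localhost";</code></pre>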
I used a simple script to hit the web API. I also manually deleted Pods that host the web API, and thanks to Kubernetes' desired-state magic, new instances kept being created automatically. This was the result of hitting the service:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMhi4U7sFYZRQLi1j8cyWUUAk4LDr3i5ZqaXMBVE_7XeIxu3Zqmm6YUKyD4gVm8NDZ0UPgzm9hiI3jpl7C1VhlMnrdU13BIlYph-7XgO0QfIkzRD2BvVF0xX5b02IvQZp0J-SanuW40mQ/s1600/KubeVote.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="912" data-original-width="1600" height="363" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMhi4U7sFYZRQLi1j8cyWUUAk4LDr3i5ZqaXMBVE_7XeIxu3Zqmm6YUKyD4gVm8NDZ0UPgzm9hiI3jpl7C1VhlMnrdU13BIlYph-7XgO0QfIkzRD2BvVF0xX5b02IvQZp0J-SanuW40mQ/s640/KubeVote.gif" width="640" /></a></div>
<br />
Requests go through AWS load balancing to Kubernetes nodes. The service passes the requests to Pods hosting the API.<br />
<br />
Kubernetes is a fast-moving open source project, and I think the greatest thing about it is its community and wide support. So if you're planning to host containerized workloads, give it a try!<br />
<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="485" marginheight="0" marginwidth="0" scrolling="no" src="//www.slideshare.net/slideshow/embed_code/key/sG9q0DHttwWXUf" style="border-width: 1px; border: 1px solid #ccc; margin-bottom: 5px; max-width: 100%;" width="595"> </iframe> <br />
<div style="margin-bottom: 5px;">
<b> <a href="https://www.slideshare.net/HeshamAmin/kubernetes-talk-at-dddsydney-2017" target="_blank" title="Kubernetes talk at DDDSydney 2017">Kubernetes talk at DDDSydney 2017</a> </b> from <b><a href="https://www.slideshare.net/HeshamAmin" target="_blank">Hesham Amin</a></b> </div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-78381944893892714112017-05-20T03:35:00.004+02:002017-05-20T03:35:44.192+02:00Detecting applications causing SQL Server locks<div dir="ltr" style="text-align: left;" trbidi="on">
On one of our testing environments, login attempts to a legacy web application that uses MS SQL Server were timing out and failing. I suspected that the reason might be that another process was locking one of the tables needed in the login process.<br />
I ran a query similar to this:<br />
<br />
<pre><code>SELECT request_mode,
request_type,
request_status,
request_session_id,
resource_type,
resource_associated_entity_id,
CASE resource_associated_entity_id
WHEN 0 THEN ''
ELSE OBJECT_NAME(resource_associated_entity_id)
END AS Name,
host_name,
host_process_id,
client_interface_name,
program_name,
login_name
FROM sys.dm_tran_locks
JOIN sys.dm_exec_sessions
ON sys.dm_tran_locks.request_session_id = sys.dm_exec_sessions.session_id
WHERE resource_database_id = DB_ID('AdventureWorks2014')
</code></pre>
<br />
Which produces a result similar to:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmaI7MzPukA44Efdi57WQzbpKugXJQzIZ59PV9FqPYZY7B9w8MQMlZoHPh1v1FKVe9-mOgvod5aQcbnCPOqcCvTx8oq0d_RGhCzEJ_oguZzn11jVPIHl598Uv6PZm_XEEzBHl2Lve3sVs/s1600/lock.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="60" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmaI7MzPukA44Efdi57WQzbpKugXJQzIZ59PV9FqPYZY7B9w8MQMlZoHPh1v1FKVe9-mOgvod5aQcbnCPOqcCvTx8oq0d_RGhCzEJ_oguZzn11jVPIHl598Uv6PZm_XEEzBHl2Lve3sVs/s640/lock.PNG" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
It shows that an application has been granted an exclusive lock on the EmailAddress table, and another query is waiting for a shared lock to read from the table. But who is holding this lock?
In my case, by checking the client_interface_name and program_name columns in the result, we could identify that a long-running VBScript import job was locking the table.
I created a simple application that simulates a similar condition, which you can check on <a href="https://github.com/heshamamin/blog/tree/master/DbLock" target="_blank">GitHub</a>. You can run the application and then run the query to see the results.<br />
<br />
It's good practice to include the "Application Name" property in your connection strings (as in the provided application source code) to make diagnosing this kind of error easier.
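<br />
For example, the application name can be set directly in the connection string. A sketch (the server, database, and application name below are placeholders, not values from the incident):<br />
<pre><code class="language-csharp">// "Application Name" surfaces in the program_name column of
// sys.dm_exec_sessions, making it easy to see who holds a lock.
const string connectionString =
    "Server=.;Database=AdventureWorks2014;Integrated Security=true;" +
    "Application Name=LegacyImportJob";
// new SqlConnection(connectionString) would then identify
// itself to SQL Server as "LegacyImportJob".</code></pre>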
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com1tag:blogger.com,1999:blog-2496415891665263000.post-28644095158816647172017-02-18T06:25:00.001+02:002017-02-18T06:25:49.318+02:00Abuse of Story Points<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
Relative estimates are usually recommended in Agile teams. However, nothing mandates a specific sizing unit like story points or T-shirt sizes. I believe that, used correctly, relative estimation is a powerful and flexible tool.</div>
<div>
I usually prefer T-shirt sizing for road-mapping to determine which features will be included in which releases. When epics are too large and subject to many changes, it makes sense to use an estimation technique that is quick and fun and doesn't give a false indication of accuracy.</div>
<div>
On the release level, estimating backlog items using story points helps with planning and creates a shared understanding among all team members. However, used incorrectly, story points can really frustrate the team, which might then try to avoid them in favor of another estimation technique.</div>
<div>
<br />
In a team I'm working with, one of the team members suggested during a sprint retrospective
to change the estimation technique from story points to T-shirt sizing. The
reasons were:</div>
<ul>
<li>Velocity (measured by story points achieved in a sprint) are sometimes used to compare the performance of different teams.</li>
<li>Story points are used as a tool to force the team to do a specific amount of work during a sprint.</li>
</ul>
<div>
<div>
Both reasons make a good case against the use of story points. </div>
<div>
<br /></div>
<div>
The first one clearly contradicts the relative nature of story points, as each team has a different capacity and baseline for its estimates. Also, the fact that some teams use velocity as a primary success metric is a sign of a <a href="http://ronjeffries.com/articles/016-03/you-want/" target="_blank">crappy agile implementation</a>.</div>
<div>
The second point is also a bad indicator. The reason is that you simply get what you ask for: if the PO/SM/manager wants higher velocity, then inflated estimates are what (s)he gets. This is quite similar to the <a href="https://en.wikipedia.org/wiki/Observer_effect" target="_blank">Observer effect</a>.</div>
<div>
<br /></div>
<div>
Fortunately, in our case both of these concerns were based on observations from other teams. Both the Product Owner and the Scrum Master were knowledgeable enough to avoid these pitfalls, and they explained how our team uses velocity just as a planning tool. However, the fact that some team members might be affected by the surrounding atmosphere in the organization is interesting, and it highlights the importance of having a consistent level of maturity and education.</div>
<div>
<br /></div>
<div>
What is your experience with using story points or any other estimation technique? What worked for you and what didn’t? Share your thoughts in a comment below.</div>
</div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-90307248810017320602016-11-09T22:42:00.000+02:002016-11-09T22:42:33.742+02:00Nano Server on AWS: Step by Step<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div>
Windows Server 2016 comes in many flavors. Nano Server is the new addition that is optimized to be lightweight and to have a smaller attack surface. It has a much smaller memory and disk footprint and a much faster boot time than Windows Server Core and the full Windows Server. These characteristics make Nano a perfect OS for the cloud and similar scenarios.<br />
However, being a headless (no GUI) OS means that no RDP connection can be made to administer the server. Also, since only the very core bits are included by default, configuring server features is a different story than what we have in the full Windows Server.<br />
In this post I'll explain how to launch and connect to a Nano instance on AWS, and then use the package management features to install IIS.<br />
<br />
<h3 style="text-align: left;">
Launching an EC2 Nano server instance:</h3>
<ul style="text-align: left;">
<li>In the AWS console under the EC2 section, click "Launch Instance"</li>
<li>Select the "Microsoft Windows Server 2016 Base Nano" AMI.</li>
</ul>
</div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_cP6QNyA0rqdA56x-xgVPQZIv1fXlBEZWRreEzU0-8N8P1FoOtA9dam55-2250q76QyhCOMwv4VVxfe1BSJx4ElCnIG9M_ku3lFQNh18TWUhSbHWFFvM5rNBDPjC693W1v3FqGVUgmzA/s1600/Choose+an+Amazon+Machine+Image+%2528AMI%2529.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="130" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_cP6QNyA0rqdA56x-xgVPQZIv1fXlBEZWRreEzU0-8N8P1FoOtA9dam55-2250q76QyhCOMwv4VVxfe1BSJx4ElCnIG9M_ku3lFQNh18TWUhSbHWFFvM5rNBDPjC693W1v3FqGVUgmzA/s640/Choose+an+Amazon+Machine+Image+%2528AMI%2529.PNG" width="640" /></a></div>
<div>
<ul style="text-align: left;">
<li>In the "Choose an Instance Type" page, select the "t2.nano" instance type. This instance type has 0.5GB of RAM. Yes, this will be more than enough for this experiment.</li>
<li>Use the default VPC and use the default 8GB storage.</li>
<li style="margin-bottom: 0px; margin-top: 0px; vertical-align: middle;">In the "<span class="gwt-InlineLabel KX">Configure Security Group" page things will start to be a bit different from the usual full windows server. Create a new security group and select these two inbound rules:</span><span style="font-family: "calibri"; font-size: 11.0pt;"> </span></li>
<ul>
<li style="margin-bottom: 0px; margin-top: 0px; vertical-align: middle;">WinRM-HTTP: Port 5985. This will be used for the remote administration.</li>
<li style="margin-bottom: 0px; margin-top: 0px; vertical-align: middle;">HTTP: Port 80. To test IIS from our local browser.</li>
</ul>
</ul>
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5b_IcQC8plb7aVKiqtt5mm2B5pb6waAQHpmzoVAlztfpAvlBVseGrZzSc1UpRUyq_Wlt5a5mGFY1pBkCKZjRBuqhJ8vwXViRk2w4kEcdhPL7sElx0oKQt84h8Rz7PYE-U6tN8r5XPd0A/s1600/Configure+Security+Group.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5b_IcQC8plb7aVKiqtt5mm2B5pb6waAQHpmzoVAlztfpAvlBVseGrZzSc1UpRUyq_Wlt5a5mGFY1pBkCKZjRBuqhJ8vwXViRk2w4kEcdhPL7sElx0oKQt84h8Rz7PYE-U6tN8r5XPd0A/s640/Configure+Security+Group.PNG" width="640" /></a></div>
<div>
<ul style="text-align: left;">
<li>Note that the AWS console gives a warning regarding port 3389, which is used for RDP. We can safely ignore this warning as we'll use WinRM; RDP is not an option with Nano Server.</li>
<li>Continue as usual and use an existing key pair or let AWS generate a new key pair to be used for windows password retrieval.</li>
</ul>
<h3 style="text-align: left;">
</h3>
<h3 style="text-align: left;">
Connecting to the Nano server instance:</h3>
After the instance status becomes "running" and all status checks pass, observe the public IP of the instance. To manage this server, we'll use WinRM (Windows Remote Management) over HTTP. To be able to connect to the machine, we need to add it to the trusted hosts as follows:<br />
<ul style="text-align: left;">
<li>Open PowerShell in administrator mode</li>
<li>Enter the following commands to add the server (assuming the public IP is 52.59.253.247):</li>
</ul>
</div>
<div style="text-align: left;">
<pre><code>$ip = "52.59.253.247"
Set-Item WSMan:\localhost\Client\TrustedHosts "$ip" -Concatenate -Force</code></pre>
</div>
<div>
<br /></div>
Now we're ready to connect to the Nano server:<br />
<pre><code>Enter-PSSession -ComputerName $ip -Credential "~\Administrator"</code></pre>
<br />
<br />
PowerShell will ask for the password, which you can retrieve from the AWS console using the "Get Windows Password" menu option and uploading the key pair you saved on your local machine.<br />
<br />
If everything goes well, all PowerShell commands you'll enter from now on will be executed on the remote server. So now let's reset the administrator password for the Nano instance:<br />
<pre><code>$pass = ConvertTo-SecureString -String "MyNewPass" -AsPlainText -Force
Set-LocalUser -Name Administrator -Password $pass
Exit</code></pre>
<br />
This will change the password and disconnect. To connect again, we can use the following commands and use the new password:<br />
<pre><code>$session = New-PSSession -ComputerName $ip -Credential "~\Administrator"
Enter-PSSession $session</code></pre>
<br />
<br />
<br />
<h3 style="text-align: left;">
Installing IIS:</h3>
As Nano is a "Just Enough" OS, feature binaries are not included by default. We'll use external package repositories to install other features like IIS, Containers, Clustering, etc. This is very similar to the apt-get and yum tools in the Linux world, and the Windows alternative is <a href="http://www.oneget.org/" target="_blank">OneGet</a>. The <a href="https://github.com/OneGet/NanoServerPackage" target="_blank">NanoServerPackage</a> repository has instructions for adding the Nano Server package source, which depends on the Nano Server version. We know that the AWS AMI is based on the released version, but it doesn't hurt to do a quick check:<br />
<pre><code>Get-CimInstance win32_operatingsystem | Select-Object Version</code></pre>
<br />
The version in my case is 10.0.14393. So to install the provider, we'll run the following:<br />
<pre><code>Save-Module -Path "$env:programfiles\WindowsPowerShell\Modules\" -Name NanoServerPackage -minimumVersion 1.0.1.0
Import-PackageProvider NanoServerPackage</code></pre>
<br />
Now let's explore the available packages using:<br />
<pre><code>Find-NanoServerPackage</code></pre>
or the more generic command:<br />
<pre><code>Find-Package -ProviderName NanoServerPackage</code></pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm2j-FIZv9AZ98NpQGPl2oeksDvXGdlg8E_L_9uHetb3Cl7FI3SLhcGMbXEWFQyzp-D7lOhRh3FLNjN5CQEZurBhXDgrYLiIDmiOGt-7g9e_O9-Wh4XytdbcZ90qcZK-gGSxXQqrwI15U/s1600/Find-Package.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm2j-FIZv9AZ98NpQGPl2oeksDvXGdlg8E_L_9uHetb3Cl7FI3SLhcGMbXEWFQyzp-D7lOhRh3FLNjN5CQEZurBhXDgrYLiIDmiOGt-7g9e_O9-Wh4XytdbcZ90qcZK-gGSxXQqrwI15U/s640/Find-Package.PNG" width="640" /></a></div>
We'll find the highlighted IIS package. So let's install it and start the required services:<br />
<pre><code>Install-Package -ProviderName NanoServerPackage -Name Microsoft-NanoServer-IIS-Package
Start-Service WAS
Start-Service W3SVC</code></pre>
<br />
<br />
Now let's point our browser to the IP address of the server. And here is our beloved IIS default page:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiyc7aUBvFgL7T5XbBCmUjWVCDt_8a0V1616hOiOOrWgFYAL4al9NxdWpy5sXvhNoNaNhtZBMhYI1WUSLZ3k1IpSkZeHn-s380A_jIy2xM04O3UptBC0Qihd0FdkMH-IvVMpb3u1BFdLw/s1600/IIS+default.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="420" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiyc7aUBvFgL7T5XbBCmUjWVCDt_8a0V1616hOiOOrWgFYAL4al9NxdWpy5sXvhNoNaNhtZBMhYI1WUSLZ3k1IpSkZeHn-s380A_jIy2xM04O3UptBC0Qihd0FdkMH-IvVMpb3u1BFdLw/s640/IIS+default.PNG" width="640" /></a></div>
<br />
<h3 style="text-align: left;">
Uploading a basic HTML page:</h3>
Just for fun, create a basic HTML page on your local machine using your favorite tool, and let's upload it and try accessing it. First, enter the <b>exit</b> command to exit the remote management session and get back to the local computer. Note that in a previous step we stored the result of <b>New-PSSession</b> in the <b>$session</b> variable, so we'll use it to copy the HTML page to the remote server over the management session:<br />
<pre><code>Copy-Item "C:\start.html" -ToSession $session -Destination C:\inetpub\wwwroot\</code></pre>
<br />
Navigate to http://nanoserverip/start.html to verify that the file was copied successfully.</div>
<br />
<br />
<h3 style="text-align: left;">
Conclusion:</h3>
Nano Server is a huge step forward in enabling higher density of infrastructure and applications, especially in the cloud. However, it requires adopting a new mindset and a new set of tools to get the best out of it.<br />
In this post I've just scratched the surface of using Nano Server on AWS. In future posts we'll explore deploying applications on it to get real benefits.<br />
<br /></div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com8tag:blogger.com,1999:blog-2496415891665263000.post-2947685935627141022016-06-25T15:19:00.000+02:002016-06-25T15:20:25.914+02:00Agile and Continuous Delivery Awareness Session<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
This is a recording of a talk that Mona Radwan from http://www.agilearena.net/ and I gave at the Greek Campus in Cairo.<br />
My part focused on the value of Continuous Delivery from a business perspective and the related technical practices required to achieve it.<br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="320" src="https://www.youtube.com/embed/57RbT5-nzhM" width="570"></iframe>
</div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-23884621744481909392016-05-20T23:03:00.002+02:002016-05-20T23:05:29.874+02:00Introduction to AWS video [Arabic] <div dir="ltr" style="text-align: left;" trbidi="on">
My video "Introduction to AWS [Arabic]" on Youtube.<br />
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="320" src="https://www.youtube.com/embed/QQ7gmr6RPlI" width="570"></iframe></div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com1tag:blogger.com,1999:blog-2496415891665263000.post-35387241571793848492016-02-27T17:28:00.000+02:002016-02-27T17:35:38.793+02:00AWS Elastic Load Balancing session stickiness - Part 2<div dir="ltr" style="text-align: left;" trbidi="on">
In my previous post "<a href="http://blog.heshamamin.com/2016/01/aws-elastic-load-balancing-session.html" target="_blank">AWS Elastic Load Balancing session stickiness</a>" I demonstrated the use of AWS ELB Load Balancer Generated Cookie Stickiness. In this post we'll use an application-generated cookie to control session stickiness.<br />
To demonstrate this feature, I created a simple ASP.NET MVC application that just displays some instance details to test the load balancing.<br />
<br />
Starting from the default ASP.NET MVC web application template, I modified the Index action of the HomeController:<br />
<br />
<script src="https://gist.github.com/heshamamin/57a791dcb26fb71a3a22.js"></script>
<br />
<br />
Similar to what I did in previous posts using Linux shell scripts, this time I'm using C# code to request instance metadata from the http://169.254.169.254/latest/meta-data/ URL, then I store the host name and IP address in the ViewBag object and display them in the view:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2MUeE-GyM2kSHMt5dd3lMk_Z4qufm49dcrmsV7fQD99bE1vG9P4Pz648u5Nt9pNIaoFRRHv7fMkZrNWxI4DkZ_keUKqeHPSl97omQ2LT_Oxzg6oO7hDY7BiCu6TZ42UD4monMKg0sDfA/s1600/ELB-no-sticky.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="227" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2MUeE-GyM2kSHMt5dd3lMk_Z4qufm49dcrmsV7fQD99bE1vG9P4Pz648u5Nt9pNIaoFRRHv7fMkZrNWxI4DkZ_keUKqeHPSl97omQ2LT_Oxzg6oO7hDY7BiCu6TZ42UD4monMKg0sDfA/s400/ELB-no-sticky.PNG" width="400" /></a></div>
<br />
I deployed the application to two EC2 Windows 2012 R2 instances. As expected, using the default ELB settings, requests will be routed randomly to one of the instances. This can be tested by looking at the host name and IP displayed in the response.<br />
<br />
Looking at the request and response cookies, we can find the ASP.NET session cookie added:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWVwMS3y4nMX4zB1ZSA5gGazolkBkq2vcW0Df1fYSu3R7eQjNU-QgjKIHQA1TTmNdScEDYnc2dd65Un4ugYakfG0gkQRkDukxvANOfHX4bgnWxyxErxOr0csW25ohOZzKR7a1cxaSGc8U/s1600/session-cookie.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="115" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWVwMS3y4nMX4zB1ZSA5gGazolkBkq2vcW0Df1fYSu3R7eQjNU-QgjKIHQA1TTmNdScEDYnc2dd65Un4ugYakfG0gkQRkDukxvANOfHX4bgnWxyxErxOr0csW25ohOZzKR7a1cxaSGc8U/s400/session-cookie.PNG" width="400" /></a></div>
<br />
To configure stickiness based on the <b>ASP.NET_SessionId</b> cookie, edit the stickiness configuration and enter the cookie name:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGv-ptmFkbco7fucIU0qfTtryOaajU8_NsC0rrPdwSZyzI-oC1R9W5ex7lKLF6DXFzjDcVsAhbvCzVS5l_qn2Li54YRsunciduij7E6emfNL8wR4hpuE0sqPSGq8hq3gnCPxlZCzzAZa0/s1600/ELB-enable-sticky-session.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="272" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGv-ptmFkbco7fucIU0qfTtryOaajU8_NsC0rrPdwSZyzI-oC1R9W5ex7lKLF6DXFzjDcVsAhbvCzVS5l_qn2Li54YRsunciduij7E6emfNL8wR4hpuE0sqPSGq8hq3gnCPxlZCzzAZa0/s640/ELB-enable-sticky-session.PNG" width="640" /></a></div>
<br />
Checking the cookies, we find that ELB generates a cookie named "<b>AWSELB</b>". As <a href="http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html" target="_blank">documented</a>: "<i>The load balancer only inserts a new stickiness cookie if the application response
includes a new application cookie.</i>"<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqIaaZXUsdG2X2VimccmTs5oOBZcoAhXv2lMXReHQiDSfM5pBxsh0n95lbWPkUxnhQXwweiGzdkcfj0wfEZbBuptMncI4jqFhEA3yHwBueApdGnfXtl_DxSJ8hGeGzVUzPAt4L8xoNd9Y/s1600/elb-response-cookies.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="97" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqIaaZXUsdG2X2VimccmTs5oOBZcoAhXv2lMXReHQiDSfM5pBxsh0n95lbWPkUxnhQXwweiGzdkcfj0wfEZbBuptMncI4jqFhEA3yHwBueApdGnfXtl_DxSJ8hGeGzVUzPAt4L8xoNd9Y/s640/elb-response-cookies.PNG" width="640" /></a></div>
<br />
Now the browser will send back both the session and ELB cookies:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOnYrgjMJg1BfDoV5c7un1s9YB7FPMGF549GA05H-AEypdf-BeYaTXqtnE67wdii-o69sZv-1kZ5A3G7_iT5tv35QdOHKLjAth13ah19Dvw-g2wCnZUbEFQTe1J1c2zqRlekvYV0fCTOQ/s1600/elb-request-cookies.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="90" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOnYrgjMJg1BfDoV5c7un1s9YB7FPMGF549GA05H-AEypdf-BeYaTXqtnE67wdii-o69sZv-1kZ5A3G7_iT5tv35QdOHKLjAth13ah19Dvw-g2wCnZUbEFQTe1J1c2zqRlekvYV0fCTOQ/s640/elb-request-cookies.PNG" width="640" /></a></div>
<br />
Still, my preference for maintaining session state is to use a distributed cache service like Redis, or even SQL Server, because if an instance goes down or is removed from an auto-scaling group, users will lose their session data if it's stored in memory.</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-22171718800543397632016-02-06T18:21:00.001+02:002016-02-06T18:21:17.843+02:00Introduction to AWS presentation<div dir="ltr" style="text-align: left;" trbidi="on">
My Introduction to AWS presentation that I presented at the Architecture Titans technical club.
<iframe src="//www.slideshare.net/slideshow/embed_code/key/rC7GC3ttdZ8doF" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/HeshamAmin/aws-intro-57469965" title="Introduction to AWS" target="_blank">Introduction to AWS</a> </strong> from <strong><a href="//www.slideshare.net/HeshamAmin" target="_blank">Hesham Amin</a></strong> </div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-54944489167815573832016-01-04T19:56:00.001+02:002016-01-04T19:56:32.001+02:00AWS Elastic Load Balancing session stickiness<div dir="ltr" style="text-align: left;" trbidi="on">
In a previous post "<a href="http://forloveofsoftware.blogspot.com/2015/05/configuring-and-testing-aws-elastic.html" target="_blank">Configuring and testing AWS Elastic Load Balancer</a>" I described how to configure AWS ELB to distribute load on multiple web servers.<br />
We observed that the same client might get routed to a different EC2 instance. Some applications require the user to always be directed to the same instance during a session. This is the case when in-memory session state is used, or for other application-specific reasons. This requirement is often referred to as session stickiness. <br />
AWS ELB offers two ways to provide session stickiness: using a cookie provided by the application, or using a cookie generated by ELB. <br />
<br />
<h3 style="text-align: left;">
Load Balancer Generated Cookie Stickiness</h3>
<div style="text-align: left;">
<b>Using an Expiring cookie</b></div>
<div style="text-align: left;">
</div>
<div style="text-align: left;">
Using the same configuration as the previous post, the load balancer will have the stickiness configuration set to "Disabled". </div>
<div style="text-align: left;">
To change this behavior:</div>
<ol style="text-align: left;">
<li>Click "Edit" link</li>
<li>Select "Enable Load Balancer Generated Cookie Stickiness" option.</li>
<li>As a testing value, enter 60 as an expiration period.</li>
</ol>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div style="text-align: left;">
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQG5rsoVuI0iZ9f2cTgJkGb8zbIcxGGKibCpTh84pmP5oIjLbTRQQyRyR88sfqrZ-7tg2Q3LcNOtqhVQ3VMqmagxBAKp66wYdX5xdQ1JQoi-UlJZ7d5-okAL-Oix8-1x3CwrcuReu5z4w/s1600/ELB.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="191" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQG5rsoVuI0iZ9f2cTgJkGb8zbIcxGGKibCpTh84pmP5oIjLbTRQQyRyR88sfqrZ-7tg2Q3LcNOtqhVQ3VMqmagxBAKp66wYdX5xdQ1JQoi-UlJZ7d5-okAL-Oix8-1x3CwrcuReu5z4w/s400/ELB.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Editing stickiness properties of ELB</td></tr>
</tbody></table>
<br />
Now, let's start testing the effect of this new configuration. Open the test url (<i>for example: http://test-elb-834781956.eu-west-1.elb.amazonaws.com/cgi-bin/metadata.sh, check the <a href="http://forloveofsoftware.blogspot.com/2015/05/configuring-and-testing-aws-elastic.html" target="_blank">previous post</a> for more details</i>). Using Fiddler or the network tab in your favorite browser's developer tools, you can observe that the response includes this header:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<span style="-webkit-text-stroke-width: 0px; background-color: white; color: #222222; display: inline !important; float: none; font-family: Consolas, 'Lucida Console', monospace; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 1; word-spacing: 0px;"><code>Set-Cookie: AWSELB=A703B168326729FE0B7D2675641656C1889E580D7525169B4BE36A819D9F2A18BE64415E6B0C90F5F2AC8CB0CFAF8DABE929DB27D5077D6FF616065A5BAF81DDB430BE92;PATH=/;MAX-AGE=60</code>
</span></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
As we see, it's a cookie named "AWSELB" with a max-age of 60 seconds, and it applies to the whole site (PATH=/).<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDKfXqpMM9xzDPjt3K4DjeA9YIXqk1CQsUeOkP7TBnnsp_VSUH2w9kjsJi3xz95ChDXJAZAxDfU8cHXIcRAz4FPOXqz3S7MFP63TuK8ZW-p_QE2ZzDm-GxITrArNeeBs4EvmKg3mEOFFU/s1600/NLB-Set-Cookie-01.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="72" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDKfXqpMM9xzDPjt3K4DjeA9YIXqk1CQsUeOkP7TBnnsp_VSUH2w9kjsJi3xz95ChDXJAZAxDfU8cHXIcRAz4FPOXqz3S7MFP63TuK8ZW-p_QE2ZzDm-GxITrArNeeBs4EvmKg3mEOFFU/s640/NLB-Set-Cookie-01.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">AWSELB cookie in the response as it appears in Chrome dev tools</td></tr>
</tbody></table>
</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
If you refresh the page, you'll find that the browser sends the cookie as expected:<br />
<br />
<span style="-webkit-text-stroke-width: 0px; background-color: white; color: #222222; display: inline !important; float: none; font-family: Consolas, 'Lucida Console', monospace; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: left; text-indent: 0px; text-transform: none; white-space: pre-wrap; widows: 1; word-spacing: 0px;"><code>Cookie: AWSELB=A703B168326729FE0B7D2675641656C1889E580D7525169B4BE36A819D9F2A18BE64415E6B0C90F5F2AC8CB0CFAF8DABE929DB27D5077D6FF616065A5BAF81DDB430BE92</code></span><br />
<br /></div>
<div style="text-align: left;">
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3g3H326wsVDq5I3dG7Jju6E6TzeZQD4nU4LcHIi6D2j5xjnVUXgX8TkJKMFwlx0jG8Byy8R_YbGBG3SFNE0fK8QYxHkl_MDLQHFQFAPVOE7aKWnA3GopB2QxvCv9AcpFHAThrH2InY14/s1600/NLB-Cookie-01.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="218" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3g3H326wsVDq5I3dG7Jju6E6TzeZQD4nU4LcHIi6D2j5xjnVUXgX8TkJKMFwlx0jG8Byy8R_YbGBG3SFNE0fK8QYxHkl_MDLQHFQFAPVOE7aKWnA3GopB2QxvCv9AcpFHAThrH2InY14/s640/NLB-Cookie-01.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The cookie is sent by the browser, and the response does not include a new cookie</td></tr>
</tbody></table>
<br />
But the response does not include a new cookie, so the existing one will expire after 60 seconds, and the browser will stop sending it after expiration. Refreshing the browser several times will direct the traffic to the same EC2 instance; we can verify this by examining the response, which looks like:</div>
<div style="text-align: left;">
<pre><code>
Host name:
ec2-52-30-170-211.eu-west-1.compute.amazonaws.com
Public IP:
52.30.170.211</code></pre>
<br />
<div style="text-align: left;">
As long as the cookie is still active, the request is directed to the same instance. But what happens after the max-age passes?</div>
<div style="text-align: left;">
ELB will generate a new cookie with a different value, and you might be directed to one of the other web servers:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<code>Set-Cookie:AWSELB=A703B168326729FE0B7D2675641656C1889E580D7525169B4BE36A819D9F2A18BE64415E6B0C90F5F2AC8CB0CFAF8DABE929DB27D5077D6FF616065A5BAF81DDB430BE92;PATH=/;MAX-AGE=60</code></div>
<div style="text-align: left;">
<br />
Notice that the value of the cookie has changed. After this cookie expires, a new one might be generated with the old value, pointing back to the first instance.</div>
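Since the stickiness cookie is plain text, its parts can be pulled apart with the same kind of shell tools used elsewhere on this blog. A minimal sketch, with a shortened made-up cookie value:

```shell
# Pull apart an ELB stickiness cookie header with shell string operations.
# (Illustrative, shortened cookie value; not a real AWSELB token.)
header='Set-Cookie: AWSELB=A703B168326729FE;PATH=/;MAX-AGE=60'

cookie=${header#"Set-Cookie: "}   # drop the header name
name=${cookie%%=*}                # text before the first '='
value=${cookie%%;*}               # first attribute, e.g. AWSELB=...
value=${value#*=}                 # keep only the value part
maxage=$(printf '%s' "$cookie" | tr ';' '\n' | grep -i '^MAX-AGE=' | cut -d= -f2)

# When MAX-AGE is absent, the cookie is a plain session cookie.
echo "name=$name value=$value max-age=${maxage:-session}"
```

Running it prints `name=AWSELB value=A703B168326729FE max-age=60`; when the MAX-AGE attribute is absent, the last field falls back to `session`.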
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<br />
<b>Using an ELB cookie without expiration:</b></div>
<div style="text-align: left;">
If the expiration value is left blank, the behavior differs: the cookie is generated without a max-age value, making it a session cookie, and the browser will keep sending the same cookie until the browser is closed.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<code>Set-Cookie:AWSELB=A703B168326729FE0B7D2675641656C1889E580D7525169B4BE36A819D9F2A18BE64415E6B0C90F5F2AC8CB0CFAF8DABE929DB27D5077D6FF616065A5BAF81DDB430BE92;PATH=/</code></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<br />
<b>What happens when a server goes down?</b></div>
<div style="text-align: left;">
To try this scenario, let's shut down the server that is receiving the requests and refresh the browser. This time ELB generates a new cookie pointing to a healthy instance.</div>
<div style="text-align: left;">
<br />
<br />
<b>Summary:</b><br />
ELB has a built-in mechanism to support session stickiness with no code changes from the application side.<br />
Using an expiring cookie might not be the best option to guarantee session affinity, as the cookie is not renewed and there seems to be no way to achieve a sliding expiration window for it. So you might prefer to go with a cookie without expiration.<br />
<br />
In the next post, we'll use the other method available for session stickiness: using Application Generated Cookie Stickiness.</div>
</div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com0tag:blogger.com,1999:blog-2496415891665263000.post-58639941353051783452015-05-15T22:50:00.000+02:002015-05-16T22:43:39.691+02:00Configuring and testing AWS Elastic Load Balancer<div dir="ltr" style="text-align: left;" trbidi="on">
Load balancing is an essential component for the scalability and fault tolerance of web applications. Major cloud computing providers have different offerings for load balancing.<br />
In this post I'll explore AWS's (Amazon Web Services) ELB (Elastic Load Balancing) feature, and test it to see how it distributes the load across front-end web servers and, when one of the front-end servers becomes unavailable, how traffic is directed to the healthy instance(s).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/1/1d/AmazonWebservices_Logo.svg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/1/1d/AmazonWebservices_Logo.svg" height="128" width="320" /></a></div>
<br />
<br />
I'll use a Linux-based image, but the concepts apply to Windows images as well. I assume that the reader has basic knowledge of how to create an AWS account and launch an EC2 (Elastic Compute Cloud) virtual machine. If not, don't worry, following the steps below will give you a good understanding.<br />
<br />
So the experiment goes as follows:<br />
<br />
<br />
<h4 style="text-align: left;">
1- Create a base image for front-end web servers: </h4>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/9/9d/Ubuntu_logo.svg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/9/9d/Ubuntu_logo.svg" height="74" width="320" /></a></div>
<h4 style="text-align: left;">
</h4>
<ol style="text-align: left;">
<li>Go to AWS console and select "Launch Instance", from the list of images, select "Ubuntu Server 14.04 LTS".</li>
<li>Complete the wizard till you reach the "Configure Security Group" step. This is the step where we select the ports we need AWS to open. Select SSH (22) to connect to the instance to configure it, and HTTP (80) to serve web traffic.</li>
<li>When you're prompted to select the key pair, make sure to choose an existing one you have already downloaded or create a new one and keep it in a safe place.</li>
<li>Then Launch the instance.</li>
</ol>
<br />
<b>Note:</b> When I first started using AWS, coming from a Windows background, the term "Security Group" was a bit confusing to me: it's about firewall rules, not security groups in the sense of Active Directory groups.<br />
<br />
<div>
<h4 style="text-align: left;">
2- Configure Apache web server</h4>
<div style="text-align: left;">
The image does not have a web server installed by default, so I'll SSH into the instance and install it.</div>
<div style="text-align: left;">
If you're using MAC or Linux, you should be able to run SSH directly. For Windows users, you can use <a href="http://www.putty.org/" target="_blank">Putty</a>.</div>
<ol style="text-align: left;">
<li>Copy the public IP of the running instance you just created. </li>
<li>Use SSH to connect using this command: ssh &lt;ip&gt; -l ubuntu -i &lt;path to .pem key&gt;. For example: ssh 54.72.151.182 -l ubuntu -i mykey.pem. Note that <b>ubuntu</b> is the username for the image we created this machine from; the <b>.pem</b> file acts as a password.</li>
<li>Now we are inside the instance. It's time to install and configure Apache:</li>
</ol>
<div>
<pre><code>
sudo su
apt-get install apache2
sudo a2enmod cgi
service apache2 restart
</code></pre>
<br />
The above commands simply do the following:<br />
<ul style="text-align: left;">
<li>Elevate privileges to run as a super user to be able to install software.</li>
<li>Install apache using the package manager.</li>
<li>Enable CGI, I'll show you why later</li>
<li>Restart apache so that CGI configuration takes effect.</li>
</ul>
<br />
Now it's time to test the web server. Visit http://&lt;INSTANCE_IP&gt; and you should be welcomed with the default Apache home page.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm6gmwel75j3-mYq621muRGRWNwI2jBImpBrweXJF8JAKgpEpAJzKbR2VjBvG2pDBBNa1C3_kjXf2ivSwBPPJrP0t1V7fwVakZ43u9BGbBDiIWZeZFP6w2BQcGoggaCK54g_Uu0UeN4O0/s1600/apache-ubuntu.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm6gmwel75j3-mYq621muRGRWNwI2jBImpBrweXJF8JAKgpEpAJzKbR2VjBvG2pDBBNa1C3_kjXf2ivSwBPPJrP0t1V7fwVakZ43u9BGbBDiIWZeZFP6w2BQcGoggaCK54g_Uu0UeN4O0/s400/apache-ubuntu.png" width="400" /></a></div>
<br />
<br />
<h4 style="text-align: left;">
3- Create a script to identify the running instance</h4>
<div style="text-align: left;">
To test ELB, I need to identify which instance served a request just by looking at the response. I have 2 options: create static pages on each web front-end, or create some dynamic content that identifies the instance. I prefer the latter, as I'll use the same image for all front-ends.</div>
<div style="text-align: left;">
EC2 has a nice feature called instance metadata. It's an endpoint accessible from within EC2 instances that can be called to get information about the instance. From the SSH terminal, try:</div>
<br />
<br />
<code>curl http://169.254.169.254/latest/meta-data/</code>
<br />
<br />
A list of available meta-data will be shown:<br />
<pre><code>
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services
</code></pre>
<br />
Appending any of them to the URL will show the value. For example: <br />
<br />
<br />
<code>curl http://169.254.169.254/latest/meta-data/public-hostname</code>
<br />
<code>curl http://169.254.169.254/latest/meta-data/public-ipv4</code>
<br />
<br />
I'll use these two meta-data items to identify the instances by echoing them from a bash script served by Apache. cd into /usr/lib/cgi-bin:<br />
<br />
<code>cd /usr/lib/cgi-bin</code>
</div>
<div>
<br />
This is the default location that Apache uses to serve CGI content; that's why I enabled CGI in a previous step.<br />
In that folder, create a bash script that shows the output of the meta-data. Use any text editor, for example run <b>nano</b> in the command line, and paste the script below:<br />
<pre><code>
#!/bin/bash
echo "Content-type: text/text"
echo ''
echo 'Host name:'
curl http://169.254.169.254/latest/meta-data/public-hostname
echo ''
echo 'Public IP:'
curl http://169.254.169.254/latest/meta-data/public-ipv4
</code></pre>
<br />
<br />
If using nano, press Ctrl+X, then Y, and save as <b>metadata.sh</b>.<br />
<br />
Now we need to grant execute permission on this file:<br />
<br />
<code>chmod 755 /usr/lib/cgi-bin/metadata.sh</code>
<br />
<br />
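Before relying on the EC2 endpoint, the script can be smoke-tested locally, even off EC2, by shadowing curl on the PATH with a stub. This is just a testing trick with made-up hostname and IP values, not anything to deploy:

```shell
# Smoke-test metadata.sh off-EC2 by shadowing curl with a stub on PATH.
workdir=$(mktemp -d)

# Stub curl: answer the two meta-data URLs with made-up values.
cat > "$workdir/curl" <<'EOF'
#!/bin/bash
case "$1" in
  *public-hostname) printf 'test-host.example.com' ;;
  *public-ipv4)     printf '203.0.113.10' ;;
esac
EOF
chmod 755 "$workdir/curl"

# The CGI script from above, unchanged.
cat > "$workdir/metadata.sh" <<'EOF'
#!/bin/bash
echo "Content-type: text/text"
echo ''
echo 'Host name:'
curl http://169.254.169.254/latest/meta-data/public-hostname
echo ''
echo 'Public IP:'
curl http://169.254.169.254/latest/meta-data/public-ipv4
EOF
chmod 755 "$workdir/metadata.sh"

# Run with the stub first on PATH; no network access needed.
output=$(PATH="$workdir:$PATH" "$workdir/metadata.sh")
echo "$output"
rm -rf "$workdir"
```

The output has the same shape as what the real endpoint produces, so the script's formatting can be checked before the instance image is baked.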
To test the configuration, browse to http://&lt;INSTANCE_IP&gt;/cgi-bin/metadata.sh<br />
My results look like:<br />
<br />
<pre><code>Host name:
ec2-54-72-151-182.eu-west-1.compute.amazonaws.com
Public IP:
54.72.151.182
</code></pre>
<br />
<b>Note:</b> I'm not advising using bash scripts in production web sites. It was just the easiest way to output the info returned from the meta-data endpoint with minimal effort.<br />
<br />
<h4 style="text-align: left;">
4- Create 2 more front-ends</h4>
<div style="text-align: left;">
<span style="font-weight: normal;">Now we </span>have an identifiable instance. Let's create two more like it.</div>
<ol style="text-align: left;">
<li>Stop the instance from the management console</li>
<li>After the instance has stopped, right-click -> Image -> Create Image.</li>
<li>Choose an appropriate name and save.</li>
<li>Navigate to AMI (Amazon Machine Image) and check the creation status of the image.</li>
<li>Once the status is <b>available</b>, click Launch.</li>
<li>In the launch instance wizard, select to launch 2 instances</li>
<li>Select the same security group as the one used before, it will have both 22 and 80 ports open.</li>
<li>Start the original instance. </li>
<li>Now we have 3 identical servers.</li>
<li>Using the IP address of any instance, navigate to the CGI script, for example: http://52.17.134.221/cgi-bin/metadata.sh</li>
</ol>
Note that the IP of the first instance has most probably changed after the restart.<br />
<br />
<br />
<h4 style="text-align: left;">
5- Create an ELB instance</h4>
<ol style="text-align: left;">
<li>In AWS console, navigate to "Load Balancers".</li>
<li>Click "Create Load Balancer"</li>
<li>Make sure it's working on port 80</li>
<li>Select the same security group</li>
<li>In the health check, in the ping path, enter "/". This means that ELB will use the default apache page for health check. In production, it might not be a good idea to make your home page the health check page.</li>
<li>For quick testing, make the "Healthy Threshold" equal to 3.</li>
</ol>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLPCTQEqm_8046HezFgf0vK7ew-Ez6XcL4N2wZVNiME6rwAR6q11YCFZntBL62FPSk3XILWsfxhnGHrZI1pxEBWjZ-kCog-yI5p7uOFmn4zNjBT2TLdd1WhihyrQQKYm1ln9qppHOcCIA/s1600/elb-health-check.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLPCTQEqm_8046HezFgf0vK7ew-Ez6XcL4N2wZVNiME6rwAR6q11YCFZntBL62FPSk3XILWsfxhnGHrZI1pxEBWjZ-kCog-yI5p7uOFmn4zNjBT2TLdd1WhihyrQQKYm1ln9qppHOcCIA/s400/elb-health-check.png" width="400" /></a></div>
<br />
<br />
Now a bit of explanation is required. This configuration tells ELB to check for the healthiness of a front-end instance every 30 seconds. A check is considered successful if the server responds in 5 seconds.</div>
<div>
If a healthy instance fails to respond within that period for 2 consecutive checks, it's considered unhealthy. Similarly, an unhealthy instance is considered healthy again once it responds to the check 3 consecutive times.</div>
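These thresholds translate into rough detection times. A back-of-the-envelope sketch (it ignores the 5-second response timeout and where within an interval the failure lands):

```shell
# Rough time for ELB to flip an instance's health state, from the settings above.
interval=30            # seconds between health checks
unhealthy_threshold=2  # consecutive failures before marking OutOfService
healthy_threshold=3    # consecutive successes before marking InService again

time_to_unhealthy=$((interval * unhealthy_threshold))
time_to_healthy=$((interval * healthy_threshold))

echo "OutOfService after ~${time_to_unhealthy}s, InService again after ~${time_to_healthy}s"
```

That lines up with the roughly one minute to OutOfService, and one and a half minutes back to InService, observed in the failure test later in the post.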
<div>
<br /></div>
<div>
Now select the 3 instances to use for load balancing. And wait until the ELB instance is created and the 3 instances in the "instances" tab are shown <b>InService</b>. </div>
<div>
<br /></div>
<div>
Now in the newly created ELB, take the value of the DNS name (like <b>test-elb-1856689463.eu-west-1.elb.amazonaws.com</b>) and navigate to the URL of the metadata page. My URL looked like:
http://test-elb-1856689463.eu-west-1.elb.amazonaws.com/cgi-bin/metadata.sh</div>
<div>
<br /></div>
<div>
The data displayed in the page will belong to the instance that actually served the request. Refresh the page and see how the response changes. In my case ELB worked in a round-robin fashion and the responses were:</div>
<div>
<br /></div>
<pre><code>
Host name:
ec2-52-17-134-221.eu-west-1.compute.amazonaws.com
Public IP:
52.17.134.221
Host name:
ec2-52-16-189-41.eu-west-1.compute.amazonaws.com
Public IP:
52.16.189.41
Host name:
ec2-52-17-65-93.eu-west-1.compute.amazonaws.com
Public IP:
52.17.65.93
</code></pre>
</div>
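The rotation above can be pictured as cycling through the list of registered backends, one per request. A toy shell illustration of that dispatch pattern using the host names above; it shows the idea only, not how ELB is implemented internally:

```shell
# Toy round-robin: rotate a list of backends, one per request (POSIX shell).
backends="ec2-52-17-134-221 ec2-52-16-189-41 ec2-52-17-65-93"

request=0
while [ $request -lt 6 ]; do
  # The head of the list serves this request...
  target=${backends%% *}
  echo "request $request -> $target"
  # ...then moves to the back of the line.
  backends="${backends#* } $target"
  request=$((request + 1))
done
```

Every third request lands on the same backend again, which matches the repeating pattern of responses shown above.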
<div>
<br /></div>
<div>
Inspect the network response using F12 tools and note the headers:</div>
<div>
<pre><code>
HTTP/1.1 200 OK
Content-Type: text/text
Date: Sat, 16 May 2015 19:12:38 GMT
Server: Apache/2.4.7 (Ubuntu)
transfer-encoding: chunked
Connection: keep-alive
</code></pre>
</div>
<div>
</div>
<div>
</div>
<div>
<br />
<div>
<br />
<b>Note</b>: Nothing special here, as there is no session affinity.<br />
<br />
<br />
<h4 style="text-align: left;">
6- Bring an instance down</h4>
<div style="text-align: left;">
Now, let's simulate an instance failure by simply stopping the Apache service on one of the 3 front-ends. SSH into one of them and run:<br />
<br /></div>
<br /></div>
<code>sudo service apache2 stop</code>
</div>
<div>
<br />
Refresh the page pointing to the ELB URL; note that after a few seconds, you only get responses from the 2 running instances. After about 1 minute, the instance is declared <b>OutOfService</b> in the Instances tab of the ELB.<br />
<br />
<h4 style="text-align: left;">
</h4>
<h4 style="text-align: left;">
7- Bring it back!</h4>
<ol style="text-align: left;">
</ol>
This time, turn on apache service by running:</div>
<div>
<br />
<code>sudo service apache2 start</code>
</div>
<div>
<br />
Wait about one and a half minutes; the instance returns to <b>InService</b> status and you start getting responses from it again.</div>
<div>
The "Healthy Hosts ( Count )" graph gives a good picture of what happened:<br />
<ol style="text-align: left;">
</ol>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqTLl5uDST2r5O8TNxR4c2s2WBWYMeGomERDKWBvFvM1jj2NQhqdZKTjB9itrTQVz0qKgLKSa74DtcWmg0lQsuRBfpeJMaX7ykIS38zkRAoswOxqdq965pAoMgFZmj6QjrxJT1SecZxB0/s1600/Healthy-hosts-elb.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="246" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqTLl5uDST2r5O8TNxR4c2s2WBWYMeGomERDKWBvFvM1jj2NQhqdZKTjB9itrTQVz0qKgLKSa74DtcWmg0lQsuRBfpeJMaX7ykIS38zkRAoswOxqdq965pAoMgFZmj6QjrxJT1SecZxB0/s400/Healthy-hosts-elb.png" width="400" /></a></div>
</div>
<div>
<h4 style="text-align: left;">
8- Turn them all off!</h4>
</div>
<div>
They are costing you money, unless you are still under the free tier. It's recommended to terminate any EC2 and ELB instances that are no longer used.</div>
<div>
<br /></div>
<div>
<b>Note:</b><br />
If you intend to leave some instances alive, it's recommended to de-register an instance from the ELB when it's shut down: <a href="http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-deregister-register-instances.html" target="_blank">http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-deregister-register-instances.html</a></div>
<div>
<h4 style="text-align: left;">
</h4>
<h4 style="text-align: left;">
Summary:</h4>
<div style="text-align: left;">
In this post, we've seen ELB in action using its basic settings. The round-robin load balancing worked great, and the health check kept our site available to users by eliminating unhealthy instances.<br />
This works great with web applications that don't require session affinity. For applications that do require it, well, that's another post.</div>
</div>
</div>
</div>
Hesham A. Aminhttp://www.blogger.com/profile/00063404912692423973noreply@blogger.com5