Thursday, August 17, 2017

My AWS IaaS playlist for Arabic speakers

If you're an Arabic speaker and interested in learning about AWS IaaS, check out my AWS IaaS [Arabic] YouTube playlist. In this series of videos I go step by step through creating a scalable, secure web application using AWS's infrastructure-as-a-service offering.
I follow a problem-solution approach: I start with a very basic but functional solution, identify the challenges it has, then move to the next step in a logical progression towards the end goal.




And if you have no idea what capabilities AWS has, you can check out my introductory video. It's a bit dated but still relevant.

Sunday, July 23, 2017

My talk at DDDSydney 2017

It was very exciting to attend and speak at DDDSydney 2017. A lot of interesting topics were presented, and the organizers did a good job classifying the sessions into tracks that one could follow to get a complete picture of a certain area of interest. For example, my session "Avoiding death by a thousand containers. Kubernetes to the rescue!" was the last in a track that included sessions about microservices and Docker. That made it a logical conclusion on how to host containerized microservices in a highly available and easy-to-manage environment.

In my demos I used AWS. This choice was intentional, since AWS doesn't support Kubernetes out of the box the way both Google Container Engine (GKE) and Azure Container Service (ACS) do; I wanted to show that Kubernetes can be deployed to other environments as well. Thanks to Kops (Kubernetes Operations), deploying the Kubernetes cluster on AWS was relatively easy.
In this session I showed how to expose services using an external load balancer and how deployments make it easy to declare the desired state of the Pods deployed to Kubernetes. I also demonstrated the very powerful concept of Labels and Selectors, a loosely coupled way to connect services to the Pods that contain the service logic.


I also demonstrated how easy it is to perform an update to the deployment by switching from Nginx to Apache (httpd).
In another demo I wanted to demonstrate how to connect services inside the cluster. I made a simple .NET Core web application that counts the number of hits each frontend gets. The hit count is stored in a Redis instance that's exposed through a service.


The interesting part is how the web application determines the address of the Redis instance. Since the Docker image should be immutable once created, configuration should be stored in the environment.
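Here's a minimal C# sketch of that lookup (a reconstruction, assuming the StackExchange.Redis client; the class and key names are illustrative):

// Reconstruction sketch, not the original demo code.
// Assumes the StackExchange.Redis NuGet package.
using System;
using StackExchange.Redis;

public static class HitCounter
{
    public static long RecordHit()
    {
        // Kubernetes injects <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT
        // into Pods created after the service exists.
        var host = Environment.GetEnvironmentVariable("REDIS_SERVICE_HOST");
        var port = Environment.GetEnvironmentVariable("REDIS_SERVICE_PORT") ?? "6379";

        // A real app would create the multiplexer once and reuse it.
        var redis = ConnectionMultiplexer.Connect($"{host}:{port}");
        var db = redis.GetDatabase();

        // One counter per frontend, keyed by the Pod's host name.
        return db.StringIncrement($"hits:{Environment.MachineName}");
    }
}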

As in the above code snippet, the environment variable REDIS_SERVICE_HOST is used to get the address of the Redis service. This environment variable is automatically populated by Kubernetes because the Redis service is created before the web application deployment; otherwise, DNS service discovery could be used. I used a simple script to hit the web API repeatedly. I also manually deleted Pods hosting the web API, and thanks to Kubernetes' desired-state magic, new instances kept being created automatically. This was the result of hitting the service:


Requests go through AWS load balancing to Kubernetes nodes. The service passes the requests to Pods hosting the API.

Kubernetes is one of the fastest-moving open source projects, and I think the greatest thing about it is the community and wide support. So if you're planning to host containerized workloads, give it a try!



Saturday, May 20, 2017

Detecting applications causing SQL Server locks

On one of our testing environments, login attempts to a legacy web application that uses MS SQL Server were timing out and failing. I suspected that the reason might be that another process was locking one of the tables needed in the login process.
I ran a query similar to this:

SELECT request_mode,
 request_type,
 request_status,
 request_session_id,
 resource_type,
 resource_associated_entity_id,
 -- resolve the name for object-level locks; 0 means no associated object
 CASE resource_associated_entity_id 
  WHEN 0 THEN ''
  ELSE OBJECT_NAME(resource_associated_entity_id)
 END AS Name,
 -- session columns that identify who is holding or requesting the lock
 host_name,
 host_process_id,
 client_interface_name,
 program_name,
 login_name
FROM sys.dm_tran_locks
JOIN sys.dm_exec_sessions
 ON sys.dm_tran_locks.request_session_id = sys.dm_exec_sessions.session_id
WHERE resource_database_id = DB_ID('AdventureWorks2014')


Which produces a result similar to:



It shows that an application has been granted an exclusive lock on the table EmailAddress, and another query is waiting for a shared lock to read from the table. But who is holding this lock? In my case, checking the client_interface_name and program_name columns of the result identified a long-running VBScript import job that was locking the table. I created a simple application that simulates a similar condition, which you can check on GitHub. You can run the application and then run the query to see the results.
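The gist of the simulator is an open transaction that has modified the table and then just sits there. A minimal C# sketch of the idea (not the actual repository code; it assumes the AdventureWorks2014 Person.EmailAddress table):

// Sketch of the simulator's idea, not the actual GitHub code.
// An open transaction that modified a row holds an exclusive lock
// until it commits or rolls back, blocking readers of the table.
using System;
using System.Data.SqlClient;
using System.Threading;

class LockSimulator
{
    static void Main()
    {
        var connectionString =
            "Server=.;Database=AdventureWorks2014;Integrated Security=true;" +
            "Application Name=LockSimulator"; // surfaces in program_name

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                var command = connection.CreateCommand();
                command.Transaction = transaction;
                command.CommandText =
                    "UPDATE Person.EmailAddress SET ModifiedDate = GETDATE() " +
                    "WHERE EmailAddressID = 1";
                command.ExecuteNonQuery(); // the exclusive lock is now held

                // Keep the transaction open so the lock persists.
                Thread.Sleep(TimeSpan.FromMinutes(5));
                transaction.Rollback();
            }
        }
    }
}

While the sleep is in progress, running the locks query above should show the locks it holds, with LockSimulator in the program_name column.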

It's a good practice to include the "Application Name" property in your connection strings (as in the provided application source code) to make diagnosing this kind of error easier.

Saturday, February 18, 2017

Abuse of Story Points

Relative estimates are usually recommended for Agile teams. However, nothing mandates a specific sizing unit like story points or T-shirt sizes. I believe that, used correctly, relative estimation is a powerful and flexible tool.
I usually prefer T-shirt sizing for road-mapping, to determine which features will be included in which releases. When epics are too large and subject to many changes, it makes sense to use an estimation technique that is quick and fun and doesn't give a false indication of accuracy.
On the release level, estimating backlog items using story points helps with planning and creates a shared understanding between all team members. Used incorrectly, however, story points can leave the team really frustrated and looking to avoid them in favor of another estimation technique.

In a team I'm working with, one of the team members suggested during a sprint retrospective that we change the estimation technique from story points to T-shirt sizing. The reasons were:
  • Velocity (measured by story points achieved in a sprint) is sometimes used to compare the performance of different teams.
  • Story points are used as a tool to force the team to do a specific amount of work during a sprint.
Both reasons make a good case against the use of story points.

The first one clearly contradicts the relative nature of story points, as each team has a different capacity and baseline for its estimates. Also, the fact that some teams use velocity as a primary success metric is a sign of a crappy agile implementation.
The second point is also a bad indicator, because you simply get what you ask for: if the PO/SM/Manager wants higher velocity, then inflated estimates are what (s)he gets. Quite similar to the Observer effect.

Fortunately, in our case both of these concerns were based on observations from other teams. Both the Product Owner and Scrum Master were knowledgeable enough to avoid these pitfalls, and they explained how our team uses velocity just as a planning tool. However, the fact that some team members might be affected by the surrounding atmosphere in the organization is interesting, and it brings to attention the importance of having a consistent level of maturity and education.

What is your experience with using story points or any other estimation technique? What worked for you and what didn’t? Share your thoughts in a comment below.