
Sunday, July 23, 2017

My talk at DDDSydney 2017

It was very exciting to attend and speak at DDDSydney 2017. A lot of interesting topics were presented, and the organizers did a good job of classifying the sessions into tracks that one can follow to get a complete picture of a certain area of interest. For example, my session "Avoiding death by a thousand containers. Kubernetes to the rescue!" was the last in a track that included sessions about microservices and Docker, which made it a logical conclusion on how to host containerized microservices in a highly available and easy-to-manage environment.

In my demos I used AWS. This choice was intentional, since AWS doesn't support Kubernetes out of the box the way both Google Container Engine (GKE) and Azure Container Service (ACS) do, and I wanted to show that Kubernetes can be deployed to other environments as well. Thanks to Kops (Kubernetes Operations), it was relatively easy to deploy the Kubernetes cluster on AWS.
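For reference, standing up a cluster with Kops boils down to a couple of commands along these lines (the cluster name, S3 state bucket and availability zone below are placeholders, not the exact values from my demo):

    # Tell Kops where to keep its cluster state (an S3 bucket you own).
    export KOPS_STATE_STORE=s3://my-kops-state-bucket

    # Define the cluster, then apply the changes to AWS.
    kops create cluster --name=demo.k8s.local --zones=ap-southeast-2a --node-count=2
    kops update cluster demo.k8s.local --yes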
In this session I showed how to expose services using an external load balancer and how Deployments make it easy to declare the desired state of the Pods running in Kubernetes. I also demonstrated the very powerful concept of Labels and Selectors, which provide a loosely coupled way to connect Services to the Pods that contain the service logic.
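In YAML terms, the wiring looks roughly like the manifests below (names, image and replica count are illustrative rather than the exact ones from the talk): the Service selects Pods purely by label, and the Deployment stamps that label onto every Pod it creates.

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer        # Kubernetes provisions an AWS ELB for this Service
      selector:
        app: web                # loosely coupled: any Pod carrying this label gets traffic
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: apps/v1beta1    # the Deployment API version at the time of the talk
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3               # desired state: keep three Pods running at all times
      template:
        metadata:
          labels:
            app: web            # the label the Service selector matches on
        spec:
          containers:
          - name: web
            image: nginx:1.13
            ports:
            - containerPort: 80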


I also demonstrated how easy it is to perform an update to the deployment by switching from Nginx to Apache (httpd).
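In practice that's a one-liner (a sketch, assuming the Deployment and its container are both named web as in the manifest above); Kubernetes then rolls the Pods over to the new image:

    kubectl set image deployment/web web=httpd:2.4   # swap Nginx for Apache httpd
    kubectl rollout status deployment/web            # watch the rolling update complete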
In another demo I wanted to show how to connect services inside the cluster. I built a simple .NET Core web application that counts the number of hits each frontend gets. The hit count is stored in a Redis instance that is exposed through a Service.
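The Redis side is just another Deployment plus a Service; the sketch below is illustrative and assumes the Service is simply named redis.

    apiVersion: v1
    kind: Service
    metadata:
      name: redis               # the Service name the web application relies on
    spec:
      selector:
        app: redis
      ports:
      - port: 6379              # no LoadBalancer here, so Redis stays internal to the cluster
    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: redis
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
          - name: redis
            image: redis:3.2
            ports:
            - containerPort: 6379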


The interesting part is how the web application determines the address of the Redis instance. Since a Docker image should be immutable once created, configuration should be stored in the environment.
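In the application that lookup is only a few lines; the snippet below is a sketch of the idea, assuming the StackExchange.Redis client (the demo code may differ in detail):

    using System;
    using StackExchange.Redis;

    // Kubernetes injects REDIS_SERVICE_HOST into Pods created after the redis Service exists.
    var redisHost = Environment.GetEnvironmentVariable("REDIS_SERVICE_HOST") ?? "localhost";

    // Inside the request handler: connect and bump this frontend's hit counter.
    var redis = ConnectionMultiplexer.Connect(redisHost);
    long hits = redis.GetDatabase().StringIncrement($"hits:{Environment.MachineName}");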

As shown in the code snippet above, the environment variable REDIS_SERVICE_HOST is used to get the address of the Redis Service. This environment variable is automatically populated by Kubernetes because the Redis Service is created before the web application Deployment; alternatively, DNS-based service discovery could be used. I used a simple script to hit the web API, and I also manually deleted Pods hosting the web API; thanks to Kubernetes' desired-state magic, replacement instances kept being created automatically.
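The script itself was nothing fancy; something along these lines (the ELB hostname is a placeholder for the address Kubernetes reports for the Service) is enough to watch the counters climb while Pods come and go:

    # Hit the web API once a second through the AWS ELB created for the Service.
    while true; do
      curl -s http://<elb-hostname>/
      echo
      sleep 1
    done

    # In another terminal: delete the web Pods and watch Kubernetes replace them.
    kubectl delete pod -l app=web
    kubectl get pods -w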


Requests go through the AWS load balancer to the Kubernetes nodes, and the Service then routes them to the Pods hosting the API.

Kubernetes is one of the fastest-moving open source projects, and I think the greatest thing about it is its community and wide support. So if you're planning to host containerized workloads, give it a try!