Kubernetes

Kubernetes is a system for automating the deployment, scaling, and management of containerized applications. As a loose analogy, what Chef is for infrastructure deployments, Kubernetes is for container deployments.

History of Kubernetes

Kubernetes Logo (Source: GitHub)

Kubernetes grew out of Google’s internal cluster management system “Borg” and was open-sourced in 2014, with version 1.0 released in 2015. The project’s internal codename was “Project Seven”, a reference that survives in the seven spokes of the ship’s-wheel logo.

Kubernetes is maintained by the Cloud Native Computing Foundation (CNCF), a collaboration between Google and the Linux Foundation.

What does Kubernetes offer?

It manages “immutable infrastructure” in an automated fashion. Containers are launched from images, which do not change. If you want to change the software that runs inside a container, you update the image, launch new containers from it, and throw away the old ones. Kubernetes allows you to do this efficiently.

The lowest-level abstraction that Kubernetes provides is called a “Pod”. A Pod consists of one or more containers that run on the same “node”; a node is a server or VM that is part of the Kubernetes cluster. The containers in a Pod can share resources and communicate with each other, which allows tightly knit services to be run from the same Pod.

In Kubernetes, Pods are not updated in place but are simply destroyed: new Pods are created whenever any of their containers needs to be updated.

“Deployments” describe what runs on the Kubernetes cluster. They store the configuration, dependencies, resource requirements, access settings, and so on. Kubernetes also has a “ReplicationController” which keeps the desired number of Pod replicas running.
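As a minimal sketch of these concepts, assuming the official Kubernetes Python client (`pip install kubernetes`), a cluster reachable from your kubeconfig, and illustrative names and images, the following declares a Deployment of two replicated Pods; the image update at the end makes Kubernetes replace the old Pods with new ones rather than mutate them:

```python
from kubernetes import client, config

config.load_kube_config()  # connect using your local kubeconfig
apps = client.AppsV1Api()

labels = {"app": "web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # keep two copies of the Pod running
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(  # the Pod: one container here
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web",
                                               image="example/web:1.0")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Immutable update: point the Deployment at a new image; Kubernetes
# destroys the old Pods and creates fresh ones instead of patching them.
apps.patch_namespaced_deployment(
    name="web", namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "example/web:2.0"}]}}}},
)
```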

Leading cloud providers like AWS, Microsoft, and Google offer managed Kubernetes services, namely Amazon Elastic Kubernetes Service (Amazon EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).

Related Keywords

Docker, Container, DevOps, Chef, Puppet

Serverless Computing

Serverless Computing is a computing model where servers are not used. Are you kidding me? You are right, that is not the case. Serverless Computing is actually a misnomer. What it really means is that developers are freed from managing the servers and are free to focus on their code/application to be executed in the cloud.

Serverless Computing – You won’t need such a huge space!!

So, how does Serverless Computing really work?

In this computational model, you only need to specify minimum system requirements, such as RAM. Based on that, the cloud provider provisions the remaining resources, such as CPU and network bandwidth. Whenever you need to execute your code, you hit an endpoint; in the background, the cloud provider provisions the required resources, executes the code, and releases the allocated resources. This allows the cloud provider to use the infrastructure for other customers while yours is idle, and it benefits you as a customer because you are not charged for the idle time. A win-win situation for both, isn’t it?
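For example, a minimal AWS Lambda handler in Python looks like the sketch below (the payload fields are illustrative assumptions); you write only this function, while provisioning, execution, and teardown are the provider’s job:

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes for each request.

    `event` carries the request payload (e.g. from an API Gateway call);
    `context` exposes runtime metadata such as the remaining time.
    """
    name = event.get("name", "world")  # "name" is an assumed payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```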

Serverless Computing is popular in microservices architectures. The small pieces of code which form the microservices can be executed very easily using serverless components. On top of that, the cloud provider also handles scaling up: if more customers hit your serverless endpoint, the cloud provider automatically allocates more resources and gets the code executed.

Any precautions to be taken?

Absolutely. Every developer needs to keep a few things in mind while developing code for Serverless Computing:

  • The code can’t have too many initialization steps, else they will add to the latency.
  • The code is executed in parallel, so there shouldn’t be any interdependency which would leave the application or data in an incoherent state.
  • You don’t have access to the instance, so you can’t really assume anything about the hardware.
  • You don’t have access to local storage, so you need to store all your data in some shared location or central cache (see the sketch after this list).
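To illustrate the last two points, here is a minimal sketch of a stateless handler, assuming AWS Lambda and the boto3 SDK; the bucket name and key layout are illustrative assumptions:

```python
import json

import boto3

s3 = boto3.client("s3")  # shared, durable storage instead of the local disk

BUCKET = "my-shared-app-data"  # hypothetical bucket name

def lambda_handler(event, context):
    # Persist the incoming record to S3 rather than local storage: the
    # instance serving the next invocation may be a completely different one.
    record_id = event["id"]  # "id" is an assumed payload field
    s3.put_object(
        Bucket=BUCKET,
        Key=f"records/{record_id}.json",
        Body=json.dumps(event),
    )
    return {"statusCode": 200, "body": json.dumps({"stored": record_id})}
```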

So who provides this option?

  • AWS (obviously!!) was the first one to provide this (2014) – AWS Lambda.
  • Microsoft Azure provides Azure Functions
  • Google Cloud provides Google Cloud Functions
  • IBM has OpenWhisk, which is an open-source serverless platform

Support for languages varies from provider to provider; however, all of these offerings support Node.js. Among the other languages, Python and Java are the most popular.

Related Keywords

Serverless Architecture, API Gateway, AWS S3, Microservices Architecture

VPC – Virtual Private Cloud

When you host your application in the cloud, you worry about the security of your resources, especially data in transit. No one wants their data snooped on, or unauthorized access to their servers in the cloud. For such scenarios, VPC, i.e. Virtual Private Cloud, comes into the picture.

A Virtual Private Cloud is essentially a network that is logically isolated from the networks of other tenants on the cloud. VPC is terminology introduced by AWS; however, other IaaS providers have similar concepts. In Google Cloud you also get a “Virtual Private Cloud”, whereas in Azure you get a “virtual network”.

Key features of VPC

Since this is your own private network in the cloud, you get a lot of control over its configuration and implementation. You can define subnets, routes, and network ACLs (access control lists). Additionally, you also control which subnets have access to the internet and which do not.

Within a VPC, the network administrator can set up resources such as virtual machines, containers, or databases. These resources can live in a single subnet or across multiple subnets, and routes can be defined so that only certain subnets can access a given subnet. This gives the network administrator very good control over her network.
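As a minimal sketch of these ideas, assuming AWS and the boto3 SDK (the CIDR blocks and names are illustrative), the following creates a VPC with one subnet and attaches an internet route only to that subnet; any other subnet in the VPC would stay private:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the logically isolated network and carve one subnet out of it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
subnet_id = subnet["Subnet"]["SubnetId"]

# An internet gateway plus a route table decide which subnets can reach
# the internet; subnets without this association remain private.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```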

Typical VPC implementation in AWS

AWS VPC Implementation (Source: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html)

Are there any differences between VPC in AWS and Azure?

Although they are very similar in concept, VPC in AWS and Virtual Network in Azure have some differences.

  • AWS provides a wizard to create a Virtual Private Cloud with 4 different basic options; Azure doesn’t have a wizard.
  • AWS allows you to use both Security Groups and Network ACLs to control access; in Azure, you can use only one of them at a time.
  • AWS provides custom route tables to control access within the VPC; Azure doesn’t have such a feature.

Related Keywords

AWS, Cloud, Azure, Virtual Network, Google Cloud

API Gateway

An API Gateway is a server that acts as a single entry point to your system. As you can imagine, the primary function of this server is to provide various API endpoints to various clients. It also hides the backend services from the client.

Why should I use API Gateway?

Consider a complex architecture where a variety of clients access your system. In this age, monolithic applications are getting outdated, whereas microservices-based applications are popular because of their efficiency and maintainability. As a result, each client needs to make several calls to get data from the various services in your system; even to render one page, a client can end up making several calls. This is a problem on mobile networks, which typically have latency issues. To add to this complexity, each client could have its own requirements, which could also mean client-specific code.

All these problems can be solved by implementing an API Gateway. This server provides a single point of reference for all the clients. It detects the client and breaks the request into multiple backend requests to fetch the data, with the additional benefit of consolidating the responses from all the backend requests into a single response for the client.

API Gateway simplified diagram [Source: https://www.nginx.com/blog/building-microservices-using-an-api-gateway/]
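A minimal sketch of this fan-out-and-consolidate behavior in Python, using only the standard library (the backend service URLs are hypothetical):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical backend microservices sitting behind the gateway.
BACKENDS = {
    "profile": "http://profile-service.local/api/user/42",
    "orders":  "http://order-service.local/api/orders?user=42",
    "reviews": "http://review-service.local/api/reviews?user=42",
}

def fetch(url: str) -> dict:
    """Call one backend service and parse its JSON response."""
    with urlopen(url, timeout=2) as resp:
        return json.load(resp)

def handle_page_request() -> str:
    """Fan one client request out to all backends in parallel, then
    consolidate the answers into a single response for the client."""
    with ThreadPoolExecutor() as pool:
        results = dict(zip(BACKENDS, pool.map(fetch, BACKENDS.values())))
    return json.dumps(results)
```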

Are there any drawbacks?

Yes, as is the case with almost everything in life! If due care is not taken, the API Gateway itself can become heavy and a bottleneck. All developers are required to update the API Gateway whenever they make changes to the endpoints or protocol used to access their respective services. And because the API Gateway gathers data from multiple web services, the failure of even one service could make the entire service unavailable, or add delays in the response to the client.

However, there are already ways to counter these drawbacks. Proper processes such as DevOps can keep the API Gateway from becoming a developer bottleneck, and circuit-breaker libraries such as Netflix Hystrix can avoid an overall service breakdown even in the case of a partial service outage.
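To make the circuit-breaker idea concrete, here is a minimal Python sketch (the failure threshold and reset timeout are illustrative; a production system would use a mature library such as Hystrix rather than this hand-rolled version):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive errors,
    calls fail fast for `reset_timeout` seconds instead of hitting the
    struggling backend, protecting the gateway's overall response time."""

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```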

Example Implementations / Providers:

  • AWS API Gateway
  • Azure API Management
  • Vert.x
  • JBoss Apiman

Related Keywords

Microservices Architecture, AWS, Azure, Hystrix