Paravirtualization

Paravirtualization is a type of virtualization that allows you to install and run one or more different operating systems (guest OS) on a single physical server (host OS). You might be tempted to compare it with a hypervisor, and you wouldn’t be wrong; the two have similarities as well as differences.

A hypervisor is software that allows installing one or more different operating systems on a single physical server. The OS that runs the hypervisor is called the host OS, and the OS that the hypervisor runs is called the guest OS. In this setup the guest OS doesn’t have a clue that it is running in a virtualized environment, and this is where full virtualization differs from paravirtualization.

So how does Paravirtualization work?

Paravirtualization – Source: https://wiki.xen.org/wiki/Paravirtualization_(PV)

In virtualization, the hypervisor acts as a relay between the guest OS and the host OS. In paravirtualization, the guest OS needs to know that it is interacting with a hypervisor and therefore makes explicit API calls (hypercalls) instead of issuing privileged instructions. This requires modifications to the guest OS and creates a tight coupling between the guest OS and the hypervisor being used. Since proprietary OS vendors such as Microsoft do not allow OS-level modifications, you typically end up using an open-source operating system such as Linux. A good combination is Linux with the Xen hypervisor.
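
To make the distinction concrete, here is a purely illustrative Python sketch. The function names are hypothetical stand-ins, not real Xen or guest-kernel APIs; the sketch only contrasts the trap-and-emulate path of full virtualization with the explicit hypercall path of paravirtualization.

# Purely illustrative sketch: hypothetical functions, not real hypervisor APIs.

def trap_and_emulate(privileged_op: str) -> None:
    """Full virtualization: the unmodified guest issues a privileged instruction
    as if it owned the hardware; the hypervisor traps it and emulates the effect."""
    print(f"hypervisor trapped and emulated: {privileged_op}")

def hypercall(operation: str) -> None:
    """Paravirtualization: the modified guest knows it runs under a hypervisor
    and calls the hypervisor's API directly instead of being trapped."""
    print(f"hypervisor serviced hypercall: {operation}")

# Unmodified guest under full virtualization:
trap_and_emulate("update page table")

# Paravirtualized guest (e.g. Linux on Xen PV):
hypercall("update page table")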

You might be wondering whether paravirtualization has any real advantages. It has shown performance improvements for certain applications and use cases, but those gains have not proven consistently reliable. Especially with the rise of software-level virtualization, which offers very high reliability, the need for paravirtualization has gone down. The cost of modifying the guest OS and the tight coupling with the hypervisor are not offset by the not-so-predictable performance gains, so you won’t find many instances where paravirtualization is used.

Related Links

Related Keywords

Hypervisor, Virtualization, Containers

Material Design

Don’t be put off by the title; this is not about a civil engineering topic or interior design work. Material Design is a design language introduced by Google that is prompting designers to rethink web and mobile app UI. Here, “material” is a metaphor: it helps you visualize a web or mobile app as a physical object that has its own properties and responds to touch, zoom, pinch, and so on. Applications following Material Design behave uniformly across applications as well as across device sizes, giving the user a consistent experience.

Where can I see Material Design in action?

If you are on an Android phone, it is everywhere in native apps (Android 5 onwards). When you use the Chrome browser, check out the settings and you will see it there. If you click on a text field, the placeholder quickly moves to the top left. If you click on a tab, the bottom border of the tab is highlighted. Buttons have a shadow effect that gives a feeling of depth. All of these behaviors are based on Material Design principles.

So what are the principles it is based upon?

  • Each component occupies its own layer, as if physical objects were stacked one on top of the other. They will not merge with each other, nor will they pass through each other. In simple terms, they appear as 3D objects.
  • A visual effect on one component does not carry over to other components.
  • Some components react to actions (such as touch, swipe, or zoom) faster than others.
  • Every user interaction with the screen gives the user instant feedback that the action has happened.
  • Components are aware of their surroundings. For example, touching a button can push other components aside to make room for an input field.

Material Design – Example screen – Source: https://material.io

Similar to Bootstrap, several pre-built components are available for developers to use. Refer to this article, which has an excellent comparison between Bootstrap and Material Design.

In addition to Google’s own documentation, there are some very comprehensive articles about this design language; check the reference section below.

Related Links

Related Keywords

Bootstrap, Mobile App, Web App, Design, CSS

Kubernetes

Kubernetes is a system for automating the deployment, scaling, and management of containerized applications. As a loose analogy, what Chef is for infrastructure deployments, Kubernetes is for container deployments.

History of Kubernetes

Kubernetes Logo – Source: GitHub

Kubernetes grew out of Google’s internal cluster-management system called “Borg” and was released as open source, reaching version 1.0 in 2015. The project’s internal codename was “Seven”, which is reflected in the seven spokes of the ship’s wheel in its logo.

Kubernetes is maintained by the Cloud Native Computing Foundation (CNCF), which was founded as a collaboration between Google and the Linux Foundation.

What does Kubernetes offer?

It manages “immutable infrastructure” in an automated fashion. Typically, containers are launched from images that do not change. If you want to change the software that is part of a container, you simply build an updated image, launch new containers from it, and throw away the older containers. Kubernetes lets you do this efficiently.

The lowest-level abstraction that Kubernetes provides is called a “Pod”. It consists of one or more containers that run together on the same “node”. A node is a server or VM that is part of the Kubernetes cluster. The containers in a Pod can share resources and communicate with each other, which allows tightly knit services to run from the same Pod.
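
As a concrete illustration, here is a minimal sketch using the official Kubernetes Python client, assuming the kubernetes package is installed and a valid kubeconfig is available; the names and images are only examples. It defines a Pod with two co-located containers:

from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (e.g. ~/.kube/config).
config.load_kube_config()

# A Pod whose two containers are scheduled together and share the Pod's network.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="nginx", image="nginx:1.25"),
            client.V1Container(
                name="sidecar",
                image="busybox",
                command=["sh", "-c", "sleep 3600"],
            ),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)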

In Kubernetes, Pods are not updated in place; they are simply destroyed, and new Pods are created whenever any of their containers need to be updated.

“Deployments” describe the desired state of a set of Pods: their configuration, dependencies, resource requirements, and so on. Kubernetes then works to keep the cluster in that state. Kubernetes also has a “ReplicationController”, which allows you to replicate Pods as needed.
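
For example, here is a hedged sketch, again using the Python client with example names and images, of a Deployment that keeps three replicas of a Pod running and then rolls out a new image, causing Kubernetes to replace the old Pods:

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Desired state: three replicas of a Pod running the nginx:1.25 image.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Rolling out a new image: Kubernetes destroys the old Pods and creates fresh ones.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"template": {"spec": {
        "containers": [{"name": "nginx", "image": "nginx:1.26"}]}}}},
)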

Leading cloud providers such as AWS, Microsoft, and Google offer managed Kubernetes services: Amazon Elastic Kubernetes Service (Amazon EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).

Related Links

Related Keywords

Docker, Container, DevOps, Chef, Puppet

Tutorial

Chef

“Chef” is an automation platform for DevOps. It helps you manage the configuration of your servers in an automated fashion, whether that means pushing patches or distributing configuration files to a set of servers in your server farm. All such activities can be managed using this platform.

Chef carefully crafting her recipes

What problem does Chef solve?

As your infrastructure grows, it becomes difficult to maintain all the servers at all times. You may have all your servers on-premise (very rare these days), completely in the cloud, or in a hybrid setup. Irrespective of where the servers are located, you want all of them to be manageable, and you want to push configuration changes in a jiffy without worrying about how many servers need to be updated. With 5-10 servers you may do it manually, but as the number grows the task becomes daunting. This is precisely where the Chef platform comes to the rescue.

How does Chef work?

Chef helps you treat your infrastructure as code! What does that mean? You write down the configuration you want on a given set of servers; this is known as a recipe. Each recipe can be tested and pushed to target servers, referred to as “nodes”, where it is executed to bring the server configuration in line with what the recipe describes. All of this can be driven from a single machine, no matter how many servers need to be updated.

Each node registers itself with the Chef server and pulls the recipes that need to be executed on it. Even after execution, it periodically checks with the Chef server for new or updated recipes. This allows the administrator to roll out configuration changes without having to log in to or connect to individual servers.
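
The pull model itself is easy to picture. The sketch below is not Chef code and every name in it is hypothetical; it only illustrates the loop each node’s agent performs, periodically asking the server which recipes apply to it and converging its own configuration:

import time

POLL_INTERVAL_SECONDS = 30 * 60  # clients typically check in on a schedule

def fetch_run_list(node_name):
    """Hypothetical stand-in for the node asking the server for its recipes."""
    return ["recipe[nginx]", "recipe[base::hardening]"]

def converge(recipe):
    """Hypothetical stand-in for executing a recipe to reach the desired state."""
    print(f"applying {recipe}")

def client_loop(node_name):
    while True:
        for recipe in fetch_run_list(node_name):  # nodes pull; nothing is pushed to them
            converge(recipe)
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    client_loop("web-01")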

Chef Block Diagram – Source: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/chef-automation

A collection of recipes is called a cookbook. Several cookbooks are available out of the box, and you can pick up ready-made cookbooks from the “Chef Supermarket” marketplace. Of course, you can create your own recipes and cookbooks as well.

Chef has built-in support for building, testing, and deploying recipes. It also provides tools such as Chef Automate, which lets you monitor and collect data across your servers from a single dashboard and notifies you if any server drifts from its intended configuration so that you can fix it.

Because of these features, Chef is an important part of the DevOps toolset.

Related Links

Related Keywords

DevOps, Puppet, Ansible, Saltstack, Fabric

ART – Android RunTime

In simple terms, ART stands for Android Runtime, the environment the Android operating system uses to run applications. It is the successor to the earlier “Dalvik” virtual machine, which was part of the Android operating system up to Android 4.4 (KitKat).

What are the key features of Android Runtime (ART)?

ART introduced “ahead-of-time (AOT)” compilation of applications. When an application is installed on an Android device, its bytecode is translated into machine instructions as part of the installation. This improves execution performance tremendously. In the earlier days, when Dalvik was part of the Android OS, applications were compiled “just in time (JIT)” during execution, which slowed them down.

Android Runtime – Android Software Stack. [Source: https://developer.android.com/guide/platform/index.html]

You might be wondering why someone would choose JIT compilation. Remember, those were the days when mobiles were not as powerful as today’s and had far less storage, so Android had to keep application sizes small. Compiled versions of applications tend to be somewhat larger, hence the trade-off.

Another drawback of the Dalvik runtime was battery consumption, since it compiled the code every time an application was started.

By introducing AOT, the Android Runtime saves a lot of battery and also improves execution performance. There is a small price to pay: application installation takes longer than it did on Dalvik, because the code is compiled during installation. As you would probably agree, though, that is a small price to pay.

Among ART’s other benefits, improved memory allocation and garbage collection are important ones. It also provides debugging features and high-level profiling of applications.

ART is backward compatible with Dalvik, so an application that runs well on Dalvik should work well on ART too. However, the reverse may not be true.

Related Links

Related Keywords

Java Runtime Environment (JRE), Dalvik, JIT, AOT, JVM, Bytecode