Paravirtualization

Paravirtualization is a type of virtualization which allows us to install and run one or more different operating systems (guest OS) on a single physical server (host OS). You might be tempted to compare it with a hypervisor, and you wouldn’t be wrong: the two have similarities as well as differences.

A hypervisor is software that allows installing one or more different operating systems on a single physical server. The OS which runs the hypervisor is called the host OS, and an OS which is run by the hypervisor is called a guest OS. In classic (full) virtualization, the guest OS doesn’t have a clue that it is running in a virtualized environment. This is where full virtualization differs from paravirtualization.

So how does Paravirtualization work?

Paravirtualization – Source: https://wiki.xen.org/wiki/Paravirtualization_(PV)

In full virtualization, the hypervisor acts as a relay between the guest OS and the hardware, and the guest runs unmodified. In paravirtualization, the guest OS needs to know that it is interacting with a hypervisor and hence makes special API calls (hypercalls) instead of executing certain privileged instructions. This requires modifications to the guest OS and hence creates a tight coupling between the guest OS and the hypervisor used. Because proprietary OS vendors such as Microsoft do not allow OS-level modifications, you would typically end up using open-source operating systems such as Linux. A good combination is Linux with the Xen hypervisor.
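The difference between the two approaches can be sketched as a toy model. This is not real systems code: the class, method names, and instruction shapes below are illustrative inventions, meant only to show that a paravirtualized guest calls the hypervisor’s API knowingly, while a fully virtualized guest executes a privileged instruction that the hypervisor must trap and emulate behind its back.

```javascript
// Toy model of paravirtualization vs. full virtualization.
// All names (Hypervisor, hypercall, trapAndEmulate) are hypothetical.
class Hypervisor {
  constructor() { this.pageTable = new Map(); }

  // Explicit API exposed to paravirtualized guests ("hypercalls").
  hypercall(op, args) {
    if (op === "update_page_table") {
      this.pageTable.set(args.virtual, args.physical);
      return "ok";
    }
    throw new Error(`unknown hypercall: ${op}`);
  }

  // Full virtualization: the hypervisor detects ("traps") the privileged
  // instruction and emulates it -- extra indirection the guest never sees.
  trapAndEmulate(instruction) {
    if (instruction.kind === "mov_to_cr3") {
      this.pageTable.set(instruction.virtual, instruction.physical);
      return "emulated";
    }
    throw new Error("cannot emulate");
  }
}

// A paravirtualized guest *knows* about the hypervisor and calls it directly.
function paravirtGuest(hv) {
  return hv.hypercall("update_page_table", { virtual: 0x1000, physical: 0x2000 });
}

// A fully virtualized guest just "executes" the privileged instruction;
// the hypervisor intercepts it without the guest's knowledge.
function fullVirtGuest(hv) {
  return hv.trapAndEmulate({ kind: "mov_to_cr3", virtual: 0x1000, physical: 0x2000 });
}

console.log(paravirtGuest(new Hypervisor())); // "ok"
console.log(fullVirtGuest(new Hypervisor())); // "emulated"
```

The point of the sketch is the coupling: the paravirtualized guest’s code would have to change if the hypercall interface changed, which is exactly why guest OS modifications are required.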

You might be wondering if there are any real advantages of paravirtualization. It has shown performance improvements for certain applications and use cases, but the gains have not proven reliable. Especially with the rise of software-level virtualization, which is highly reliable, the need for paravirtualization has gone down. The cost associated with guest OS modifications and tight coupling with the hypervisor is not offset by the not-so-predictable performance gains. So, you won’t find many instances where paravirtualization is used.

Related Links

Related Keywords

Hypervisor, Virtualization, Containers

GraphQL

GraphQL is a query language that was developed by Facebook for their internal development and later open-sourced in 2015. It has been gaining traction since then, but still has a lot of ground to cover before it can be called mainstream.

GraphQL – query language to fetch and manipulate data

What are the main advantages of GraphQL?

GraphQL greatly simplifies the fetching and manipulation of data, even when the operations tend to be complex. For a “fetch” type of query, it provides exactly the data requested by the client – no more, no less. This makes the communication highly efficient. The important advantage is that the response format is specified by the client rather than decided by the server. Also, the structure of the data is not hardcoded as it is in REST APIs.
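The “no more, no less” behaviour can be illustrated with a minimal hand-rolled sketch. This is not the real graphql-js library – the record, the `resolve` helper, and the requested-shape format are all invented for illustration – but it shows the core idea: the client describes the shape it wants, and the server returns exactly those fields.

```javascript
// A server-side record with more data than any one client needs.
const userRecord = {
  id: 42,
  name: "Ada",
  email: "ada@example.com",
  address: { city: "London", zip: "EC1A" }, // available, but not all requested below
};

// resolve() walks the client's requested shape and copies only matching
// fields, recursing into nested objects -- a toy stand-in for GraphQL
// field resolution.
function resolve(record, shape) {
  const out = {};
  for (const [field, sub] of Object.entries(shape)) {
    out[field] = sub === true ? record[field] : resolve(record[field], sub);
  }
  return out;
}

// The client asks only for name and city -- analogous to the GraphQL
// query { user { name address { city } } }.
const result = resolve(userRecord, { name: true, address: { city: true } });
console.log(result); // { name: 'Ada', address: { city: 'London' } }
```

Note that `id`, `email`, and `zip` never leave the server, even though they sit in the same record.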

GraphQL makes complex requests very simple to implement as compared to REST APIs. For example, with REST APIs, additional filters and options are both pushed into query string parameters, which makes the request unreadable and makes it difficult to tell which parameter is a filter and which one is an option. GraphQL, on the other hand, allows forming queries in a nested format, making them readable as well as easy to process.
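As a concrete illustration, consider a hypothetical blog schema (the `post`, `comments`, and `author` fields below are invented for this example). The nesting makes it obvious which part is a filter (`id: 7`) and which is an option (`first: 3`):

```graphql
# Fetch one post plus its first 3 comments, each with the author's name.
# A REST equivalent might flatten this into something like
#   /posts/7?include=comments&comment_limit=3&fields=author.name
# where filters and options blur together in the query string.
query {
  post(id: 7) {
    title
    comments(first: 3) {
      text
      author {
        name
      }
    }
  }
}
```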

GraphQL server implementations are available in several languages including JavaScript, Python, PHP, Ruby, etc. A ready-made reference implementation from Facebook (graphql-js) is also available, which runs on Node.js.

Types of GraphQL operations

As can be guessed, it supports two types of operations:

  • Query – fetch data
  • Mutation – manipulate data (Create, Update, Delete)

Typically, you use a GET request to fetch data and a POST request to manipulate it. Using GET allows the responses to be cached at various levels, which adds to the efficiency of the overall architecture. (In practice, many GraphQL clients simply send every operation as a POST to a single endpoint.)

Both REST and GraphQL support the evolution of APIs, i.e. the addition of new fields and the deprecation of unused fields, without breaking existing clients.

You can read more about REST and GraphQL here and here. But essentially, both have their own advantages and in a microservices architecture, both could live together happily!!

Related Links

Related Keywords

Microservices Architecture, REST, Node.js

Serverless Computing

Serverless Computing is a computing model where servers are not used. Are you kidding me? You are right – that is not the case. “Serverless Computing” is actually a misnomer. What it really means is that developers are freed from managing the servers and can focus on their code/application to be executed in the cloud.

Serverless Computing – You won’t need such a huge space!!

So, how does Serverless Computing really work?

In this computational model, you only need to specify minimum system requirements such as RAM. Based on that, the cloud provider provisions the required resources such as CPU and network bandwidth. Whenever you need to execute your code, you hit the endpoint: in the background, the cloud provider provisions the required resources, executes the code, and releases the allocated resources. This allows the cloud provider to use the infrastructure for other customers when your functions are idle. It also benefits the customers, as they are not charged for the idle time. A win-win situation for both, isn’t it?

Serverless Computing is popular in microservices architecture. The small pieces of code which form the microservices can be executed using serverless components very easily. On top of that, the cloud provider also handles scaling up: if more customers hit your serverless endpoint, the cloud provider automatically allocates more resources and gets the code executed.

Any precautions to be taken?

Absolutely. Every developer needs to keep a few things in mind while developing code for Serverless Computing:

  • The code can’t have too many initialization steps, or they will add to the latency of every invocation.
  • Multiple copies of the code may execute in parallel, so there shouldn’t be any interdependency that would leave the application or data in an incoherent state.
  • You don’t have access to the instance, so you can’t really assume anything about the hardware.
  • You don’t have access to local storage, so you need to keep all your data in a shared location or a central cache.
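The “no local state” rule above can be sketched as follows. The in-memory `Map` here is only a stand-in for an external store such as Redis or DynamoDB, and the handler name and store API are invented for illustration; the point is that state lives in the injected store, never in a module-level variable that a fresh instance would lose.

```javascript
// Stand-in for an external cache/database; in a real deployment this would
// be a network-backed store, since each invocation may land on a fresh
// instance with empty module state.
const sharedStore = new Map();

async function countingHandler(event, store = sharedStore) {
  const key = `hits:${event.path}`;
  const hits = (store.get(key) || 0) + 1; // read-modify-write via the store,
  store.set(key, hits);                   // not via a local module variable
  return { statusCode: 200, body: JSON.stringify({ path: event.path, hits }) };
}

countingHandler({ path: "/home" }).then((r) => console.log(r.body));
```

With a real shared store, two parallel instances incrementing the same key would also need the store’s atomic-increment operation rather than this read-then-write pattern – the interdependency caveat from the list above.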

So who all provide this option?

  • AWS (obviously!!) was the first one to provide this (2014) – AWS Lambda.
  • Microsoft Azure provides Azure Functions
  • Google Cloud provides Google Cloud Functions
  • IBM has OpenWhisk, which is an open-source serverless platform

Support for languages varies from provider to provider; however, all of these offerings support Node.js. Among the other languages, Python and Java are the most popular.

Related Links

Related Keywords

Serverless Architecture, API Gateway, AWS S3, Microservices Architecture