Cryptocurrency

In simple terms, a cryptocurrency is a digital currency that uses cryptography to secure transactions and to regulate the generation of new units of currency.

Examples of cryptocurrencies include Bitcoin, Litecoin and Namecoin.

Blockchain forms the basis of cryptocurrency. In a blockchain, data is stored in a verifiable, tamper-evident chain of blocks. This data is stored in a decentralized fashion, i.e. on multiple computers across the globe. As a result, the unavailability of any one computer does not stop anyone from verifying the chain.
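The tamper-evidence of a chain of blocks can be sketched with a toy example using only Python's standard hashlib. This is a deliberate simplification: real blockchains add proof-of-work, digital signatures and peer-to-peer networking on top of this linking idea.

```python
import hashlib

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = f"{index}|{data}|{prev_hash}".encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Build a toy chain: each block stores its predecessor's hash."""
    chain, prev = [], "0" * 64  # genesis block points at an all-zero hash
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; editing any earlier block breaks the links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash(block["index"], block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify(chain))                     # True for the untampered chain
chain[0]["data"] = "alice pays bob 500"  # tamper with history
print(verify(chain))                     # False: stored hashes no longer match
```

Because every block's hash depends on its predecessor's hash, changing one historical record invalidates every block after it, which is exactly what makes the chain verifiable.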

Cryptocurrency was invented out of the need for a decentralized system to manage and transact currency. People transact in this currency using public/private key pairs. The transactions are stored using blockchain technology, also referred to as a ledger. Due to its distributed nature and cryptographic format, it is very difficult to tamper with these transactions.

The algorithm used to generate a cryptocurrency typically allows only a finite supply of currency to be created. This is different from physical currency, whose supply is controlled by governments and the central banks of various countries.
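As an illustration, Bitcoin's commonly described issuance schedule (a 50 BTC block reward, counted in whole satoshis and halved every 210,000 blocks) converges to a finite supply just under 21 million coins; the figures below are a back-of-the-envelope sketch of that schedule, not consensus code:

```python
SATOSHIS_PER_BTC = 10**8
HALVING_INTERVAL = 210_000  # blocks between reward halvings

subsidy = 50 * SATOSHIS_PER_BTC  # initial block reward, in satoshis
total = 0
while subsidy > 0:
    total += HALVING_INTERVAL * subsidy
    subsidy //= 2  # integer halving; eventually the reward rounds to zero

print(total / SATOSHIS_PER_BTC)  # just under 21,000,000 BTC: a hard, finite cap
```

No central authority can extend this supply without changing the algorithm itself, which is the contrast with fiat currency the paragraph above draws.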

Even though strong cryptographic techniques are used, cryptocurrency is not yet fully safe from theft or hacking attempts; the Bitcoin ecosystem alone has seen thefts of cryptocurrency multiple times. Nevertheless, this technology still gives many observers and experts hope for a transportable currency that sits outside the influence of governments and central banks across the world.

The exchange value of a cryptocurrency is also determined by demand and supply. To facilitate the exchange of these currencies, several exchanges are now available across the globe.

The legal status of these currencies varies widely across the globe. Some countries have acknowledged them, treating them similarly to property or other assets, while others have banned them. It may take a few more years for this currency to gain wide acceptance across the world.

Related Keywords:

Cryptography, Bitcoin, Litecoin, Namecoin, Digital Currency

REST

Representational State Transfer, or REST, is an architectural style for software that has become very popular in recent times. It allows developers to define web services that can be accessed using URIs (Uniform Resource Identifiers) and a few standard verbs such as GET, PUT, DELETE and POST.

This is a stateless style of web service: the server does not maintain the state of client operations. The client, however, can retain state through a session, and the server can pass that state on to other components if required. The client requests a URL along with an operation, and the server sends back a response in a standard, well-defined format.

The client addresses each resource using a URI and expresses the required action on that resource using a standard verb. This moves the action out of the URL and into the HTTP request method. The response can be XML, JSON or any other agreed format.
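A minimal sketch of this style using only Python's standard library: a tiny server exposes a hypothetical /items resource, and the HTTP method, not the URL, decides whether the resource is read or removed. The resource names and store are invented for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ITEMS = {"1": {"name": "widget"}}  # toy in-memory resource store

class ItemsHandler(BaseHTTPRequestHandler):
    def _send(self, code, body):
        data = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self):  # GET /items/1 -> read the resource
        item_id = self.path.rsplit("/", 1)[-1]
        if item_id in ITEMS:
            self._send(200, ITEMS[item_id])
        else:
            self._send(404, {"error": "not found"})

    def do_DELETE(self):  # DELETE /items/1 -> same URI, different verb
        item_id = self.path.rsplit("/", 1)[-1]
        if ITEMS.pop(item_id, None) is not None:
            self._send(200, {"deleted": item_id})
        else:
            self._send(404, {"error": "not found"})

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ItemsHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/items/1"

with urllib.request.urlopen(url) as resp:          # GET reads it
    print(resp.status, json.loads(resp.read()))    # 200 {'name': 'widget'}

req = urllib.request.Request(url, method="DELETE")  # same URI, DELETE removes it
with urllib.request.urlopen(req) as resp:
    print(resp.status)                              # 200
server.shutdown()
```

Note how both requests target the same URI; only the verb changes, which is the "uniform interface" idea listed below.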

REST – Source – https://www.slideshare.net/david.krmpotic/rest-david-krmpotic

Characteristics of REST:

  • Client–Server Architecture
  • Statelessness
  • Cacheability
  • Layered System
  • Uniform Interface

Another popular choice in this category is SOAP (Simple Object Access Protocol), which depends on XML for requests and responses. That protocol turns out to be heavyweight for many applications, and in those cases REST is the right choice. SOAP provides built-in error handling, which makes life easier for developers; REST has no defined standard, and hence no well-defined mechanism for built-in error handling. SOAP also provides a clear definition of a web service using WSDL (Web Services Description Language), whereas REST has no such thing: the contract is largely driven by the implementation team.

Since there is no “official” standard for RESTful web APIs, REST is referred to as an architectural style, not a “standard” or “protocol”, unlike SOAP.

Related Keywords:

SOAP, Software Architecture

Microservices Architecture

As the name suggests, Microservices Architecture composes an application from multiple smaller, independent and clearly scoped services. Each service can be deployed independently, and services interact with each other through well-defined APIs. This improves the maintainability and scalability of the overall architecture.

While designing a microservice, one should use end-to-end business capability as the criterion, not the size of the service. If the overall functionality is not broken into smaller services correctly, the result is interdependent services, which in turn forfeits the inherent advantages of this architecture.

Advantages of Microservices Architecture:

  • Separate teams can maintain each service independently. This brings faster turnaround, as each team has a clear, well-defined scope and full control over its deliverables.
  • It is very easy for developers to change one service without affecting other services, as long as the intercommunication protocol is maintained. This gives development teams much-needed independence and makes them accountable for their own work.
  • A single service, or a set of services, can be scaled up when business needs demand it. This avoids one portion of the software becoming a bottleneck, as happens with a monolithic architecture.
  • Improved fault tolerance: if one service fails, the others can continue providing the rest of the functionality, resulting in higher availability. For example, if a movie recommendation service fails, the streaming and category-browsing services can carry on.
Microservices Architecture – Source – http://microservices.io/patterns/microservices.html

Drawbacks:

  • If services depend on each other, they can introduce bottlenecks and even bring down the overall architecture. Where services must call other services, fault tolerance needs to be built in.
  • Since each service is a separate process, services communicate with each other over the network, which adds latency. System architects need to account for this when designing services that require very fast response times.
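The first drawback above is commonly mitigated in the calling code: each cross-service call is wrapped so that a failed dependency degrades gracefully instead of taking the caller down with it. A minimal sketch, where fetch_recommendations is a hypothetical stand-in for a remote call to another service:

```python
def with_fallback(call, fallback):
    """Call a dependent service; if it fails, degrade to a default answer."""
    try:
        return call()
    except Exception:
        # In a real system this would also log the failure and perhaps
        # trip a circuit breaker; here we simply degrade gracefully.
        return fallback

def fetch_recommendations():
    """Hypothetical remote call to a recommendation microservice."""
    raise ConnectionError("recommendation service is down")

# The page still renders: a failed recommendation call yields a default list
# instead of propagating the error to streaming or browsing.
print(with_fallback(fetch_recommendations, fallback=["popular titles"]))
# -> ['popular titles']
```

Production systems typically add timeouts, retries and circuit breakers around this same pattern so that a slow dependency is treated like a failed one.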

Related Keywords:

Monolithic Architecture, Cloud

Hypervisor

Hypervisor literally means a supervisor of supervisors. It is hardware or software that allows one to create and control one or more virtual systems. Hypervisors are also known as Virtual Machine Monitors (VMMs).

The computer on which a hypervisor runs is called the “host machine”, and its operating system is referred to as the “host OS”. A hypervisor allows one or more virtual systems to be created on top of the host machine’s hardware. Each virtual system may run a different operating system, referred to as a “guest OS”.

The need to use hardware more efficiently gave rise to hypervisors. They emulate the underlying hardware for the virtual machines according to each machine’s configuration.

It is easy to confuse VMs with containers, but there is a clear distinction between the two: VMs bundle an operating system within them, whereas containers do not.

Types of Hypervisor

  • Bare-metal (Type 1) – Installed directly on the hardware, with no host OS at all; guest operating systems are installed directly on top of it. These hypervisors offer speed and efficiency because they work directly off the hardware, with deterministic response times, a smaller memory footprint and fine-grained control.
  • Hosted (Type 2) – These reside on top of a host operating system. They are preferred where speed is a lesser consideration but manageability matters more. Additionally, they are available on a wider range of underlying hardware and are easier to set up and install.

Type-1 and type-2 hypervisor - By Scsami (Own work) [CC0], via Wikimedia Commons

Examples of Hypervisor:

The open-source Xen, Citrix XenServer and VMware ESX.

Related Keywords:

Containers, Virtualization, Cloud, Paravirtualization

Containers

Virtualization is a technique for dividing the resources of a computer into multiple execution environments. Virtualization can happen at the hardware level or at the OS level. Containers are the latest trend in this space; although the concept is quite old, dating back to the Unix days, this type of virtualization has gained momentum since Docker, Inc. introduced its technology.

What Are Containers?

Containers provide an isolated space in which to run a specific application, or set of applications, on the underlying host OS through OS-level virtualization. OS-level virtualization allows multiple applications to run “contained” in isolated spaces, hence the name; the isolation also provides security. Containers are easy to confuse with virtual machines (VMs), but the two differ.

VMs host their own OS (the guest OS) within themselves. They run on top of another OS (the host OS) and provide abstraction at the hardware level; the guest OS can differ from the host OS. Because VMs must bundle all their dependent libraries and applications, they turn out to be bulky.

Containers use the underlying host OS through OS-level virtualization. All containers running on a given machine share the kernel of the host OS. Because they can also share libraries with the underlying OS, they turn out to be very lightweight.

Docker, Inc. is a leading provider in this space and has been since 2014.

Containers vs. VMs – Source – http://www.virtualizationsoftware.com

As the diagram above shows, applications running in VMs are bulky because they have an entire OS bundled with them, whereas containers are lightweight, so multiple instances can run on a single host OS.

Wikipedia Link:

Containers, a.k.a. OS-level virtualization

Reference Links:

Primer from Docker – https://www.docker.com/what-container

Comparison with VMs – https://blog.netapp.com/blogs/containers-vs-vms/

Related Keywords:

Virtualization, Hypervisor, Docker, VMWare, Cloud