What are Containers and Why Do You Need Them?

April 17th, 2018 | 8 min read

Containers exist because they solve an important problem: how to make sure that software runs correctly when it is moved from one computing environment to another.

In an agile, DevOps world, this has become more critical than ever. Agile methodologies depend on frequent, incremental code changes, which in turn require frequent testing and deployment.

A DevOps engineer has to manage the IT infrastructure that supports this code, often in a hybrid, multi-tenant environment: provisioning the required resources, selecting an appropriate deployment model, and validating and monitoring performance.

In particular, DevOps engineers frequently move software from a test environment to a production environment, and the two environments are rarely identical.

For example, a different version of a required software library may be running. Or maybe the target machine uses a different compiler or loader. Incompatibilities of this kind can cause headaches for DevOps engineers.
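To make this concrete, here is a minimal sketch of how a Dockerfile pins an entire environment so that test and production run an identical stack. The application, image tag, and file names here are hypothetical, not taken from the article:

    # Pin the interpreter and library versions once; the same image then runs
    # unchanged in every environment (assumes a hypothetical Python app).
    cat > Dockerfile <<'EOF'
    # Pinned base image: same interpreter version everywhere
    FROM python:3.6-slim
    WORKDIR /app
    # requirements.txt pins library versions, avoiding mismatches
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    EOF
    docker build -t myapp:1.0 .   # build once, run anywhere Docker runs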

The Initial Solution: Virtualization

One well-known solution to this problem is virtualization. Virtualization has been around for decades and is almost universal in data centers of all sizes. How does virtualization solve the problem?

In a traditional data center, each server runs a single operating system, which is inefficient: the operating system is typically large and must be installed and maintained on every machine. Virtualization allows multiple operating systems to run completely independently on a single machine.

Virtualization uses specialized software called a hypervisor, which encapsulates a guest operating system and emulates hardware resources such as CPU, memory, and disk so that they can be shared across multiple virtual machines. In this way, multiple operating systems can run on a single physical machine.
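As an illustration, here is a rough sketch of starting a guest operating system under QEMU/KVM, a widely used open source hypervisor. The disk image and installer ISO names are placeholders:

    # Create a 20 GB virtual hard disk for the guest
    qemu-img create -f qcow2 guest-disk.qcow2 20G
    # Boot a guest OS with 2 virtual CPUs and 4 GB of emulated memory
    qemu-system-x86_64 -enable-kvm -smp 2 -m 4096 \
        -hda guest-disk.qcow2 -cdrom ubuntu-installer.iso

Every resource the guest sees here (CPU, memory, disk) is virtual hardware presented by the hypervisor.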

The benefits of this approach are clear: a reduced need for physical hardware, lower maintenance costs, and correspondingly lower power and cooling requirements.

Another big advantage is the ability to provide business units with near-instant capacity when needed: server virtualization enables elastic provisioning and deployment at a moment's notice. And because the application and its operating system environment are encapsulated together, compatibility problems largely disappear.

Extending the Virtualization Idea: Containers

The containerization revolution is really an extension of the virtualization approach, building on core capabilities of the Linux kernel such as control groups (originally contributed by Google) and namespaces. In fact, the first real containerization capability was Linux Containers, commonly known as LXC. But it was the emergence of Docker, an open source container platform, that gave the category as a whole a big boost, achieving broad industry buy-in for the container approach.
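For a feel of what LXC looks like in practice, here is a sketch using the classic LXC command-line tools. The container name and image parameters are illustrative, and exact templates vary by distribution:

    # Create a container from a downloaded Ubuntu image
    sudo lxc-create -n demo -t download -- -d ubuntu -r bionic -a amd64
    sudo lxc-start -n demo       # start the container
    sudo lxc-attach -n demo      # get a shell inside it
    sudo lxc-ls --fancy          # list containers and their state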

Containerization takes the idea a step further: it achieves far greater efficiency than virtualization by eliminating the hypervisor and its multiple virtual machines.

So how does it work?

In the traditional virtualization model, the hypervisor creates and runs multiple guest operating system instances on a single physical machine, all sharing its hardware resources.

The container model eliminates the hypervisor entirely. Instead, an application and all of its dependencies are packaged into a container: not just the application itself, but everything it needs to run, including the runtime, system libraries, and configuration. Each application shares a single instance of the operating system and runs on the “bare metal” of the server.
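A quick way to see this kernel sharing in action: every container on a host reports the host's own kernel, no matter which distribution's userland it carries. A minimal sketch:

    uname -r                               # kernel release on the host
    docker run --rm alpine uname -r        # same kernel, Alpine userland
    docker run --rm ubuntu:18.04 uname -r  # same kernel, Ubuntu userland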

All the containers share the resources of a single operating system, and there is no virtualized hardware. Because the operating system is shared, containers are much more lightweight than traditional virtual machines, and it's possible to host far more containers than fully fledged virtual machines on a single host. Another advantage is that containers, sharing a single operating system kernel, start up in seconds instead of the minutes required to boot a virtual machine. Containers are also very easy to share.
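The speed and shareability are easy to demonstrate. In this sketch, the registry host and repository name are placeholders:

    # Start-up cost: a container typically starts in well under a second
    time docker run --rm alpine true
    # Sharing: push an image to a registry; anyone can pull the identical environment
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0
    docker pull registry.example.com/myapp:1.0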

The following diagram from Docker explains the difference between containers versus VMs very clearly:

[Diagram: Breakdown of Containers vs. VMs]

Containerization Beyond Docker

Although Docker is by far the best-known containerization technology, it is not the only product worth considering. New platforms have emerged and, in a sign of the category's maturity, the Open Container Initiative (OCI) was formed in 2015 as a nonprofit foundation promoting open standards for container formats and runtimes. Docker is a big supporter, having contributed its container runtime to the foundation. Google, Amazon, Facebook, IBM, and Red Hat are also contributors and supporters.

An extensive list of containerization products is available on TrustRadius.

Here is a list of major products in the category, including Docker:

  • Docker: As the leading product in the category, Docker has the advantage of a large user community providing excellent support. In addition, reviewers on TrustRadius praise the platform for its simple interface, ease of sharing, version control, and well-documented API. On the downside, reviewers complain that debugging can be difficult.
  • Kubernetes: Of the alternative products, Kubernetes is arguably the best known. Originally developed at Google, this open source product has become a leading container orchestration platform and can easily be run on AWS or Google Cloud. Kubernetes is highly rated on TrustRadius and praised for its strong built-in configuration capabilities, micro-service containerization, extensive documentation, rapid deployments, and scheduling (see the sketch after this list). However, the product does have a steep learning curve.
  • CoreOS and rkt: rkt is a container format and runtime alternative to Docker, developed by CoreOS. It supports a variety of container image formats, including Docker's. While still in an early development phase, some developers feel that rkt is a more secure container technology and goes some way toward overcoming flaws in the Docker container model.
  • Apache Mesos and Mesosphere: Mesosphere is a software solution that expands the cluster management capabilities of Apache Mesos with additional components for managing server infrastructure. Marathon, which runs on Mesos, is a production-grade container orchestration platform.
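As referenced in the Kubernetes item above, here is a minimal sketch of its declarative workflow: describe the desired number of container replicas in a manifest and let the platform schedule them. The deployment name and image are placeholders:

    cat > web.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                  # desired number of container instances
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:alpine
            ports:
            - containerPort: 80
    EOF
    kubectl apply -f web.yaml      # create or update the deployment
    kubectl get pods               # three replicas, scheduled by Kubernetes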

Containerization technology is growing rapidly, and adoption is expected to remain extremely robust for the next several years. This success comes down to the fact that containerization solves real IT problems.

While virtualization continues to be a very popular technology with massive adoption, virtual machine hypervisors rely on hardware emulation, which makes them comparatively heavyweight and inefficient. Containers, on the other hand, share the host operating system, which makes them much leaner and more efficient. In general, it's possible to run many more application instances in containers than in virtual machines on the same hardware.

IT administrators worried about hardware bloat in the data center, and DevOps teams worried about application encapsulation and portability, should be looking at this technology now.

While Docker has become almost synonymous with containerization, there are valid alternatives. TrustRadius features reviews of a number of containerization products and is a very good place to begin research into this exploding category.
