Containers? Do we really need them?

Thilina Viraj
4 min read · Apr 15, 2019

Hello there! It’s been a while.

This time I’m going to evaluate whether we actually need to containerize our applications, or whether we can keep moving forward in a VM/bare-metal environment. Since this is biased towards my own experience and observations, I’d love to hear your input as well. Do comment below :)

In the earlier days, applications ran on dedicated machines/servers; now most of our enterprise applications run on VMs. In my case, about 70% of our applications run in our internal data centers, and we also buy third-party server space to host applications. We have already passed the bare-metal application era and now run VM-based deployments.

There are many levels of virtualization, but here I’ll consider only the model below, which is available on most existing platforms/cloud providers: each VM runs on a hypervisor as an isolated platform.

source: insights.sei.cmu.edu

The infrastructure above has fed us for many years. It helps in many ways, such as cost, uptime, DR, and faster deployments than the bare-metal approach. Since each VM is an isolated environment, it also provides more security than a bare-metal machine. At the same time, it consumes more processing power and memory.

As the business grows, you may need to scale up the infrastructure to keep operations smooth. You can simply clone an existing setup and extend the services.

But an increased number of servers ends up putting more burden on administrators and operations teams. Hence people look for solutions that let them manage their enterprise applications and deploy new versions without any interruption to the production environment.

Finally, there’s a concept called “containers”. It can be seen as the next level of the basic virtualization approach. If we weigh the pros and cons of VMs against containers, infrastructure and packaging are much easier with containers. But containers run on a base engine; in Docker, it is called the Docker Engine.

The diagram below, which I grabbed from a post published by GitLab, clearly shows the differences between each approach.

Source: about.gitlab.com

Since containers share the host kernel, there can be security concerns at the native level. But most service providers have identified these vulnerabilities and tried to address them in their solutions. In addition, cgroups and namespaces are there to isolate workloads and control resources.
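On a Linux host you can actually see these two kernel primitives directly. Here’s a small sketch (assuming a Linux machine with a `/proc` filesystem) that inspects the namespaces and cgroup membership of the current process:

```shell
# Each entry under /proc/self/ns is a namespace this process belongs to
# (mnt, pid, net, uts, ipc, user, ...). A container runtime gives each
# container fresh namespaces, so it sees its own isolated view.
ls /proc/self/ns

# cgroup membership determines the CPU/memory/IO limits applied to the
# process; container engines place each container in its own cgroup.
cat /proc/self/cgroup
```

On a plain host these show the default namespaces and cgroup; run the same commands inside a container and you’d see a different, isolated set.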

Managing containers is much easier than managing multiple VMs: you can simply create, destroy, and modify them. But that same convenience cuts both ways. If an attacker gains access as an authenticated user, he can just as easily modify or remove containers, and ultimately destroy your information much more easily than with VMs. People try to address these problems by defining separate firewall rules and monitoring mechanisms for their engines/containers.

If we categorize containerization by solution provider, the main options are:

  1. Docker
  2. OpenVZ
  3. rkt (Rocket)
  4. Virtuozzo

When should I apply these technologies?

From my point of view, if your business team has dynamic requirements and you do 2–3 deployments every week, moving to a containerized environment would be the ideal solution. If you really want fast continuous integration/deployment, containers will remove a huge burden from your tech teams. But if you have only 1–2 deployments every 6–7 months and you don’t receive dynamically high traffic volumes (where it’s difficult to track demand and add/remove hardware from VMs), you may well stay on the well-known VM infrastructure.

Migrating from VMs to a containerized environment will cost you a considerable amount of money, and most importantly you need healthy engineering muscle as well. The points above may vary for your business/organization. Would love to hear how this plays out in your organization. Do drop a comment below :)

Since Docker is the most widely used platform, from here onwards I’ll discuss Docker containers. It’s a platform that developers/sysadmins can use to develop, deploy, and run applications with containers.

Containerization is increasingly popular because containers are:

  • Flexible: Even the most complex applications can be containerized.
  • Lightweight: Containers leverage and share the host kernel.
  • Interchangeable: You can deploy updates and upgrades on-the-fly.
  • Portable: You can build locally, deploy to the cloud, and run anywhere.
  • Scalable: You can increase and automatically distribute container replicas.
  • Stackable: You can stack services vertically and on-the-fly.

I thought of sharing the properties above because they neatly explain the capabilities of Docker. I just copied them from the official Docker website. :)

In addition, below are some keywords that may be useful.

Docker images - Assume you are going to create a container for your application. The image contains the file system and configuration related to that application.

Containers - A container is a running instance of a Docker image. It includes the application and all its dependencies.

Docker daemon - A background service that runs on the host and manages the life cycle of Docker containers.
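To make these terms concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python app (the file name `app.py` and the image tag below are assumptions, not from any real project):

```dockerfile
# Image: a layered, read-only template built from these instructions
FROM python:3-alpine

# Copy the (hypothetical) application into the image's file system
COPY app.py /app/app.py

# Command the container (a running instance of this image) will execute
CMD ["python", "/app/app.py"]
```

Running `docker build -t my-app .` asks the Docker daemon to build the image, and `docker run my-app` asks it to start a container from that image.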

The image below will give you a better understanding of how a Docker container works.

Source: docker.com

That’s it for the day. In the next blog post, I’ll discuss how we can containerize our applications with Docker.

Do post your thoughts below :)
