Docker and Its Ecosystem

Kailash Verma
6 min read · Apr 13, 2021

Docker is now widely used across industries for fast, reliable deployment of applications packaged together with all of their dependencies and configuration. Before we discuss Docker in more detail, let's look back at the time before we followed DevOps practices and before Docker came into the picture, when we had the following pain areas:

Portability across different Dev, Test and Prod environments : With multiple environments, we faced the common problem of "It works on my machine, why not there?". Manually configuring environments could lead to missing configuration, dependencies or packages, resulting in application breakdown.

Golden Images : Higher storage and maintenance costs due to large golden images containing both the operating system and the application configuration.

Higher Cost : Higher cost of machines/virtual machines and the underlying hardware such as CPU and RAM. Managing machines at larger scale was not easy.

Manual Release Process : The manual release process was time-consuming and affected the productivity of engineers. It also affected the high availability and manageability of applications in live environments.

What is Docker ?

Docker is a container platform that helps us automate and package an application's dependencies and configuration into Linux Containers (LXC).

  • Containers bundle their own software, libraries and configuration
  • Containers run anywhere
  • The same container can run on any laptop, desktop, virtual machine, cloud & more
  • Unlike virtual machines, a container does not include a separate operating system.

The custom configuration of an application package is done via a Dockerfile. Writing a Dockerfile is easy, and once the Dockerfile is ready we can build a Docker image from it. A Docker image can be shared and shipped to multiple environments using a Docker registry (Docker Hub, AWS ECR, a private registry, etc.). One Docker image can be executed on many servers as containers. Now let's discuss Dockerfiles, images and containers in detail.
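As a rough sketch of that workflow (image and registry names here are placeholders, not from the article, and a running Docker daemon is assumed):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Tag and push it to a registry (Docker Hub, AWS ECR or a private registry all work the same way)
docker tag myapp:1.0 myregistry.example.com/myapp:1.0
docker push myregistry.example.com/myapp:1.0

# On any server: pull the image and run it as a container
docker pull myregistry.example.com/myapp:1.0
docker run -d --name myapp myregistry.example.com/myapp:1.0
```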

Docker Container vs Virtual Machine

The left side of the image shows the architecture of a virtual machine deployed on a host server. Most of us have used virtual machines at some point. A virtual machine requires a hypervisor like Oracle VirtualBox, VMware or Hyper-V. On top of that hypervisor, we have to install a guest operating system, which requires significant RAM and disk space (gigabytes of disk for the OS, plus the RAM allocated to each VM). To manage those virtual machines, the host OS also needs RAM and disk space. Overall, we need a very high-end host system for efficient, fast performance of an application deployed on a virtual machine. That brings limitations: higher cost, lower efficiency and more time spent on configuration and system readiness.

The right side of the image shows the architecture of an application deployed with Docker on a host server. Docker containers don't require a separate operating system: a container contains just the bins and libs of the OS configured in the Dockerfile as the base image, and these bins/libs use the host OS kernel for all of their functionality. The bins/libs are only megabytes in size. The Docker Engine is installed on the host operating system and does the work of a hypervisor here, handling communication between the host and the containers. Hence, far less disk space and RAM are needed, since no OS runs inside a container, compared to the resources required by VMs. The result: lower cost on system resources, and easy, fast deployments and configuration.

Why Docker?

  • Small disk/CPU requirements compared to VMs
  • Portability : a Docker image can be shipped easily to multiple platforms.
  • Supported by all cloud providers
  • Easy to create an image from a Dockerfile

Dockerfile, Image and Container

When we build a Dockerfile, we get a Docker image. The image can be shipped anywhere. When we run the image with the required parameters, environment variables, etc., it becomes a container (a process). In short, we can say an image is a passive entity, while a container is an active entity.

Dockerfile

Sample Dockerfile for reference
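The original sample image is not reproduced here; a minimal Dockerfile matching the description below (an Ubuntu 14.04 base image and a CMD for service startup) might look like this — the nginx package and command are illustrative assumptions, not from the article:

```dockerfile
# Base image: provides the bins/libs of Ubuntu 14.04
FROM ubuntu:14.04

# Install the service this image will run (illustrative: nginx)
RUN apt-get update && apt-get install -y nginx

# CMD is executed at container run time for service startup
CMD ["nginx", "-g", "daemon off;"]
```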

A Dockerfile starts with a base image (FROM). This base image provides the bins/libs of the OS we have chosen; in this case it's Ubuntu 14.04. The CMD instruction is executed at run time to start the service inside the container. Please refer to the Docker documentation for more details on the instructions available for custom configuration.

Containers

Before Docker came into existence, containers (LXC) were already used by many reputed organizations. Docker made them easier to use by providing a wrapper around Linux Containers.

What is a Container?

  • “Unit of deployment”
  • LXC — OS virtualization
  • Lighter than VM/Golden Images

Why Containers ?

  • Boots in seconds
  • 100–1000 containers on one Machine/VM
  • Same Container can be deployed to any Dev, Test & Production servers

The Matrix From Hell

The problem of managing applications across multiple environments with multiple dependencies.

One Docker Container can be used on multiple environments with all of its dependencies and configurations.

Docker Eliminates the Matrix From Hell

We don't need to rebuild our applications for each environment. Build once, ship anytime, run anywhere.

Docker Ecosystem

The diagram above shows the Docker ecosystem: numerous tools and technologies that provide a platform to deploy and manage our applications in production-like environments.

Docker Storage

  • All data inside an image is read-only and stateless
  • Once a container is deleted, all of its associated data is deleted too
  • Data can be persisted by mounting a volume
  • Command : docker run -v data_volume:/var/lib/ <mysql_image>
  • This saves the container's data in the data_volume volume on the host
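A quick sketch of working with a named volume (the volume name follows the article; the image placeholder is kept as-is, and a running Docker daemon is assumed):

```shell
# Create a named volume and run a container that writes into it
docker volume create data_volume
docker run -d --name db -v data_volume:/var/lib/ <mysql_image>

# The volume outlives the container: remove the container, the data remains
docker rm -f db
docker volume inspect data_volume
```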

Docker Compose

What if you want to run 50 containers at a time? Obviously we will not run the docker run command 50 times :) . That's where Docker Compose comes in. With Docker Compose, all of the configuration is written in a YAML file. Below is an example of a Docker Compose file:

  • Compose is a tool for defining and running multi-container Docker applications
  • Build, run and stop multiple containers at a time with your configuration
  • A YAML file is used to configure the application's services
  • Command : docker-compose -f <file_name> <options>
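The referenced example file is not reproduced here; a minimal Compose file might look like this — the service names, images and ports are illustrative assumptions:

```yaml
version: "3"
services:
  web:
    build: .              # build the image from the local Dockerfile
    ports:
      - "8080:80"         # host:container port mapping
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - data_volume:/var/lib/mysql   # persist database data in a named volume
volumes:
  data_volume:
```

Running `docker-compose up -d` in the directory containing this file would build and start both services together.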

Docker Commands

To get started with Docker, below is a list of a few commands. Start playing with containers and enjoy a new journey into virtualization.
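The original command-list image is not reproduced here; these are a few standard Docker CLI commands to start with (they assume a running Docker daemon):

```shell
docker pull ubuntu:20.04            # download an image from a registry
docker images                       # list local images
docker run -it ubuntu:20.04 bash    # start a container with an interactive shell
docker ps -a                        # list running (and stopped) containers
docker exec -it <container> bash    # open a shell in a running container
docker logs <container>             # view a container's output
docker stop <container>             # stop a running container
docker rm <container>               # remove a stopped container
docker rmi <image>                  # remove a local image
```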

Docker value addition

Let's summarize Docker's value addition and how it addresses the pain areas we discussed earlier:

  • Same package for multiple environments — QA, Stage, Production
  • Faster to create, manage and snapshot environments
  • Portability of apps
  • Easy to build, test and deploy apps
  • Deployment automation; faster and more frequent releases
  • Lower cost on storage & CPU
  • Zero downtime : supports multiple release strategies
  • Inexpensive : open source, supported out of the box by modern Linux kernels
  • Ecosystem : prebuilt images and apps available on Docker Hub, a vibrant community, numerous 3rd-party app integrations, and support from all major cloud providers — AWS, VMware, Google App Engine, Azure

There are many other features and functionalities available in Docker. Please refer to the official Docker documentation for more details.

