Since its release in 2013, Docker has been widely discussed in DevOps communities, and its accelerating adoption is undeniable. From Microsoft to Oracle, proprietary software companies leverage Docker, in addition to the countless IT and cloud companies fueling the growth of the application container market.
Docker is a tool that enhances application workflows by using containers. Containers allow a developer to package an application, along with all of its libraries and other dependencies, into a single deliverable.
It all starts with virtual machines and software. You install virtual machine software like VirtualBox on the host OS and then create a virtual environment sharing your resources (e.g. CPU, RAM, etc.), installing any Guest OS within this virtual space.
Problem: Virtual machines require a lot of space, memory, and other resources.
Solution: Docker applies the same concept of virtualization, but at the application and file-system level rather than the hardware level. A traditional virtual machine presents CPU, RAM, IO, and other hardware as virtual resources to a Guest OS. Docker instead shares the host's kernel and presents the file system as a virtual resource, packaging each virtual environment into a unit called a container. Each container runs in isolation from the host OS and from other containers.
What are the benefits of using Docker?
First and foremost, Docker solves the "but it works on my machine" problem. If you can run your code in a container, you can run it anywhere else: it's that simple.
Docker alleviates the pain of installing, maintaining, and setting up dependencies, like databases. You can spin up dependencies at will, and cleaning them up becomes a much smoother process.
Docker also speeds up shipping: you build your code into a container once and ship it to multiple environments, like dev, QA, staging, or production, saving time and resources.
Newly on-boarded developers benefit, too: Docker is easy to set up and manage because there are no manual setup steps to walk through.
How to set up Docker:
Docker is available for all major operating systems, and the installation process is pretty straightforward. You can install the Docker Community Edition from the Docker website. (Note: if you have an older version of Windows or macOS, or Windows Home, that is not supported by the Community Edition, use Docker Toolbox, which runs the containers inside a virtual machine.)
Following installation, run an nginx server (nginx is a simple web server with load-balancing and reverse-proxy features):
docker run -d -p 8000:80 nginx
With this command, Docker checks whether it already has the nginx image locally. If the image is missing, it is downloaded from the registry (the default registry is Docker Hub). The -d option runs the container in the background, and the -p option maps port 8000 on the host to port 80 inside the container (if you don't publish the port, you cannot access the endpoint). Additionally, if you do not specify a name, a random name is generated for the container.
Now open localhost:8000 in your browser, and you can see the default nginx page:
If you need a database for your project, just go to Docker Hub, find the official image for the database you need, and run it. For example, to start a MongoDB instance:
docker run -d -p 27017:27017 mongo
To see the running process, use this command:
docker ps
CONTAINER ID IMAGE COMMAND                CREATED        STATUS        PORTS                  NAMES
cf927d64e40b nginx "nginx -g 'daemon ..." 11 minutes ago Up 11 minutes 0.0.0.0:8000->80/tcp   focused_mcnulty
To stop the server:
docker kill focused_mcnulty
It's important to note that the command can grow long if you have a complicated setup, so Docker provides the Dockerfile, which lets you define your infrastructure as code. This saves you from configuring your tech stack through manual steps on the command line.
For example, a simple Node.js application can be executed in a Node.js container like this:
FROM node:8.4
WORKDIR /usr/src/app
ADD . /usr/src/app
CMD npm start
Let’s go through the file line by line:
FROM node:8.4 says this image will be built on top of the node:8.4 base image.
WORKDIR /usr/src/app says make this my working directory, and run the following commands there.
ADD . /usr/src/app says copy all the contents of my current directory on the host OS into the /usr/src/app directory of the container.
CMD npm start says run npm start when a container is started from this image.
Next, build an image from the Dockerfile we just wrote:
docker build -t nitinsreeram/my-node-app .
Once the image is created, you can use the docker images command to see all the images on your computer:
docker images
REPOSITORY               TAG    IMAGE ID     CREATED     VIRTUAL SIZE
nginx                    latest 07f8e8c5e660 2 weeks ago 188.3 MB
nitinsreeram/my-node-app latest 2ac5d95f10cc 4 hours ago 123 MB
Now you can push this image to a registry (docker push nitinsreeram/my-node-app, after logging in with docker login) and pull it in any QA/staging/production environment, and it will run.
For the development environment, in addition to the Dockerfile, you can write a docker-compose file to compose all the images required to run the project, from databases to image processors, in one file. Then you can start or stop them all at once.
The Docker compose file with node and its dependency would look like this:
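The original embedded file is not reproduced here, so below is a minimal sketch of what such a docker-compose.yml could look like, assuming the nitinsreeram/my-node-app image built above, MongoDB as the dependency, and a hypothetical MONGO_URL environment variable that the app reads (the port numbers for the Node app are likewise assumptions):

```yaml
version: "3"
services:
  app:
    # The image built earlier with `docker build -t nitinsreeram/my-node-app .`
    image: nitinsreeram/my-node-app
    ports:
      - "8000:8000"   # host:container port for the Node app (assumed)
    environment:
      # Hypothetical variable; services reach each other by service name.
      - MONGO_URL=mongodb://mongo:27017/mydb
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "27017:27017"
```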
This file configures the Node app and MongoDB, along with the ports and environment they will run with.
To start all the services, use this command:
docker-compose up -d
To stop them, use this command:
docker-compose down
It takes a little work to set Docker up for the first time, but once everything is in place, it is easy to build, start, and deploy your project. Docker continues to build momentum within app development, and it will be exciting to see how it evolves and adjusts to accommodate dev trends.