Since its release in 2014, Docker has been widely discussed in DevOps communities, and its accelerating adoption is undeniable. From Microsoft to Oracle, proprietary software companies leverage Docker, in addition to the countless IT and cloud companies that fuel the growth of the application container market.
But what about using Docker in the developer environment on your single-developer machine?
Would it ease your back-end dev project workflow?
Is it worth the time and effort to learn and use?
Docker is a tool that streamlines application workflows by using containers. Containers let a developer package an application, together with all of its libraries and other dependencies, into a single deliverable.
It all starts with virtual machines. You install virtual machine software such as VirtualBox on the host OS, create a virtual environment that shares your hardware resources (e.g. CPU, RAM, etc.), and install a guest OS inside this virtual space.
Problem: Virtual machines require a lot of space, memory, and other resources.
Solution: Docker takes the same concept of virtualization but virtualizes at the application and file-system level rather than at the hardware level. In a traditional virtual machine, CPU, RAM, IO, and more are provided as virtual resources to the guest OS. In Docker, containers share the host's kernel, and the file system is provided as a virtual resource. These virtual environments are packaged into isolated spaces called containers, and each container runs in isolation from the host OS and from other containers.
Docker is available for all major operating system platforms, and the installation process is straightforward. (Note: if you have an older Windows or Mac machine, or Windows Home, that is not supported by the Community Edition, use Docker Toolbox, which runs the containers inside a virtual machine.) You can install Docker Community Edition from the Docker website.
Following installation, run an nginx server (nginx is a simple web server with load-balancing and reverse-proxy features):
docker run -d -p 8000:80 nginx
With this command, Docker checks whether it already has the nginx image locally. If the image is missing, it is pulled from the registry (the default registry is Docker Hub). The -d option runs the container in the background, and the -p option maps port 8000 on the host to port 80 in the container (if you don't publish the port, you cannot reach the endpoint). Additionally, if you do not specify a name, a random name is generated for the container.
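If you want an explicit name instead of a generated one, you can pass --name (the name web used here is just an illustrative choice):

```shell
# Run nginx in the background, publish host port 8000,
# and name the container "web" instead of letting Docker generate a name
docker run -d -p 8000:80 --name web nginx
```

You can then refer to the container as web in later commands, for example docker stop web.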
Now open localhost:8000 in your browser, and you will see the default nginx welcome page.
If you need a database for your project, go to Docker Hub, find the official image for the database you need, and run it.
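For instance, MongoDB can be started from its official image like this (the container name my-mongo is an arbitrary example; 27017 is MongoDB's default port):

```shell
# Pull (if needed) and run the official MongoDB image,
# publishing its default port 27017 to the host
docker run -d -p 27017:27017 --name my-mongo mongo
```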
To see the running containers, use this command:
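With the Docker CLI, that is docker ps; adding -a also includes stopped containers:

```shell
# List running containers (names, ports, status, etc.)
docker ps

# List all containers, including stopped ones
docker ps -a
```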
It’s important to note that the run command can grow long if you have a complicated setup, so Docker provides the Dockerfile, which lets you define your infrastructure as code. This saves you from configuring your tech stack through manual steps on the command line.
For example, a simple Node.js application can be packaged and executed in a Node.js container like this:
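A minimal sketch of such a Dockerfile, assuming the app's dependencies are listed in package.json and its entry point is index.js on port 3000 (all illustrative assumptions):

```dockerfile
# Start from the official Node.js base image
FROM node:18-alpine

WORKDIR /app

# Copy the dependency manifest first so this layer is cached
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# The port the app is assumed to listen on
EXPOSE 3000

CMD ["node", "index.js"]
```

Build the image with docker build -t my-node-app . (the tag my-node-app is an arbitrary example).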
Once the image is built, you can use the docker images command to list all the images on your machine:
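The listing shows repository, tag, image ID, creation time, and size for each image (the exact entries depend on what you have built or pulled):

```shell
# List all locally stored images
docker images
```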
Now you can push this image to a registry and pull it in any QA/staging/production environment, where it will run the same way.
For the development environment, in addition to the Dockerfile, you can write a docker-compose file that brings together all the images required to run the project, from databases to image processors, in one file. Then you can start or stop everything at once.
A Docker Compose file with Node and its dependencies might look like this:
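A sketch of such a docker-compose.yml; the service names, ports, and the MONGO_URL environment variable are illustrative assumptions, not fixed names:

```yaml
version: "3"
services:
  app:
    build: .                # build from the Dockerfile in this directory
    ports:
      - "3000:3000"         # host:container
    environment:
      - MONGO_URL=mongodb://mongo:27017/app   # reach the DB by service name
    depends_on:
      - mongo               # start the database before the app
  mongo:
    image: mongo            # official MongoDB image from Docker Hub
    ports:
      - "27017:27017"
```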
This file configures Node and MongoDB, along with the ports and environment variables they will run with.
To start, use this command:
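With Docker Compose, run this from the directory containing docker-compose.yml:

```shell
# Build (if necessary) and start all services in the background
docker-compose up -d
```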
To stop, use this command:
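The counterpart command stops and removes everything that up created:

```shell
# Stop and remove the containers and networks created by "up"
docker-compose down
```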
It takes a bit of work to set Docker up for the first time, but once everything is in place, it is easy to start, build, and deploy your project. Docker continues to build momentum within app development, and it will be exciting to see how it evolves and adjusts to accommodate dev trends.