Have you ever built an application that runs perfectly on your machine but fails when you pass it on to someone else? Why is that? Why does an application work on some machines and not on others?
The simple answer is that the versions of the packages and dependencies used to build the application on your machine are not identical to those on the machine running it. For example, you might build your application against version v18.0.1 of some technology, while your friend has v16.0.6 installed, which can result in the application not performing as expected on their machine.
What if there were a way to package every dependency your application requires into a single entity that can run on other machines as well? That is where Docker comes in: it helps you ship your code fast and gives you control over your applications.
What is a container?
A container is a way to package an application with all the necessary dependencies and configuration. That package is portable, just like any other artifact, so you can easily share and move it around. The portability of containers, with everything packaged into a single, isolated entity, makes development and deployment much more efficient.
How are containers created?
Containers are created from images. An image stores all the information required to create a container. You can find images for many different projects in the public image repository on DockerHub. By downloading the image of a particular project, you can create a container from it.
How containers improve the development process.
Application development before containers.
Before containers, every developer on a team had to install most of the required services directly on their operating system. This can become tedious: there may be many installation steps, and for a large application the dependencies have to be installed on different operating systems, multiple times. Hence the chances of something going wrong are very high.
Application development after containers.
With containers, a developer no longer needs to hunt for various binaries to download. Instead, you simply download the image of the container you want to run, and it comes with all the dependencies and configuration pre-set. All you have to do is use one Docker command to download the image and run the container, and the command is the same regardless of your operating system. This makes setting up your local environment much easier and more efficient than the previous approach. Another benefit of using containers is that you can run different versions of the same technology without any conflict between them.
Container
Now that we understand the concept of a container, let's look at what a container technically is. As we saw earlier, a container is built from an image, so a container is essentially layers of images stacked on top of each other. At the base of most containers is a Linux base image, such as a specific version of Alpine or another Linux distro, which keeps the image size small. On top of the base image sit intermediate images that lead up to the application image that runs in the container.
Docker vs Virtual Machines
To understand how Docker and VMs work at the operating-system level, we first need to understand what an operating system is made up of.
An operating system has two layers: the kernel layer, which is responsible for communicating with the hardware such as the CPU and memory, and the applications layer, which runs on top of the kernel.
Both VMs and Docker are virtualization tools, so the real question is: which part of the operating system do they virtualize?
Docker virtualizes the applications layer, while a VM virtualizes the whole operating system. That means when you run a VM image on your host, it does not use your host OS; it boots its own OS.
Hence, there is a big difference in image size between Docker containers (mostly megabytes) and VMs (mostly gigabytes). The second difference is speed: VMs take time to boot because they have to start an entire operating system and load applications inside it, which is not the case with Docker containers. The third difference is compatibility: you can run a VM image of any OS on any other host OS, but you can't always do that with Docker. If the host OS is Windows-based and you try to run a Linux-based Docker image, it might not be compatible with the Windows kernel. This is in fact the case with older macOS versions and Windows versions below 10. You can find more about compatibility in the official documentation.
Basic Docker Commands
First, we will go to DockerHub and find some images to download as an example.
docker pull
The pull command downloads an image to your local machine.
You can also append a tag after a colon (image:tag) to specify a particular version of Redis to download.
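For example, pulling the Redis image we use throughout this post could look like this (the specific version tag is only an illustrative choice):

    docker pull redis          # pulls the latest Redis image
    docker pull redis:6.2      # pulls a specific tagged version (tag shown here is illustrative)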
If you look closely at the output, you can see some hashes and, beside them, the words "Already exists". Those are the image layers that were already downloaded when we previously pulled the Redis image. This is one good thing about Docker: you don't have to pull the same layers again and again. You download a layer once, and it is reused by every image that needs it.
docker images
This command lists all the images you currently have on your machine.
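Running it takes no arguments:

    docker images              # lists repository, tag, image ID, creation time and size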
docker run
By using this command, you can run a container from an image.
But notice that the command did not exit on its own. This is because, by default, running an image starts the container in attached mode. Pressing CTRL+C will stop the container.
To run the image in detached mode, we can add the -d flag to our command. The command returns immediately, but the container keeps running in the background.
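Continuing with the Redis example, the two variants would look like this:

    docker run redis           # runs a new Redis container in attached mode (CTRL+C stops it)
    docker run -d redis        # runs a new Redis container in detached mode and prints its ID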
docker ps
This command is used to see all the Docker containers that are currently running.
You can also add the -a flag to see all containers, including ones that ran previously and have since stopped.
You can also give a custom name to your container with the --name flag. But keep in mind that using this flag with docker run creates a new container; it does not rename the older one.
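A quick sketch of these commands (the container name my-redis is just an illustrative choice):

    docker ps                              # only containers that are currently running
    docker ps -a                           # all containers, including stopped ones
    docker run -d --name my-redis redis    # new container with a custom name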
docker start
You can start an already-created container with this command. Whenever you use docker run, it creates a new container; but if you use docker start followed by the container's full ID, a unique prefix of the ID (often just the first couple of characters), or its name, it starts that existing container.
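For example, reusing the illustrative my-redis container from above:

    docker start my-redis      # or: docker start <container ID or unique ID prefix>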
docker stop
This will stop the running container.
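For example:

    docker stop my-redis       # stops the running container named my-redis (illustrative name)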
docker rmi
With the help of this command, you can remove an image. If a container is still using the image, Docker will show an error; adding the -f flag, which stands for "force", deletes the image anyway.
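For example, to remove the Redis image we pulled earlier:

    docker rmi redis           # fails if a container is still using the image
    docker rmi -f redis        # forces the removal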
docker rm
With the help of this command, you can remove a container.
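For example:

    docker rm my-redis         # removes the stopped container named my-redis (illustrative name)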
The -p flag
As you can see, we have two containers running and both of them use port 6379 by default. We cannot expose both of them on the same port, since that would create a conflict between them. To avoid this, we need to bind a port from our host to the container's port.
To do that, we use the -p flag: on the left we specify the host port we want to reach the container on, and on the right the container's port we want to bind it to. Also, since we are using docker run, this creates and runs a new container rather than updating the existing one.
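A sketch of the port binding, assuming we expose the container's Redis port 6379 on host port 6000 (the host port is an arbitrary illustrative choice):

    docker run -d -p 6000:6379 redis    # host port 6000 -> container port 6379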
docker logs
With the help of this command, you can view a container's logs and troubleshoot it in case something goes wrong.
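For example:

    docker logs my-redis       # prints the logs of the container named my-redis (illustrative name)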
docker exec
With the help of this command, we can open a terminal inside a running container and navigate around in it.
The -it flag runs the command in interactive mode with a terminal attached. As you can see, we can fully navigate inside the container.
You can exit the container's terminal by simply typing exit.
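A minimal sketch, again assuming the illustrative container name my-redis (note that some slimmer images ship only /bin/sh instead of /bin/bash):

    docker exec -it my-redis /bin/bash   # open an interactive shell inside the container
    exit                                 # leave the container's terminal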
What next?
In the upcoming blog on Docker, we will look at the Docker architecture, build our own image, and deploy it to Docker Hub. Until then, you can explore more about Docker in the official documentation at DockerDocs.