Finally understand how Docker works in 5 essential ideas
Have you ever tried using Docker, only to realize how hard it was to get started? I have, but by experimenting with commands I finally figured a few things out. Here are the essential ideas that will help you understand Docker.
Docker is about building apps that always run the same
If you work in a team whose members use different computers (as teams usually do), you may have stumbled upon bugs that appear only on your machine because of configuration differences. For example, Windows and Linux users don’t have access to the same shell commands, and different Linux distributions ship different versions of certain packages.
Docker was created to address this problem: its goal is to provide an environment that runs the same way everywhere, no matter what the underlying machine looks like.
It does so by using images, templates that tell Docker which environment you would like to use.
An image is built layer by layer
Suppose you want to create a reusable environment for (i.e. to “Dockerize”) a simple React app. The first step is to create a file called Dockerfile (with no extension), which explains to Docker how to build your image.
The power of Docker lies in the reusability of images, which are shared on Docker Hub. By building on existing images, you save yourself the pain of creating an environment from scratch.
For our app, we can use the latest version of the node image.
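If you want to download that image ahead of time and have it available locally, you can pull it from Docker Hub yourself:
docker pull node:latest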
The first instruction we add to our Dockerfile tells Docker to fetch this image and to build ours on top of it. Since the node image is itself built on top of another image, with extra functionality added, we say that Docker images are layered. This allows a few developers to maintain efficient core images (like Linux distributions), while everyone else simply adds functionality on top (installing Node and npm, for instance).
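You can see this layering for yourself: docker history lists every layer of an image together with the instruction that created it, which is a quick way to convince yourself that images really are built step by step.
docker history node:latest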
So, let’s create our Dockerfile!
# Start from the official Node.js image as the base layer
FROM node:latest
# Copy the current directory into the image
COPY . .
# Install dependencies when the image is built
RUN npm install
# Start the app when a container is run from the image
CMD ["npm", "start"]
The code is quite self-explanatory, except perhaps for the COPY instruction: it tells Docker to copy your entire working directory into the image, so your files are available inside the new environment.
The RUN instruction (installing dependencies) is executed when the image is built, while the CMD instruction runs when a container is started from it.
A container is a running instance of an image
When we run our image, Docker creates an instance of the environment from the template we gave it (our Dockerfile plus the layers pulled in from the node image).
This instance is called a container. A benefit of containers is that you can open a terminal inside them and run commands, which lets you explore the file system and understand where your bugs are coming from.
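For example, you can list your running containers and then open a shell inside one (here my-container is just a placeholder for whatever name or ID docker ps shows you, and this assumes the image ships a shell like sh, which the node image does):
docker ps
docker exec -it my-container sh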
Compose containers to create systems
If you want to build a more complex app, with a database, a backend and a frontend, you will need to run all your images at the same time and let them communicate with each other. To do this, you compose their containers. You declare how they should interact in a docker-compose.yaml file (see the official docs to learn about its syntax).
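As a minimal sketch of what such a file can look like (the service names, the postgres image, the password and the port below are assumptions for illustration, not something your project requires):
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example
  backend:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
Running docker compose up with this file starts both containers on a shared network, so the backend can reach the database under the hostname db.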
There are only a few commands you will need regularly
Being a Docker neophyte, I have used the Docker Desktop application a lot. From it, you can see your images and running containers, as well as run and compose containers.
You might, however, need these commands:
- To build an image from the Dockerfile in the current directory and tag it my-backend:
docker build -t my-backend .
- To run the image tagged my-backend:
docker run my-backend
- To compose containers (using the docker-compose.yaml file in the current directory):
docker compose up
These should help you get started, and you will learn how to customize them to suit your needs in no time.
Thanks!
Still reading? You must be pretty good at paying attention!
I would love to hear your feedback on my other projects, so feel free to look at my GitHub ;)
Thanks for reading and happy coding!