
Diving Into Docker (Part 1): The Big Picture

From Fundamentals to Orchestration


Introduction

So, I've been reading this book called Docker Deep Dive: Zero to Docker by Nigel Poulton. Why? Because I've been working on mobile, web, and cloud applications for quite a long time now, and I really wanted to know how Docker works internally and how it does its magic.

And then I came across this book and thought to give it a try because I love reading, and to be honest, I hardly read technical books to learn about something personally (well, except college and study books, of course lol). So while reading it, I was also making notes for myself.

I will be sharing everything I learned from this book. I will be using the book to explain things and rewrite them in simpler terms, because some of the later topics took me a while to understand. I will also be using Nano Banana to create visuals to help you understand better, because I don't want to spend too much time on graphics. And obviously, AI does a better job than me at designing these things ;)

So, let's get started without further ado.


The Big Picture

A few years ago, running apps looked very different. Most companies ran one application per server. If the business needed a new app, the IT team often had to buy a new server. This cost a lot of money and wasted resources, because many servers had unused CPU and RAM.

Then virtual machines (VMs) became popular.

VMs Solved One Problem, But Created Another

VMs made it possible to run multiple applications on one server. That was a big improvement. But there were downsides too:

  • Each VM needs its own full operating system.

  • That OS uses CPU and RAM even when the app is small.

  • Sometimes each OS also needs its own license.

  • VMs can be slow to boot.

  • Moving VMs across machines is not always smooth.

So even though VMs helped, they still had a lot of overhead.

Containers: A Lighter Way To Run Apps

To solve the above problems, containers came into the picture. What is a container? For now, just think of a container as similar to a VM in one way: it runs an application in an isolated environment.

But the key difference is that containers share the host machine’s kernel. They do not need a full OS per container.

And because of this, containers are faster to start, more lightweight, and easier to move around, which was exactly the problem with VMs, remember?

Google has used container tech for a long time. But for many companies, containers were still too complex to use directly. To address this complexity and make things easier for everyone, Docker Inc. began developing Docker.


What Does “Docker” Mean?

When people say “Docker”, they might mean two things:

  1. Docker, Inc. - the company

  2. Docker, the technology - the tool that creates and runs containers

Docker, the technology runs on Linux and Windows, and helps you build, run, and manage containers.


Docker Architecture

There are three things that we need to be aware of when we talk about Docker.

  1. Docker Runtime

  2. Docker Daemon

  3. Docker Orchestrator

Let's see each one of them in detail now.

Docker Runtime

Docker uses a “tiered runtime” setup: a low-level runtime and a high-level runtime.

When I say Docker uses a tiered runtime architecture, I mean that Docker splits responsibilities between different components instead of having one big program do everything.

Low-level runtime: runc

  • This talks to the operating system and starts and stops containers. Each container typically has its own runc instance managing it.

High-level runtime: containerd

  • This manages the whole container lifecycle: it pulls images, sets up networking, and calls runc when needed.

You do not need to memorize this on day one, but it helps to know Docker has layers.
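If you're curious, you can actually peek at these layers on your own machine. A quick sketch (this assumes Docker is installed and the daemon is running):

```shell
# Ask the daemon which low-level runtime it uses by default.
# On a standard install this prints "runc".
docker info --format '{{.DefaultRuntime}}'

# List all the runtimes the daemon knows about, as JSON.
docker info --format '{{json .Runtimes}}'
```

On most Linux hosts you can also spot dockerd and containerd running as separate background processes, which is the "layers, not one big program" idea in action.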

Docker Daemon

Docker Daemon sits above containerd and performs higher-level tasks, such as exposing the Docker remote API, managing images, volumes, and networks, and more.

The Docker daemon’s main job is to provide an easy-to-use standard interface that abstracts the lower levels.
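To make the "exposes an API" part concrete: the docker CLI is just a client that sends HTTP requests to the daemon. A small sketch (assumes a Linux or macOS host where the daemon listens on the default Unix socket path):

```shell
# Talk to the Docker daemon directly over its Unix socket,
# bypassing the docker CLI entirely. This hits the same remote API
# the CLI uses, and returns version info as raw JSON.
curl --silent --unix-socket /var/run/docker.sock http://localhost/version
```

This is the same information `docker version` shows, which is a nice way to convince yourself that the CLI is only a thin layer over the daemon's API.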

Docker Orchestrator

Before we understand this layer, we need to understand what orchestration means.

You see, running one container is easy. But in real apps, you often need many containers, like web apps, databases, caches, and background workers.

Orchestration means managing all of those automatically. If you have ever seen an orchestra perform, where the conductor directs all the musicians, the same concept applies to Docker too.

  • The orchestrator starts the containers, restarts them if they crash, scales up when traffic increases, and connects them properly.

Docker has its own orchestrator called Docker Swarm, but today most teams use Kubernetes.

Kubernetes in one line (because we are not going into the details right now)

Kubernetes is the most popular platform for deploying and managing containerized apps.


OCI: Why Standards Matter

There is also something called the Open Container Initiative (OCI). In simple terms, it's a governance council that sets standards for the container image and runtime formats.

An analogy often used to describe these two standards is rail tracks: once everyone agreed on a standard track gauge, any train could run on any track.

This standardization is useful, and it's fair to say that the two OCI specifications have had a major impact on the architecture and design of the core Docker product.


Installing Docker

Docker is available everywhere to download. There are Windows, Mac, and Linux applications. You can install it in the cloud, on premises, and on your laptop. And there are manual, scripted, and wizard-based installs, too. There literally are loads of ways and places to install Docker.

Docker Desktop

This is the main application we use for everything. Go to the Docker Desktop website and download it for your operating system. Once installed, you will see an interface something like this:


Docker’s Main Parts

So when you install Docker, you mainly work with two pieces: Docker Client and Docker Daemon

1) Docker Client

This is what you interact with, usually from the terminal.

Open your terminal and run the command below. You will see the current Docker version installed in your system.

Example:

docker version

2) Docker Daemon

This is the background service that does the real work:

  • pulls images

  • runs containers

  • manages networks and volumes

  • exposes the Docker API

The client talks to the daemon. This is the high-level explanation. Now, let's get into the details.


The Two Core Docker Objects: Images and Containers

Docker Image

A Docker image is like a package that contains:

  • a minimal filesystem (Linux OS files and libraries)

  • your application

  • the dependencies needed to run the app

The important thing here is that it's an image, not a running instance. It is more like a blueprint. You can think of a Docker image as a Class in the world of programming. And when you create an Object of that Class, it becomes a real thing and gets memory allocated in the system. In this case, that object is a Container.
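The class/object analogy is easy to see in practice: one image, many containers. A quick sketch (assumes Docker is installed and can reach the internet; the names box1 and box2 are just ones I made up for this example):

```shell
# Pull the "class" (the image) once...
docker image pull ubuntu:latest

# ...then create multiple "objects" (containers) from it.
docker container run --name box1 ubuntu:latest echo "hello from box1"
docker container run --name box2 ubuntu:latest echo "hello from box2"

# Both containers were created from the same image:
docker container ls -a --filter ancestor=ubuntu:latest

# Clean up the example containers.
docker container rm box1 box2
```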

Getting an image onto your Docker host is called pulling. So whenever you want to get the image, you run the command below,

docker image pull ubuntu:latest

And where does this image come from, you might ask? We will talk about that in the upcoming article, don't worry. For now, just know that it comes from somewhere on the internet. And so, after pulling the image, you will see something like this.

To check whether you successfully pulled that image, there are two ways. First, run the command below in your terminal:

docker image ls

Each image gets its own unique ID. When referencing images, you can use either the ID or the name.
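For example, both of the commands below refer to the same image, once by name and once by ID. (Rather than hard-coding an ID, this sketch looks it up first; on your machine you could also copy it straight from the docker image ls output.)

```shell
# Look up the full ID of an image by its name.
IMAGE_ID="$(docker image inspect ubuntu:latest --format '{{.Id}}')"
echo "$IMAGE_ID"

# The same image can now be referenced by that ID instead of the name.
docker image inspect "$IMAGE_ID" --format '{{.Os}}'
```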

And the second way is to check the Docker Desktop

Docker Container

A container is a running instance created from an image.

So:

  • Image = blueprint

  • Container = running app

Now that we have an image pulled locally, we can use the docker container run command to launch a container from it.

docker container run -it ubuntu:latest /bin/bash

Let's understand the command here first:

docker container run tells the Docker daemon to start a new container.

The -it flag tells Docker to make the container interactive and to attach the current shell to the container’s terminal.

ubuntu:latest tells Docker that we want the container to be based on the ubuntu:latest image.

/bin/bash tells Docker which process we want to run inside of the container. On Linux, we are running a Bash shell.

As you can see, after running the command, we got inside the Ubuntu container. And it's a real Ubuntu environment, so you can run the commands you would normally run on Ubuntu. But not all of them right now. Why? We will come to it later, don't worry.

You can exit the container without terminating it by pressing Ctrl-P followed by Ctrl-Q.

If you want to see all the running containers, you can run

docker container ls

(Add the -a flag to include stopped containers as well.)

Now, if you have exited the container, you can get back into it using the exec command

docker container exec -it 0acc2d5f2fef bash

We used the -it options to attach our shell to the container’s shell.

We can stop and remove the container using the commands below:

docker container stop <name/id>
docker container rm <name/id>
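And if you end up with a pile of leftover containers after experimenting, here's a quick cleanup sketch. Careful: it stops everything and removes ALL stopped containers on the machine, so only run it on a playground host.

```shell
# Stop every running container, one by one
# (the -q flag prints just the IDs; the loop is safe even
# when nothing is running).
for id in $(docker container ls -q); do
  docker container stop "$id"
done

# Remove all stopped containers in one go
# (--force skips the confirmation prompt).
docker container prune --force
```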

IMPORTANT NOTE:

  • If you are like this right now...

https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExcjE1MmVtZ3o4bXF1NzJ2aXMzZ3prYmd1dWU3NnY3ZmZ5ZDcwZ3pjbiZlcD12MV9naWZzX3NlYXJjaCZjdD1n/GPg6PuL5RkeyVCD5wk/giphy.gif

  • Then I would like to say that we just went through the fundamental pieces of Docker in this article. If you didn't understand everything yet, don't worry. I totally get you. It's hard to grasp all of this at first. But it was necessary for me to give you the big picture, so that when we discuss things in detail, you will know something about them and not feel scared.

Conclusion

We learned about VMs and saw why they carry so much overhead. We saw how containers can be so much lighter and faster than VMs.

We also saw what Docker is and how it works using its architecture. We learned about OCI and how it helps maintain the standard practice of images and containers.

We also went through some basic commands of Docker, like:

docker image pull <image-name> to pull an image
docker image ls to list all the pulled images
docker container run -it <image-name> <app-name> to start and run a container
docker container ls to list all the containers
docker container exec -it <container-name> bash to attach the terminal to a running container's terminal
docker container stop <name/id> to stop a running container
docker container rm <name/id> to remove/kill a container

This big picture view should help you with the upcoming article, where we will dig deeper into images and containers.

See you in the next article, until then....

https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExbjFtZG8yY3V6azR5ejRoY2p4eXltcWdzcDVrMXVhcjlzdnVrcG5zayZlcD12MV9naWZzX3NlYXJjaCZjdD1n/2nlbKhgnvAK3sR8ffw/giphy.gif