
Diving into Docker (Part 4): Containerizing an app

A Step-by-Step Guide to Packaging and Running Applications with Docker

Published
9 min read

Docker is all about taking an application and running it inside a container. As we saw in the previous blog, a container is a lightweight package that has your app plus everything it needs to run. This makes it much easier to build, ship, and run the same app on different computers without the "it works on my machine" problem. That's the whole point, right?

When you containerize an app, you are basically preparing it so Docker can turn it into an image, and then run that image as a container.

What does “containerizing” actually mean?

Think of containerizing like packing your app into a travel bag:

  • Your app code is inside the bag.

  • Your app dependencies (Node.js packages, for example) are inside the bag.

  • The instructions on how to start your app are also inside the bag.

Once the bag is packed, anyone can run it the same way, as long as they have Docker.

The usual flow of containerizing an app

Most Docker container workflows follow the same steps:

First, you start with your application code and whatever it depends on (libraries, packages, config files). Next, you write a Dockerfile. This Dockerfile is a set of instructions that tells Docker how to build your app into an image.

After that, you run a Docker build command. Docker reads your Dockerfile line by line and creates an image. Once you have an image, you can store it in an image registry (optional, but very common). Finally, you run a container using that image.

So the flow is:

You have code → you create a Dockerfile → you build an image → you push it to a registry (this is optional) → you run a container.
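As a quick sketch, that flow looks something like this in commands (the image and repository names here are placeholders, not from the original example):

```
# 1. Start from your project folder (code + Dockerfile)
cd my-app

# 2. Build an image from the Dockerfile in the current folder
docker image build -t my-app:latest .

# 3. (Optional) tag and push the image to a registry
docker image tag my-app:latest <your-docker-id>/my-app:latest
docker image push <your-docker-id>/my-app:latest

# 4. Run a container from the image
docker container run -d --name my-app -p 80:8080 my-app:latest
```

Each of these steps is covered in detail below.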

Example: containerizing a simple single-container app

Let’s say you want to containerize a small Node.js web app.

You clone the code from GitHub and go into the project folder:
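Something along these lines (the repository URL below is a placeholder, since the original example's repo isn't shown here):

```
# Clone the example project (placeholder URL) and enter the folder
git clone https://github.com/<your-username>/node-web-app.git
cd node-web-app
```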

Inside that folder, you’ll find the Dockerfile.

What is a Dockerfile?

A Dockerfile is the starting point for creating a container image. It describes:

  • What base system should your app start from

  • What software needs to be installed

  • What files to copy into the image

  • Which command should run when the container starts

One important detail: the folder that contains your app code and files you want to copy into the image is called the build context. Most people keep the Dockerfile at the root of this folder, so it is easy to build.

Also, the name matters: by default, Docker looks for a file named exactly Dockerfile, with a capital D and written as one word, no extension.
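If you ever do need a differently named file, the build command accepts a -f flag to point at it (the file name here is hypothetical):

```
# Build using a Dockerfile with a non-default name
docker image build -f Dockerfile.dev -t web:dev .
```

Sticking to the default name keeps builds simple, which is why most projects do.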

Walking through the example Dockerfile

Here is the example Dockerfile, and what each line is doing:

The first line is:

  • FROM alpine

This means: start with Alpine Linux as the base image. Alpine is a very small Linux image, so it’s popular for small containers. Every Dockerfile must start with a FROM instruction because it sets the base layer.

Then we have:

  • LABEL maintainer="dhruv.nakum25@gmail.com"

A label is metadata. It doesn’t install anything or change files in a big way. It’s just extra information added to the image. Adding a maintainer label is a nice practice because people know who to contact about the image.

Next is:

  • RUN apk add --update nodejs nodejs-npm

This installs Node.js and npm in the image using Alpine’s package manager apk. This step creates a new image layer because it adds software to the image.

Then:

  • COPY . /src

This copies everything in your build context (your current folder) into the image at /src. This also creates a new layer, because you are adding files to the image.

Next:

  • WORKDIR /src

This sets the working directory inside the image. From this point onward, commands run as if /src is the current folder. This is usually metadata, not a layer that adds content.

After that:

  • RUN npm install

This runs npm install inside /src (because of the WORKDIR). It installs dependencies listed in package.json. This creates another new layer because it adds installed packages into the image.

Then:

  • EXPOSE 8080

This documents that the app listens on port 8080 inside the container. This is important for humans and tools, but it’s metadata, not a layer.

Finally:

  • ENTRYPOINT ["node", "./app.js"]

This tells Docker what command should run when the container starts. In this case, it runs the Node app. This is also metadata.
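Putting all of those instructions together, the complete Dockerfile reads:

```
FROM alpine

LABEL maintainer="dhruv.nakum25@gmail.com"

# Install Node.js and npm using Alpine's package manager (new layer)
RUN apk add --update nodejs nodejs-npm

# Copy the build context into /src (new layer)
COPY . /src

WORKDIR /src

# Install dependencies listed in package.json (new layer)
RUN npm install

EXPOSE 8080

ENTRYPOINT ["node", "./app.js"]
```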

Building the Docker image

Once you have the Dockerfile, you build the image using the build command. In this example, the image is called web:latest.

The command is:

  • docker image build -t web:latest .

The dot (.) at the end is very important. It tells Docker to use the current folder as the build context.

After the build finishes, you can list images to confirm it exists:

  • docker image ls

You can also inspect the image to see what settings Docker stored from your Dockerfile:

  • docker image inspect web:latest

This is a good way to confirm things like the entry point and the image layers.

Pushing images to a registry (Docker Hub)

Pushing is optional, but it’s very useful. If your image is only on your laptop, no one else can pull it easily. If you push it to a registry, you can download and run it from anywhere.

Docker Hub is the most common public registry and is the default place Docker pushes to.

First you login:

  • docker login

Before pushing, you need to tag the image properly. Docker needs three parts:

  • Registry (docker.io for Docker Hub; it’s assumed even if you don’t write it)

  • Repository (often includes your username)

  • Tag (like latest, v1, etc.)

If you try to push web:latest directly, Docker will attempt to push it to a repository named web, which you probably don’t own.

So instead, you tag it with your Docker Hub username (Docker ID). Example from the page:

  • docker image tag web:latest dhruvnakum/web:latest

This does not remove the old tag. It just adds another tag pointing to the same image.
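You can confirm this by listing images afterwards: both tags show the same image ID (the ID below is illustrative):

```
docker image ls
# REPOSITORY        TAG      IMAGE ID       ...
# web               latest   4f3c12d9ab12   ...
# dhruvnakum/web    latest   4f3c12d9ab12   ...
```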

Now you can push:

  • docker image push dhruvnakum/web:latest

Once it is pushed, you can pull it later from anywhere.

Running the container

After the image is built, you run it as a container.

Example command:

  • docker container run -d --name mycontainer -p 80:8080 web:latest

Let’s break that down simply:

  • -d means run in the background (detached mode)

  • --name mycontainer gives the container a friendly name

  • -p 80:8080 maps port 80 on your computer to port 8080 inside the container

  • web:latest is the image you want to run

So if your app listens on 8080 inside the container, you can visit http://localhost in your browser because port 80 on your machine is connected to port 8080 in the container.
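You can check this from the host (assuming curl is available):

```
# Port 80 on the host forwards to port 8080 in the container
curl http://localhost
```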

A closer look: layers vs metadata

Docker reads the Dockerfile from top to bottom, one line at a time. Some lines create real filesystem layers, and some lines only create metadata.

In general:

  • Instructions like FROM, RUN, and COPY create layers because they add or change files.

  • Instructions like WORKDIR, EXPOSE, ENV, and ENTRYPOINT usually add metadata instead of creating layers.

If you want to see the image build steps and layers, you can use:

  • docker image history web:latest

And to verify the final result including entrypoint and layers, use:

  • docker image inspect web:latest

Why image size matters

When it comes to Docker images, big is bad.

Smaller images are usually:

  • faster to build

  • faster to push and pull

  • easier to store

  • less risky (fewer packages means a smaller attack surface)

A common problem is that we install build tools and temporary dependencies, but then we forget to remove them. That makes production images bigger than they need to be.

Multi-stage builds solve this in a clean way. A multi-stage Dockerfile has multiple FROM lines. Each FROM starts a new stage. You can build in one stage, and then copy only the final needed output into the last stage (which stays small and clean).
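As a hedged sketch, a multi-stage Dockerfile for a Node.js app might look like this (the stage name, base images, and paths here are assumptions for illustration, not from the original example):

```
# Stage 1: build with the full toolchain
FROM node:20 AS build
WORKDIR /src
COPY . .
RUN npm install && npm run build

# Stage 2: keep only what's needed to run
FROM node:20-alpine
WORKDIR /app
COPY --from=build /src/dist ./dist
COPY --from=build /src/node_modules ./node_modules
ENTRYPOINT ["node", "./dist/app.js"]
```

Only the final stage ends up in the image you ship; the build tools from the first stage are left behind.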

Build cache best practices

Docker tries to speed up builds using cache.

For each Dockerfile instruction, Docker checks if it already has a cached layer for that exact step. If yes, it reuses it, and the build is faster. If no, Docker builds a new layer.

But here’s the key: once the cache misses at some step, Docker rebuilds every instruction after that step as well.

That’s why ordering your Dockerfile matters a lot.

Also, Docker checks files you copy. Even if the Dockerfile line didn’t change, if the content of files in the folder changed, the checksum will change and Docker will rebuild that layer.
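A common pattern that follows from this (the file names assume a Node.js project): copy only the dependency manifest first, install dependencies, and copy the rest of the code afterwards. Then editing your app code no longer invalidates the cached npm install layer.

```
FROM alpine
RUN apk add --update nodejs nodejs-npm
WORKDIR /src

# The dependency manifest changes rarely, so this layer caches well
COPY package.json package-lock.json ./
RUN npm install

# App code changes often; only the layers below get rebuilt
COPY . .
ENTRYPOINT ["node", "./app.js"]
```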

If you ever want to ignore cache completely, you can build with:

  • docker image build --no-cache=true -t web:latest .

Squashing images

Sometimes images end up with many layers. Squashing can combine layers into fewer layers.

This can be helpful when you want fewer layers, but it has a downside: squashed images don’t share layers as efficiently. That can increase storage usage and make pushes/pulls bigger in some cases.

If you want to squash during build, you can add the --squash flag to the build command (note that this requires Docker’s experimental features to be enabled).

Conclusion

So we learned that containerizing an app is mainly about writing a good Dockerfile and using it to build a clean image. Once you have an image, you can run it the same way on any Docker host, and you can share it by pushing to a registry. That's it.

Next up, we have docker-compose. It's the go-to tool for running multi-container apps, and we use it in production, so it's gonna be really fun to learn. But for now, create a simple app, containerize it with a Dockerfile, and try it on your own to check your understanding so far. Until then...