Building Scalable Go Applications With Docker, AWS, and GitHub Actions

Part 2: Step-by-Step Guide to Dockerizing Your Go Application

Introduction

  • Hey, Gophers! I hope you're doing well. I'm back with Part 2 of the series. In the last blog, we learned about Docker and how it makes development easier with Containers.

  • We also created two containers: MongoDB and Redis.

  • Now that we have both running in Docker containers, it's time to build a custom image for a Go application. In this part, we'll learn about the Dockerfile, docker-compose file, and how to dockerize the application. By the end of this tutorial, you'll have a fully functional dockerized application that you can pull onto your machine and use without installing any dependencies.

  • Before that, I'll give you a quick overview of the project. I won't go into too much detail to keep the blog short. I'll just explain what the project is about, and then we'll get started with the main part. Let's do that!


Project Overview

  • This is a simple project, but using all this tech together will be really fun. The project lets users make posts anonymously, kind of like a tiny version of Reddit. It's just for learning, so it's okay that it's not very big. We'll only ask for a username and password, nothing else. The user signs up and logs in with these two things.

  • Users can create posts, and they can also like and comment on other people's posts.

  • Here's the mind map of what we'll use in our Go app and which APIs we'll implement.

  • By the way, the project's name is Anonymous-GO.

Note: This blog requires basic knowledge of Go and how to work with APIs. I won't explain everything from the beginning. If you want to start from the basics, I already have a series on that. You might want to check it out first:

https://dhruvnakum.xyz/from-zero-to-go-hero-learn-go-programming-with-me-part-1

  • Here is the final project if you want to go through it.

Setting Up Go Project

Project Structure

  • We will structure our project as follows (a sketch of the layout follows this list):

  • In config/, we have functions for setting up Redis.

  • controllers/ contains all the main operations like auth, post, like, and comment. This is where we define all the route handlers.

  • In database/, we set up MongoDB and have the User and Post collections.

  • In middleware/, we define handlers for accessing private routes.

  • In models/, we define different models like User, Post, Comment, and Like.

  • In utils/, we have helper functions for JWT authentication, and for hashing and verifying passwords.

  • We also have the .env file for storing all the project secrets.

  • We will discuss docker-compose.yml, Dockerfile, and Dockerfile.dev in the next section.
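  • Putting it all together, the layout looks roughly like this. This is just a sketch: the folder descriptions come from the list above, while the individual file names inside each folder are illustrative.

anonymous-go/
├── config/           # Redis setup
├── controllers/      # route handlers: auth, post, like, comment
├── database/         # MongoDB setup, User and Post collections
├── middleware/       # handlers guarding private routes
├── models/           # User, Post, Comment, Like
├── utils/            # JWT helpers, password hashing/verification
├── .env              # project secrets
├── docker-compose.yml
├── Dockerfile
├── Dockerfile.dev
├── go.mod
├── go.sum
└── main.go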


Running MongoDB Containers

  • If you read the previous blog, you might already have a MongoDB container ready inside Docker. Let’s run that first.
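  • Here is a quick sketch of the commands (the container name mongodb is illustrative; use whatever you named yours in Part 1):

# If the container from Part 1 already exists, just start it
docker start mongodb

# Otherwise, create one (these credentials match the ones we'll use in docker-compose later)
docker run -d --name mongodb -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=admin \
  mongo:8.0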


  • A quick note before we start: we will use a MongoDB container for database storage and Redis Cloud for caching.

  • To run the project, we first need to start the MongoDB container to connect the database with the application.
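  • With MongoDB up, we can run the app locally (assuming the entry point main.go sits at the module root, as our build command later implies):

go run .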

  • When you run the project, you will see that both MongoDB and Redis are connected.

  • Let’s now dockerize this application.


Dockerize GO Application

  • Before we start, let’s recap things we learned about Docker in our previous blog.

What is a Docker Image?

  • As we saw in our previous blog, a Docker image is a package/bundle that is used as a template to create containers.

  • It contains everything needed to run an application, starting with the application code. In our case, that’s our Go code.

  • It also contains dependencies that we use in our application, like any libraries, frameworks, etc.

  • It also contains all the tools and settings the application needs.

  • But here’s the thing: a Docker image cannot run on its own, as it’s just a template. To run the image, we need to create a container, just like a class and an object. A class is nothing without an object, right!?

What is a Docker Container?

  • Since a Docker image cannot run on its own, it requires containers. Containers are basically running instances of a Docker image.

  • You can think of a Docker image as a class and Docker containers as objects. A class is nothing but a template, and we make use of it by creating its objects.

  • Every container runs in its own isolated environment, and a container runs exactly the same on every machine, whether it’s Windows, Mac, or Linux.

  • Just as you can create multiple objects from a class, you can create multiple containers from the same image.
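  • For example, once we have an image for our app, spinning up two independent containers from it is just two commands (the image and container names here are illustrative):

docker run -d --name anonymous-go-1 anonymous-go
docker run -d --name anonymous-go-2 anonymous-go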


  • Now you might be asking: okay, I understand images and containers now, but how do we create an image of our application?

  • Well, that’s where the Dockerfile comes into the picture.

Dockerfile

  • A Dockerfile is nothing but a set of instructions that tells Docker how to build a Docker image.

  • Let’s see how we can write a Dockerfile:

# Stage 1: build the Go binary
FROM golang:1.23-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o main .

# Stage 2: minimal runtime image containing only the compiled binary
FROM alpine:latest
COPY --from=builder /app/main /main
COPY .env .
EXPOSE 3000
ENTRYPOINT ["/main"]
  • Let’s understand what’s happening here.

  • First of all, we want Docker to compile our application, right? How do we do that? Well, Docker Hub already provides an official Go image for us. It has all the tools we need, like go build, go mod, etc.

  • So, we are using that image as our base. You might ask why we are using this particular image and not another one. The simple answer is that the Alpine variant is lightweight.

FROM golang:1.23-alpine AS builder
  • The line above does that task. The AS builder at the end simply labels this stage as "builder," so we can refer to it later.

  • You might ask why we need a builder stage.

    • Well, we don’t want the final image to carry every Go tool; it’s unnecessary. So, instead of storing those extra files in the final image, we build the app in this stage and then copy only the compiled binary into a new lightweight image.

    • This keeps the final Docker image small, fast, and secure.

WORKDIR /app
  • Here, we are setting the working directory inside the image. When we say /app, Docker creates the app folder inside the container (if it doesn’t already exist) and makes it the current directory for every instruction that follows.

  • You might ask why we create an app directory at all; can’t we just put everything at the root? You can, but it’s not recommended. Why? Because without it, every subsequent instruction would need to cd (or spell out full paths) to reach the right folder, and our files would clutter the image’s root.

COPY go.mod go.sum ./
  • The COPY instruction copies files from your local machine (the build context) into the image.

  • It will copy them into /app, since we previously set the working directory to /app.

  • You might ask why we copy these two files first, before the rest of the code. They declare all the dependencies required by our application, and copying them separately lets Docker cache the next step: as long as go.mod and go.sum don’t change, rebuilds can skip re-downloading dependencies.

RUN go mod download
  • This command downloads and caches all the dependencies specified in go.mod and go.sum.
COPY . .
  • Now that we are done with the basic setup, we need to copy all the files and folders of our application into the image.

  • The first . (dot) refers to the current directory on your local system (the build context), and the second . refers to the current working directory inside the image (/app).

RUN CGO_ENABLED=0 GOOS=linux go build -o main .
  • Now that we have copied our files, folders, and dependencies, we are ready to compile the application inside the container. This is what the above RUN instruction does.

  • go build -o main . tells Go to compile the application and generate an executable file named main.

  • The . at the end means compiling the package in the current directory (/app), whose entry point is main.go in this case.

  • CGO allows Go programs to use C libraries, but it also makes the binary depend on system libraries. We don’t want that, so we disable it by setting it to 0, which gives us a fully static binary.

  • GOOS=linux tells Go to produce a Linux binary. Since our final image is Alpine Linux, the binary must target Linux no matter which OS we build on.

FROM alpine:latest
  • Now that we have compiled the Go application in the builder stage, we only need to run the final binary that was created inside the container.

  • To do that, we start from the plain alpine image to create a lightweight final image. This leaves behind all the unnecessary libraries and tools.

  • In the previous stage (golang:1.23-alpine AS builder), we compiled the Go application.

  • Now, we only need to run the final binary, so we switch to a clean, minimal image.

So what exactly happens here:

  • Docker starts with a fresh Alpine Linux image.

  • No Go compiler or extra tools are installed, just a clean OS.

  • The final container only contains what's needed to run the application.

Why Not Just Use golang:1.23-alpine for Everything?

  • That’s a good question. It’s because the Go compiler and tools are only needed for building/compiling the app, not for running it. Keeping them in the final image wastes space and increases security risks. Using Alpine separately ensures a much smaller, optimized final container.
COPY --from=builder /app/main /main
  • This line copies the compiled Go application from the builder stage into the final Alpine-based image.

  • /app/main is where the Go binary was built in the builder stage,

  • and /main is the destination path inside the Alpine image.

  • Right after this, COPY .env . also copies our .env file into the final image so the app can read its secrets at runtime. (Baking secrets into an image is fine for a learning project; for production you’d normally inject them as environment variables instead.)

EXPOSE 3000
  • With this instruction, we are telling Docker that this container expects traffic on port 3000. (EXPOSE is mostly documentation; the port actually gets published via the ports mapping in docker-compose or the -p flag on docker run.)
ENTRYPOINT ["/main"]
  • ENTRYPOINT ["/main"] tells Docker to execute the compiled Go binary (/main) inside the container.

  • So now, when you run the container, it automatically runs the /main binary.
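  • With the Dockerfile complete, we can already build and run the image by hand (the tag anonymous-go is illustrative):

docker build -t anonymous-go .
docker run -p 3000:3000 anonymous-go

  • As we’ll see shortly, the app will start but won’t be able to reach MongoDB yet; that’s exactly the problem docker-compose solves below.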


How does this overall process work?

First Stage (builder)

  • Uses golang:1.23-alpine

  • Builds the Go application (main) inside /app

  • This stage has all the necessary tools (Go compiler, dependencies)

Final Stage (alpine:latest)

  • Starts with a clean Alpine Linux image

  • Copies only the main binary from /app/main into /main

  • The container is now ready to run the application


Does that mean we are using two Containers?

  • No, we are not creating two containers. Instead, we are creating two separate build environments (stages) during the image build process.

  • Docker processes both stages during the image build phase.

  • It uses the first stage (builder) to compile the application.

  • It then discards the first stage and only keeps the final image from the second stage.

  • The container is created from this final, lightweight image.
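  • By the way, you can ask Docker to build only up to the builder stage, which is handy for debugging the build step (the tag here is illustrative):

docker build --target builder -t anonymous-go:builder .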


docker-compose

  • Now our Dockerfile is ready. But right now, only our app is containerized. Remember, we are also using a MongoDB image in our app. If you try to run the app right now, it will throw an error because the app cannot connect to MongoDB.

Why Do We Need docker-compose.yml?

  • Our app and MongoDB are running in separate containers.

  • The app depends on MongoDB to function, but without a shared network, they cannot communicate.

  • Running both services manually using docker run commands is inefficient.

  • This is where Docker Compose comes into play.

  • Docker Compose allows us to define and run multiple containers as a single service, ensuring that all required services (our app and MongoDB) start together, communicate seamlessly, and work as a unit.

  • With a docker-compose.yml file, we can:

    • Define multiple services (e.g., app and mongo) in a single configuration.

    • Ensure MongoDB starts before the app using depends_on.

    • Use a shared network so the app can communicate with MongoDB by referring to it as mongo instead of localhost (a Go sketch after this list shows what that looks like).

    • Set up environment variables for both services.

    • Use volumes to persist MongoDB data, preventing data loss when containers restart.
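  • To make that shared-network point concrete, here is a minimal sketch of how the app might read MONGODB_URI. I’m assuming the official MongoDB Go driver’s v1 API here, and the function names are illustrative, not necessarily the project’s exact code; the key detail is that the URI’s host is the service name mongo, not localhost.

package main

import (
	"context"
	"log"
	"os"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

// connectDB dials MongoDB using the URI injected by docker-compose,
// e.g. mongodb://admin:admin@mongo:27017/ (note the host "mongo").
func connectDB() *mongo.Client {
	client, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI(os.Getenv("MONGODB_URI")))
	if err != nil {
		log.Fatal(err)
	}
	return client
}

func main() {
	client := connectDB()
	defer client.Disconnect(context.Background())

	// Ping verifies the connection actually works.
	if err := client.Ping(context.Background(), readpref.Primary()); err != nil {
		log.Fatal(err)
	}
	log.Println("MongoDB connected")
}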

  • Now let’s write the docker-compose file for our app:

services:
  app:
    build:
      context: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    environment:
      - MONGODB_URI=mongodb://admin:admin@mongo:27017/
      - JWT_SECRET=secret
      - REDIS_SECRET=secret
      - REDIS_ADDR=redis-11068.c261.us-east-1-4.ec2.redns.redis-cloud.com:11068
      - REDIS_USERNAME=default
      - REDIS_PASSWORD=<your-redis-cloud-password>

  mongo:
    image: mongo:8.0
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/data/db

volumes:
  mongodata:
    driver: local
  • So, as we know, we need two services: app and mongo.

  • And that’s why we are defining those inside services

  1. app Service:

    • This service is used to build the app using the Dockerfile.

    • So, we set its build context to the current directory by specifying . (dot).

    • Then, we want to map the container’s port to the local system’s port (called port mapping). We do that inside the ports tag.

    • Also, we want MongoDB to run before our app starts. To achieve that, we use the depends_on tag and specify the mongo service.

    • Also, if you look at the project, we use many environment variables, and we want this app service to have them. So, we specify them all inside the environment tag.

  2. mongo Service:

    • We first need to pull the official Mongo image. We do that by specifying it inside the image tag.

    • As we did in our app, we are specifying env variables inside the environment tag.

    • Then, we publish port 27017 so the database is also reachable from the host machine.

    • Also, we want to keep the data in a persistent volume. So, we use the volumes tag to mount the named volume mongodata at /data/db, the directory where MongoDB stores its data.


Running docker-compose

  • Now that we are done with the docker-compose file, we can start everything using the following command.
docker compose up -d

  • It will build the app image (if needed) and start both containers in detached mode; that’s what the -d flag does.
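  • If you want to verify, you can list the running services and tail the app’s logs:

docker compose ps
docker compose logs -f app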

  • As you can see, our entire application is now dockerized.

  • Now, let’s test our API to see if it’s working or not.

  • Let’s create a user with the signup API.

  • Now, let’s test the login functionality with Redis caching.
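  • For reference, here’s what those two calls look like from the command line. The route paths and JSON fields are illustrative; match them to your controllers.

# Sign up a new user
curl -X POST http://localhost:3000/signup \
  -H "Content-Type: application/json" \
  -d '{"username":"gopher","password":"secret123"}'

# Log in and save the session cookie
curl -i -X POST http://localhost:3000/login \
  -H "Content-Type: application/json" \
  -c cookies.txt \
  -d '{"username":"gopher","password":"secret123"}'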

  • As you can see, a Redis session is added to the cookies for accessing private routes. Let’s also test the private routes to make sure they’re working.
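  • Something like this, reusing the saved cookie (the /posts route is illustrative):

curl -i http://localhost:3000/posts -b cookies.txt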

  • And yes, it is working. Now, if you try to access private routes after logging out, it will not work. Let’s check that.

    • We logged out.

  • As you can see, it’s throwing an error.
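  • Here’s the same check from the command line (route names are illustrative):

# Log out, then retry the private route with the stale cookie
curl -X POST http://localhost:3000/logout -b cookies.txt
curl -i http://localhost:3000/posts -b cookies.txt   # should now be rejected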


What’s Next?

  • Currently, our APIs are working locally. But we want developers to use these APIs, right? To do that, we need to deploy them.

  • Now, there are many deployment options available in the market.

  • The most popular one is AWS.

  • In the next part, we are going to push our custom image to Docker Hub, set up GitHub Actions, and upload and run this Docker container inside an AWS EC2 instance.

  • It’s gonna be a really good learning experience for anyone who isn’t familiar with AWS deployment, because I learned a lot doing it myself.

  • Let’s meet in the next one, until then…
