A Practical Journey from Application to Distributed Systems - Part 2
Orders service + Postgres + migrations + Docker/Compose

In Part 1, I explained what QuickBite is and what we are trying to learn. In this part, we build the first real backend service: Orders.
This post follows the same structure as the final project we will build. That means we do it in a clean way from the beginning: proper folders, Postgres migrations, and Docker Compose.
Even if you are new to backend systems, this part will make sense because we will move slowly and explain why each file exists.
WARNING: This is going to be a LONG read, so make sure you have enough time to get through it.
What we are building in this part
We will build the Orders service with these endpoints:
GET /healthz -> basic health check
POST /v1/orders -> create a new order
GET /v1/orders/{id} -> fetch an order by ID
We will store orders in a Postgres database, and we will run everything locally using Docker Compose.
Project structure
Inside the repo, this is the structure we use:
quickbite/
services/
orders/
cmd/
orders/
main.go
internal/
db/
db.go
http/
http.go
store/
orders.go
migrations/
000001_create_orders_table.up.sql
000001_create_orders_table.down.sql
Dockerfile
go.mod
docker-compose.yml
Why this structure matters
This layout is common in real Go services:
cmd/orders/main.go is the entry point. It wires things together and starts the server.
internal/ contains code that should be used only by this service.
internal/http contains routing + request/response handling.
internal/store contains database queries.
internal/db contains DB connection logic.
migrations/ contains versioned SQL changes (your DB schema history).
This separation keeps the code easy to grow. When the project becomes bigger (Kafka, outbox, DLQ), you won’t end up with one giant main.go.
Step 1: Create the Orders module
From repo root:
mkdir -p services/orders
cd services/orders
go mod init quickbite/orders
This creates go.mod. It tells Go the module name and tracks dependencies.
Make sure you have Go installed on your system.
Step 2: Add database migrations
For anyone who doesn't know what a migration is:
A migration is a saved database update.
Instead of putting database setup code directly in your app, you create small files that describe each change, like creating a table or updating data.
The changes run one by one in the correct order, and the system keeps track of which ones have already been applied.
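The "run in order, track what's applied" idea can be sketched in a few lines of Go. This is a toy illustration, not how the real migrate tool works internally; the `migration` type and `applyPending` function are hypothetical names for this sketch.

```go
package main

import "fmt"

// migration pairs a version number with the SQL it applies.
type migration struct {
	version int
	sql     string
}

// applyPending runs, in order, only the migrations newer than the
// recorded current version, and returns the new current version.
func applyPending(current int, migrations []migration) int {
	for _, m := range migrations {
		if m.version <= current {
			continue // already applied, skip it
		}
		fmt.Printf("applying %06d: %s\n", m.version, m.sql)
		current = m.version
	}
	return current
}

func main() {
	migrations := []migration{
		{1, "CREATE TABLE orders (...)"},
		{2, "ALTER TABLE orders ADD COLUMN ..."},
	}
	// First run: nothing applied yet, so both migrations run.
	v := applyPending(0, migrations)
	// Second run: version 2 is already recorded, so nothing runs again.
	v = applyPending(v, migrations)
	fmt.Println("current version:", v)
}
```

Real migration tools persist the current version in the database itself (e.g. in a `schema_migrations` table), which is why re-running them is safe.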
Create the migrations folder:
mkdir -p migrations
Now create:
services/orders/migrations/000001_create_orders_table.up.sql
CREATE TABLE IF NOT EXISTS orders (
id BIGSERIAL PRIMARY KEY, -- gives each order an automatic unique number.
user_id TEXT NOT NULL,
note TEXT NOT NULL DEFAULT '',
status TEXT NOT NULL DEFAULT 'PENDING', -- new orders start as "PENDING" unless changed.
sku TEXT NOT NULL DEFAULT '',
quantity INT NOT NULL DEFAULT 1,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX IF NOT EXISTS idx_orders_user_id ON orders(user_id);
CREATE INDEX IF NOT EXISTS idx_orders_status ON orders(status);
CREATE INDEX IF NOT EXISTS idx_orders_sku ON orders(sku);
This SQL creates an orders table to store order records, but only if it does not already exist.
The three CREATE INDEX lines make searches faster when looking up orders by user_id, status, or sku.
Next, create the down migration, which undoes the change:
services/orders/migrations/000001_create_orders_table.down.sql
DROP TABLE IF EXISTS orders;
Why are migrations better than "create table" in code?
When you deploy software, code changes and database changes must move together safely. Migrations make the database changes visible, repeatable, and versioned in Git. If you change your schema later, you add a new migration file instead of quietly changing code.
Step 3: Write the DB connection helper
Now that we have our migration files in place, let's connect our application to the PostgreSQL database. To do that, we create a helper function called MustConnectWithRetry, which keeps trying to connect and handles errors properly.
Let's create: services/orders/internal/db/db.go
package db
import (
"context"
"log"
"time"
"github.com/jackc/pgx/v5/pgxpool"
)
// Keeps trying to connect to Postgres until it works, or until it gives up and crashes.
func MustConnectWithRetry(dsn string) *pgxpool.Pool {
var lastErr error
// it will try up to 30 times, sleeping 0.5 seconds between attempts.
for i := 0; i < 30; i++ {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
db, err := pgxpool.New(ctx, dsn)
cancel()
if err == nil {
ctx2, cancel2 := context.WithTimeout(context.Background(), 2*time.Second)
// if ping succeeds return the pool and you are connected.
pingErr := db.Ping(ctx2)
cancel2() // cancel here instead of defer: deferred cancels would pile up across loop iterations.
if pingErr == nil {
return db
}
lastErr = pingErr
db.Close()
} else {
lastErr = err
}
time.Sleep(500 * time.Millisecond)
}
log.Fatalf("db connect failed after retries: %v", lastErr)
return nil
}
You might ask, why are we writing the logic of retrying?
When the service starts, PostgreSQL may not be ready yet, especially when running with Docker Compose. Instead of failing immediately, this helper function retries the connection several times before giving up.
It creates a Postgres connection pool, checks that the database is reachable with a ping, and returns the pool only when the connection is confirmed to be working. Each attempt is wrapped in a timeout so the application does not hang forever, and a short delay between retries gives the database time to finish starting.
Step 4: Write the database store
Now that the application can connect to PostgreSQL, the next step is to define how orders are stored and retrieved. This file is responsible for the data-access layer for orders. In other words, it contains the code that talks directly to the database.
Create: services/orders/internal/store/orders.go
package store
import (
"context"
"errors"
"time"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgxpool"
)
// These constants are for the status of the order
const (
StatusPending = "PENDING"
StatusConfirmed = "CONFIRMED"
StatusCancelled = "CANCELLED"
)
// This is what our Order model looks like
type Order struct {
ID int64
UserID string
Note string
Status string
Sku string
Quantity int
CreatedAt time.Time
}
type OrdersStore struct {
db *pgxpool.Pool
}
func NewOrdersStore(db *pgxpool.Pool) *OrdersStore {
return &OrdersStore{db: db}
}
// This method inserts a new row into the orders table and returns the generated order ID.
func (s *OrdersStore) Create(ctx context.Context, userID, note, sku string, quantity int) (int64, error) {
var id int64
err := s.db.QueryRow(ctx,
`INSERT INTO orders (user_id, note, status, sku, quantity)
VALUES ($1, $2, $3, $4, $5)
RETURNING id`,
userID, note, StatusPending, sku, quantity,
).Scan(&id)
return id, err
}
// GetByID fetches an order by its ID when the user requests it.
func (s *OrdersStore) GetByID(ctx context.Context, id int64) (Order, error) {
var o Order
err := s.db.QueryRow(ctx,
`SELECT id, user_id, note, status, sku, quantity, created_at
FROM orders WHERE id = $1`,
id,
).Scan(&o.ID, &o.UserID, &o.Note, &o.Status, &o.Sku, &o.Quantity, &o.CreatedAt)
if err != nil {
if errors.Is(err, pgx.ErrNoRows) {
return Order{}, pgx.ErrNoRows
}
return Order{}, err
}
return o, nil
}
We start by defining an Order struct, which represents how an order looks in the application, along with constants for the possible order statuses. Using constants keeps the status values consistent across the codebase and avoids hardcoded strings scattered in multiple places.
The OrdersStore type wraps the Postgres connection pool and exposes methods for creating and retrieving orders. The Create method inserts a new row into the orders table and returns the generated ID, while GetByID fetches an order and maps the selected columns into an Order struct.
Step 5: Write the HTTP layer routes & validation
Now that the database layer is ready, the next step is to expose that functionality through HTTP endpoints. This server layer connects incoming API requests to the order store.
In simple terms, this is the part of the application that receives requests like “create an order” or “fetch order 1” and turns them into database operations.
Create: services/orders/internal/http/http.go
package http
import (
"context"
"encoding/json"
"errors"
"net/http"
"strconv"
"strings"
"time"
"github.com/go-chi/chi/v5"
"github.com/jackc/pgx/v5"
"quickbite/orders/internal/store"
)
// The Server holds the OrdersStore, which gives the HTTP handlers access to the store's methods.
type Server struct {
store *store.OrdersStore
}
func NewServer(store *store.OrdersStore) *Server {
return &Server{store: store}
}
// Routes builds and returns the HTTP router for this service.
func (s *Server) Routes() http.Handler {
r := chi.NewRouter()
// Health check endpoint. This is used to verify the service is running.
r.Get("/healthz", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte("ok"))
})
// Create a new order.
r.Post("/v1/orders", s.handleCreateOrder)
// Get an existing order by id.
r.Get("/v1/orders/{id}", s.handleGetOrder)
return r
}
// createOrderRequest tells the client what data to send when creating an order.
type createOrderRequest struct {
UserID string `json:"userId"`
Sku string `json:"sku"`
Quantity int `json:"quantity"`
Note string `json:"note"`
}
// Similarly, createOrderResponse tells the client what response to expect in return.
type createOrderResponse struct {
ID int64 `json:"id"`
}
// handleCreateOrder handles POST /v1/orders
func (s *Server) handleCreateOrder(w http.ResponseWriter, r *http.Request) {
var req createOrderRequest
// Decode request JSON into req.
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "invalid json", http.StatusBadRequest)
return
}
// Remove leading/trailing spaces from important string fields.
req.UserID = strings.TrimSpace(req.UserID)
req.Sku = strings.TrimSpace(req.Sku)
// Validate required fields.
if req.UserID == "" {
http.Error(w, "userId is required", http.StatusBadRequest)
return
}
if req.Sku == "" {
http.Error(w, "sku is required", http.StatusBadRequest)
return
}
if req.Quantity <= 0 {
http.Error(w, "quantity must be > 0", http.StatusBadRequest)
return
}
// Create a request-scoped context with timeout
// so DB work does not hang forever.
ctx, cancel := context.WithTimeout(r.Context(), 3*time.Second)
defer cancel()
// Insert a new pending order into the database.
id, err := s.store.Create(ctx, req.UserID, req.Note, req.Sku, req.Quantity)
if err != nil {
http.Error(w, "db error", http.StatusInternalServerError)
return
}
// Return the created order id.
writeJSON(w, http.StatusAccepted, createOrderResponse{ID: id})
}
// getOrderResponse is the JSON shape returned to the client
// when fetching an order.
type getOrderResponse struct {
ID int64 `json:"id"`
UserID string `json:"userId"`
Note string `json:"note"`
Status string `json:"status"`
Sku string `json:"sku"`
Quantity int `json:"quantity"`
CreatedAt string `json:"createdAt"`
}
// handleGetOrder handles GET /v1/orders/{id}
func (s *Server) handleGetOrder(w http.ResponseWriter, r *http.Request) {
// Read the "id" path parameter from the URL.
idStr := chi.URLParam(r, "id")
// Convert id from string to int64.
id, err := strconv.ParseInt(idStr, 10, 64)
if err != nil || id <= 0 {
http.Error(w, "invalid id", http.StatusBadRequest)
return
}
// Create a timeout for the DB lookup.
ctx, cancel := context.WithTimeout(r.Context(), 3*time.Second)
defer cancel()
// Fetch the order from the database.
o, err := s.store.GetByID(ctx, id)
if err != nil {
// If no row exists, return 404.
if errors.Is(err, pgx.ErrNoRows) {
http.Error(w, "not found", http.StatusNotFound)
return
}
// Any other DB issue becomes 500.
http.Error(w, "db error", http.StatusInternalServerError)
return
}
// Convert the store model into the API response shape.
resp := getOrderResponse{
ID: o.ID,
UserID: o.UserID,
Note: o.Note,
Status: o.Status,
Sku: o.Sku,
Quantity: o.Quantity,
CreatedAt: o.CreatedAt.UTC().Format(time.RFC3339), // standard JSON-friendly timestamp
}
// Return the order as JSON.
writeJSON(w, http.StatusOK, resp)
}
// writeJSON is a helper function to send JSON responses.
func writeJSON(w http.ResponseWriter, status int, v any) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
// Encode the value as JSON and write it to the response body.
_ = json.NewEncoder(w).Encode(v)
}
The Routes method creates a Chi router and registers three endpoints: a health check, a route for creating orders, and a route for fetching an order by ID. This is the entry point that turns the application into a web service.
For the create-order endpoint, we define a request struct that matches the JSON sent by the client. The handler decodes the JSON body, trims unnecessary whitespace, validates the required fields, and then uses a request-scoped timeout before calling the store layer. If the insert succeeds, it returns the new order ID as JSON.
For the fetch-order endpoint, the handler reads the id from the URL path, validates it, queries the store, and maps the database model into a response struct. If the order does not exist, it returns 404 Not Found; otherwise, it returns the order as JSON with a properly formatted timestamp.
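The validation flow in handleCreateOrder can be exercised without a database. The sketch below pulls the checks into a standalone `validate` function (a hypothetical helper, not part of the service) and hits it through httptest using plain net/http instead of chi, just to show the decode-validate-reject path.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

type createOrderRequest struct {
	UserID   string `json:"userId"`
	Sku      string `json:"sku"`
	Quantity int    `json:"quantity"`
	Note     string `json:"note"`
}

// validate mirrors the checks in handleCreateOrder; it returns an
// error message for the first failing rule, or "" if the request is valid.
func validate(req createOrderRequest) string {
	if strings.TrimSpace(req.UserID) == "" {
		return "userId is required"
	}
	if strings.TrimSpace(req.Sku) == "" {
		return "sku is required"
	}
	if req.Quantity <= 0 {
		return "quantity must be > 0"
	}
	return ""
}

func handler(w http.ResponseWriter, r *http.Request) {
	var req createOrderRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid json", http.StatusBadRequest)
		return
	}
	if msg := validate(req); msg != "" {
		http.Error(w, msg, http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()

	// A request with quantity 0 should be rejected before touching any DB.
	resp, _ := http.Post(srv.URL, "application/json",
		strings.NewReader(`{"userId":"u1","sku":"burger","quantity":0}`))
	fmt.Println("status:", resp.StatusCode)
}
```

Keeping validation in a pure function like this also makes it trivially unit-testable, which pays off once the rules grow.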
Step 6: Write the entrypoint
At this point, we have all the building blocks: database connection logic, the order store, and the HTTP server. The main function is where everything is assembled and the application actually starts.
In simple terms, this is the entry point of the service. Let's create: services/orders/cmd/orders/main.go
package main
import (
"log"
"net/http"
"os"
"strings"
"quickbite/orders/internal/db"
httpapi "quickbite/orders/internal/http"
"quickbite/orders/internal/store"
)
func main() {
// getting the port and database url from the `env`
port := getenv("PORT", "8080")
dsn := getenv("DATABASE_URL", "postgres://postgres:postgres@localhost:5432/orders?sslmode=disable")
// checking the database connection
pool := db.MustConnectWithRetry(dsn)
defer pool.Close()
// creating an order store
orderStore := store.NewOrdersStore(pool)
// creating http server
srv := httpapi.NewServer(orderStore)
addr := ":" + port
log.Printf("orders service listening on %s", addr)
// starting the server and listening to the port
log.Fatal(http.ListenAndServe(addr, srv.Routes()))
}
// helper function to get the env values
func getenv(key, def string) string {
v := strings.TrimSpace(os.Getenv(key))
if v == "" {
return def
}
return v
}
It begins by reading configuration from environment variables. The service uses PORT to decide which port to listen on and DATABASE_URL to connect to PostgreSQL. Default values are provided so the application can run locally without additional setup.
Next, the code connects to the database using MustConnectWithRetry, which keeps startup reliable in environments where Postgres may take a moment to become available. Once the connection pool is ready, it is passed into NewOrdersStore, which creates the store layer responsible for database operations.
The store is then injected into NewServer, which creates the HTTP API server. Finally, the application logs the address it is listening on and calls http.ListenAndServe to begin accepting requests.
Step 7: Add dependencies and build locally
Now that we have everything in place, let's add all the required dependencies.
We will add the chi router package and pgxpool, the PostgreSQL connection pool package from the pgx driver. Instead of opening a fresh DB connection for every query, your app asks the pool for a connection, uses it, and returns it for reuse.
From services/orders: run
go get github.com/go-chi/chi/v5
go get github.com/jackc/pgx/v5/pgxpool
go mod tidy
This installs the dependencies and records them in go.mod and go.sum.
Step 8: Dockerfile
Now that the application is working locally, the next step is to package it into a container image. This Dockerfile uses a multi-stage build, which means we build the Go binary in one stage and run it in a much smaller image in the final stage.
This approach keeps the final image lightweight, secure, and production-friendly.
# Build stage: compiles the Go binary.
# Start from the image that has the Go compiler. Name this stage as "builder".
FROM golang:1.23 AS builder
# Set the working directory in the container to /app.
WORKDIR /app
# Copy go.mod and go.sum in that dir
COPY go.mod go.sum ./
# Downloads the dependencies.
RUN go mod download
# Copy the remaining files
COPY . .
# Compile your go app into a single executable file.
RUN CGO_ENABLED=0 GOOS=linux go build -o orders ./cmd/orders
# Run stage: runs image with the compiled binary.
# gcr.io/distroless/static:nonroot -> minimal Linux image without any additional packages.
FROM gcr.io/distroless/static:nonroot
# Set the working directory in the container to /.
WORKDIR /
# Copy the built binary from stage 1 (builder) into this final image. The final image now contains basically only /orders (your executable).
COPY --from=builder /app/orders /orders
# run the program as non-root user and not as admin/root for better security. If someone gains access to the container, they won't have root access.
USER nonroot:nonroot
# EXPOSE is just a label saying this container listens on port 8080. You still need -p 8080:8080 to map the container port to the host port.
EXPOSE 8080
# When the container starts, runs "/orders"
ENTRYPOINT ["/orders"]
You might ask why we are using multi-stage build. The reason behind it is that it separates building from running. If you used only a single image, your final container would also contain: Go toolchain, module cache, source code, etc. That would make the image larger and increase the attack surface.
With a multi-stage build, the final image contains only what is needed to run the service.
Step 9: Docker Compose
At this point, we have a database container, a migration step, and the Go API service. Docker Compose lets us define all three in one file and run them together as a small local system.
This is useful because the application does not depend on just one container. It depends on Postgres being available, the schema being created, and only then the API starting up.
services:
# service name: db
db:
image: postgres:16
# these env variables are read by the postgres image on first startup to initialize the database.
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: orders
ports:
- "5432:5432" # left side is your machine's port, right side is container port.
# this is where postgres will store its data.
volumes:
- dbdata:/var/lib/postgresql/data
# healthcheck is Compose asking: Are you ready?
healthcheck:
# pg_isready checks if the DB is ready to accept connections.
test: ["CMD-SHELL", "pg_isready -U postgres -d orders"]
interval: 2s # check every 2 seconds.
timeout: 3s # each check can take up to 3 seconds.
retries: 30 # try 30 times before giving up.
# this is a one time job container that runs the migrations.
migrate:
image: migrate/migrate:v4.17.1 # popular migration CLI
# mounts your local migrations folder to the container.
volumes:
- ./services/orders/migrations:/migrations
command: ["-path", "/migrations", "-database", "postgres://postgres:postgres@db:5432/orders?sslmode=disable", "up"]
depends_on:
db:
condition: service_healthy # this makes sure the db is ready before running the migrations.
orders:
# compose will run docker build using Dockerfile in ./services/orders directory. It creates an image for your Go service.
build:
context: ./services/orders
environment:
PORT: "8080"
DATABASE_URL: "postgres://postgres:postgres@db:5432/orders?sslmode=disable"
ports:
- "8080:8080"
# compose will wait for the migrate job to finish before starting the orders container.
depends_on:
migrate:
# this makes sure the migrations are completed successfully before starting the orders container.
condition: service_completed_successfully
# define a volume called dbdata. without this, data would be lost when the container is stopped.
volumes:
dbdata:
Three services are defined here:
1) The db service
Starts a PostgreSQL database for the orders system. It also sets the default database name, username, and password, and exposes port 5432 so the database can be reached from outside the container if needed. A volume named dbdata is attached so the data is saved even if the container is removed and started again. The health check is especially important here because it keeps checking whether PostgreSQL is actually ready to accept connections, rather than just assuming the container is usable as soon as it starts.
2) The migrate service
It is responsible for applying the database migrations. In other words, it prepares the database schema before the application begins using it.
It mounts the local migrations folder into the container and runs the migration tool with the database connection string. This means the database tables and structure are created from the migration files automatically, instead of relying on manual setup. It also depends on the database being healthy first, which prevents the migration tool from running too early.
3) The orders service
It is the actual Go application. It is built from the code inside ./services/orders, gets its configuration through environment variables, and exposes port 8080 so the API can be reached from the browser, Postman, or another service. It depends on the migrate service completing successfully, which ensures the app only starts after the database is fully prepared.
Step 10: Run and test
Now it's time to put everything to the test and make sure it works as expected. After setting up the database, migrations, and API service, the next job is to start the system and verify that each part is responding correctly.
From the root of your app run below commands:
docker compose down -v
docker compose up --build
The first command stops any existing containers and removes their volumes. This helps you start from a clean state, which is useful during testing because it avoids leftover data from earlier runs.
The second command rebuilds the images and starts the services defined in docker-compose.yml. Rebuilding ensures your latest code and configuration changes are included.
Once the containers are running, test the health endpoint:
curl -s localhost:8080/healthz
This sends a request to the service’s health check endpoint. This endpoint is a quick way to verify that the application is up and responding before you test any business functionality. If this request fails, there is no point in testing the rest of the API yet.
Next, let's create an order and check that endpoint:
curl -s -X POST localhost:8080/v1/orders -H "Content-Type: application/json" -d '{"userId":"u1","sku":"burger","quantity":2,"note":"part 2 test"}'
This makes a POST request to the /v1/orders endpoint with a JSON payload containing:
userId: the user placing the order
sku: the item being ordered
quantity: how many units to order
note: an optional note attached to the order
If the create request returns something like {"id":1}, you can fetch that order with:
curl -s localhost:8080/v1/orders/1
What we learned
In this part, we built the Orders service. We did not keep everything in one file, and we did not rely on a magic setup. Instead, we created a clean structure (cmd/ and internal/), wrote the database queries in one place, and kept the HTTP logic separate from the database logic. This makes the code easier to read today and easier to grow later.
We also connected the service to PostgreSQL and introduced migrations, which is a big milestone. Migrations solve a common real-world problem: your database schema changes over time, and you need a safe and repeatable way to apply those changes across different machines and environments. By running migrations automatically with Docker Compose, we made our local setup much closer to what a production pipeline looks like.
At this point, we have a working base: we can create orders, store them in Postgres, and fetch them using an API. This is the foundation we will build on in the next parts. In the next post, we’ll introduce the next service (Inventory) and start connecting services together, which is where the system begins to feel like a real distributed application.
See you in the next one...




