
A Practical Journey from Application to Distributed Systems - Part 1

Project overview + Why this architecture


When I started this project, my goal was simple. I wanted to learn modern backend and distributed system tools in a way that feels real. I didn’t want to just watch tutorials or memorize commands. I wanted to build something and understand why each tool exists and how it works.

So I built this app, a food ordering system. And don't think it's JUST a food ordering system: there are a lot of concepts and tools working inside it, which we will learn about soon. There is going to be a mobile app, and the backend is in Go. Over time, I added tools like Docker, PostgreSQL, gRPC, Kafka (Redpanda), and Kubernetes (kind). I also added important reliability patterns like the Outbox pattern, Idempotency, and Dead Letter Queues (DLQ). These are the same ideas that show up in real production systems.

In this series, I’ll share what I built and what I learned, step by step, in the same order I learned it. Each post will explain the “why” first, then the “how”.

https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExdHlqNjV2Ym50ZnUzb3NhcW1rdHlydHRiMXV4bnN2cnJzZTBmemVvbyZlcD12MV9naWZzX3NlYXJjaCZjdD1n/5zf2M4HgjjWszLd4a5/giphy.gif


What we are building

QuickBite is a food ordering app with a very simple flow:

  1. A user places an order from the app.

  2. The backend creates the order and marks it as PENDING.

  3. Inventory checks if the item is available.

  4. The backend updates the order to CONFIRMED or CANCELLED.

This looks simple, doesn't it? But it’s a great project because it naturally leads to real backend topics.
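
To make that flow concrete, here is a tiny Go sketch of the order lifecycle. The names are just illustrative for now; the real schema comes later in the series.

```go
package orders

// OrderStatus captures the lifecycle from the flow above.
type OrderStatus string

const (
	StatusPending   OrderStatus = "PENDING"   // order accepted, waiting for inventory
	StatusConfirmed OrderStatus = "CONFIRMED" // inventory reserved the stock
	StatusCancelled OrderStatus = "CANCELLED" // inventory rejected the order
)

// Order is roughly what the Orders service will store in PostgreSQL.
type Order struct {
	ID       string
	UserID   string
	SKU      string
	Quantity int
	Status   OrderStatus
}
```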


The final shape of the system

By the end of this project, QuickBite will have these parts:

  • Mobile app: creates orders and shows the order status.

  • Orders service (Go): exposes a REST API for the mobile app. It stores orders in PostgreSQL.

  • Inventory service (Go): checks and reserves stock. It stores stock and reservations in PostgreSQL.

  • Kafka: used for events so services can work asynchronously.

  • Kubernetes: used to run everything in a local cluster, like a small production environment.

Even though I built it locally, it follows the same shape as a real production system. Later, I can move it to AWS, but the learning already happens with local tools.


Why do we need services and events?

At the beginning, it’s tempting to build everything inside one backend. That is fine for learning basics. But once you introduce “inventory”, things get more interesting.

Inventory is not just a function call. Inventory is a part of the business. It has its own rules, its own data, and its own failures. For example, even if the Orders service is working, Inventory might be down. Or Kafka might be down. Or the database might be slow. These failures are normal in real systems, and your code must handle them.

That is why I separated the backend into services and used events:

  • Orders creates an order and publishes OrderCreated.

  • Inventory reads OrderCreated, reserves stock, and publishes InventoryReserved or InventoryRejected.

  • Orders reads the inventory result and updates the order status.

Don't worry, we will learn in detail what Services and Events are. But this design makes the system more realistic. It also teaches the most important idea in distributed systems: your system must work even when parts of it fail.
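
To give you a feel for those events, here is a rough sketch of what they could look like as Go structs. The field names are my guesses for now; we will pin down the real event schema when we get to Kafka.

```go
package events

// OrderCreated is published by the Orders service after an order row is written.
type OrderCreated struct {
	EventID  string `json:"eventId"` // unique per event, useful for idempotency later
	OrderID  string `json:"orderId"`
	SKU      string `json:"sku"`
	Quantity int    `json:"quantity"`
}

// InventoryReserved and InventoryRejected are published by the Inventory service
// after it has checked stock for the order.
type InventoryReserved struct {
	EventID string `json:"eventId"`
	OrderID string `json:"orderId"`
}

type InventoryRejected struct {
	EventID string `json:"eventId"`
	OrderID string `json:"orderId"`
	Reason  string `json:"reason"`
}
```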


The main workflow (with an example)

Let’s say a user orders a burger with quantity 2.

  1. The mobile app sends a request:

    • POST /v1/orders

    • payload: { "userId": "u1", "sku": "burger", "quantity": 2 }

  2. Orders service writes a row in Postgres:

    • status = PENDING

  3. Orders publishes a Kafka event (we will learn about Kafka in detail, so don't stress if you don't know what it is yet):

    • topic: orders.v1

    • event type: OrderCreated

    • includes: orderId, sku, quantity, eventId

  4. Inventory consumes the event and checks stock in its own database:

    • If enough stock exists, it reserves it and stores a reservation record.

    • If there is not enough stock, it rejects the order.

  5. Inventory publishes back a result event:

    • topic: inventory.v1

    • InventoryReserved or InventoryRejected

  6. Orders consumes that result and updates the order row:

    • CONFIRMED if reserved

    • CANCELLED if rejected

This whole flow is async. That means the user might see PENDING for a short time. This is normal and expected. In the mobile app, I used polling to show the status updates. Later, we can improve this with streaming, but polling is perfect for learning.
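
Here is a minimal sketch of how steps 1 and 2 could look inside the Orders service, assuming the standard library HTTP server, a *sql.DB connection to Postgres, and github.com/google/uuid for the ID (all of these are my picks for the sketch). Error handling is trimmed, and the Kafka part is explained in the outbox section below.

```go
package api

import (
	"database/sql"
	"encoding/json"
	"net/http"

	"github.com/google/uuid"
)

type createOrderRequest struct {
	UserID   string `json:"userId"`
	SKU      string `json:"sku"`
	Quantity int    `json:"quantity"`
}

// createOrderHandler handles POST /v1/orders: it writes the order as PENDING
// and returns immediately. The inventory check happens asynchronously.
func createOrderHandler(db *sql.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req createOrderRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "invalid body", http.StatusBadRequest)
			return
		}

		orderID := uuid.NewString()
		if _, err := db.Exec(
			`INSERT INTO orders (id, user_id, sku, quantity, status)
			 VALUES ($1, $2, $3, $4, 'PENDING')`,
			orderID, req.UserID, req.SKU, req.Quantity,
		); err != nil {
			http.Error(w, "could not create order", http.StatusInternalServerError)
			return
		}

		w.WriteHeader(http.StatusAccepted)
		json.NewEncoder(w).Encode(map[string]string{"orderId": orderID, "status": "PENDING"})
	}
}
```

The mobile app then polls the order status until it changes from PENDING to CONFIRMED or CANCELLED.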


What makes this project 'production-style'?

A lot of demos stop at “it works once”. But real systems need more than that. In this app, I also implemented patterns that solve real production problems.

Outbox pattern (so events are not lost)

A common failure is that the service writes to the database successfully, but fails to publish the event. If that happens, the system becomes inconsistent. The order exists in the DB, but Inventory never hears about it.

To fix this, I used the Outbox pattern:

  • Save the event into an outbox_events table in the same database transaction as the order write.

  • A background worker publishes those outbox events to Kafka.

  • If Kafka is down, the outbox keeps retrying later.

This makes the system safer.
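
Here is a minimal sketch of that transactional write in Go, assuming an outbox_events table with id, topic, and payload columns (the real schema and the background worker are covered later in the series):

```go
package orders

import (
	"database/sql"

	"github.com/google/uuid"
)

// createOrderWithOutbox writes the order row and its OrderCreated event in the
// SAME Postgres transaction, so either both exist or neither does.
func createOrderWithOutbox(db *sql.DB, orderID, userID, sku string, qty int, payload []byte) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	if _, err := tx.Exec(
		`INSERT INTO orders (id, user_id, sku, quantity, status)
		 VALUES ($1, $2, $3, $4, 'PENDING')`,
		orderID, userID, sku, qty,
	); err != nil {
		return err
	}

	if _, err := tx.Exec(
		`INSERT INTO outbox_events (id, topic, payload)
		 VALUES ($1, 'orders.v1', $2)`,
		uuid.NewString(), payload,
	); err != nil {
		return err
	}

	// If Kafka is down, nothing is lost: the event just waits in outbox_events
	// until the background worker manages to publish it.
	return tx.Commit()
}
```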

Idempotency (so duplicates don’t break things)

Kafka delivery is typically “at least once”. That means a consumer can sometimes receive the same message more than once. This is not a bug. It’s part of the design.

So both services must handle duplicates safely:

  • Orders uses a processed_events table to ignore duplicate inventory events.

  • Inventory uses a reservations table keyed by order_id so it won’t reserve twice for the same order.
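
As a sketch, the duplicate check can be a single insert that the consumer runs inside its transaction. The exact table layout is an assumption here; we will define it properly later in the series.

```go
package consumer

import "database/sql"

// markProcessed records an event ID in processed_events. It returns false if
// the event was already processed, so the caller can skip the duplicate
// without doing the work twice.
func markProcessed(tx *sql.Tx, eventID string) (bool, error) {
	res, err := tx.Exec(
		`INSERT INTO processed_events (event_id) VALUES ($1)
		 ON CONFLICT (event_id) DO NOTHING`,
		eventID,
	)
	if err != nil {
		return false, err
	}
	n, err := res.RowsAffected()
	if err != nil {
		return false, err
	}
	return n == 1, nil // 0 rows affected means we have seen this eventId before
}
```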

DLQ (dead letter queue) for poison messages

Sometimes you get a message that can never be processed. For example, a message that is not valid JSON. If your consumer keeps retrying forever, it blocks progress.

So I added DLQ topics:

  • inventory.dlq.v1

  • orders.dlq.v1

Poison messages are sent to DLQ and committed, so the system keeps moving.
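
Here is a rough sketch of the poison-message path in the Inventory consumer. I am using the segmentio/kafka-go client here purely for illustration; the actual client and wiring are covered later in the series.

```go
package consumer

import (
	"context"
	"encoding/json"

	"github.com/segmentio/kafka-go"
)

// OrderCreated mirrors the event shape sketched earlier in this post.
type OrderCreated struct {
	EventID  string `json:"eventId"`
	OrderID  string `json:"orderId"`
	SKU      string `json:"sku"`
	Quantity int    `json:"quantity"`
}

// handleMessage tries to decode the event. If the payload can never be parsed,
// it parks the raw bytes on the DLQ (the dlq writer is assumed to be configured
// with Topic: "inventory.dlq.v1") so the consumer can commit and keep moving.
func handleMessage(ctx context.Context, msg kafka.Message, dlq *kafka.Writer) error {
	var event OrderCreated
	if err := json.Unmarshal(msg.Value, &event); err != nil {
		// Poison message: retrying forever would just block the partition.
		return dlq.WriteMessages(ctx, kafka.Message{Key: msg.Key, Value: msg.Value})
	}

	// ...normal processing: reserve stock, store the reservation, publish the result...
	return nil
}
```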

Again, you don't need to understand all of this in detail. I am just giving you an overview of the whole project. So just read through it and keep in mind that we are gonna be implementing these concepts.


What was the real learning part for me?

https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExbTNsYXNhenZlaXQ4cDMxMWZkbTJlaXdiNTFsMWsxa25xY3k1OGsxdyZlcD12MV9naWZzX3NlYXJjaCZjdD1n/xUXCTpkqwLeh2SrZ4T/giphy.gif

To make sure the system is not fragile, I tested failures on purpose. This is where I learned the most.

Here are examples of the failure tests I ran:

  • I stopped Kafka and created an order. The API still accepted the request because of the Outbox. When Kafka came back, events were published, and the order was completed.

  • I restarted Inventory and confirmed that stock and reservations were still correct because they were stored in Postgres.

  • I produced invalid JSON messages to Kafka and confirmed they went to the DLQ instead of breaking consumers.

These tests are not extra work. They are part of understanding how distributed systems behave.


What this series will cover next

In the next post, I’ll start with the very first step: building the Orders service in Go and connecting it to Postgres. I’ll explain:

  • project structure (cmd, internal)

  • basic REST endpoints

  • Dockerfile basics

  • Compose basics

  • the first DB schema and migrations

The goal is that you can follow the series and not just copy-paste to rebuild it, but actually understand why each piece exists.


One thing I learned while building this is that tools like Kafka and Kubernetes are not hard because of syntax. They feel hard because they introduce new ways of thinking. The best way to learn them is to build a project where failures happen naturally, and you fix them step by step.

That’s exactly what we’re doing here.

See you in the next blog...

https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExNWk1YW9yajVrbDluaDJ3ODVubGhrNGQ2amFiZjRrbXU5OTBjb3d0diZlcD12MV9naWZzX3NlYXJjaCZjdD1n/Py1LkJpCEtNiFo0Chr/giphy.gif