
A Practical Journey from Application to Distributed Systems - Part 4

Connect Orders ↔ Inventory (first working service-to-service flow)


In Part 3, we created the Inventory service and defined its API using gRPC and a .proto file. Both services were running locally, but they were still separate pieces.

In this part, we connect them for the first time.

The goal is simple: when a new order is created, the Orders service will call the Inventory service to reserve stock. If stock is available, the order can move forward. If not, the order will be cancelled.

This is an exciting step because QuickBite now starts behaving like a real multi-service backend, where one service depends on another to complete a request.


What we are building in this part

We will keep the same public API for the app:

  • POST /v1/orders

  • GET /v1/orders/{id}

So from the client’s point of view, nothing changes.

But now POST /v1/orders will do more work behind the scenes:

  1. create an order in Postgres with status PENDING

  2. call the Inventory service over gRPC: ReserveStock(sku, quantity)

  3. if Inventory reserves the stock, update the order to CONFIRMED

  4. if Inventory rejects the request, update the order to CANCELLED

This means the order status is no longer just stored data. It now reflects real business logic and the result of communication between two services.


Why we connect with gRPC first (before Kafka)

Later in the project, we will move this flow to Kafka and make it asynchronous. But before jumping there, it helps to first build a simple synchronous version that works.

That way, we can understand the basic interaction clearly:

  • Orders calls Inventory

  • Inventory responds

  • Orders updates its state based on the result

gRPC works well for this because it gives us a strict shared contract, typed request and response objects, and built-in support for deadlines and timeouts.

This part is really about learning how two services communicate directly, and how to handle success and failure in that setup, before moving to a more advanced event-driven design.


Step 1: Add "UpdateStatus" to the Orders store

The Orders service already knows how to create an order. Now it also needs a way to update the order status after the Inventory service responds.

Open:

services/orders/internal/store/orders.go

Add this method:

func (s *OrdersStore) UpdateStatus(ctx context.Context, id int64, status string) error {
	_, err := s.db.Exec(ctx, `UPDATE orders SET status = $2 WHERE id = $1`, id, status)
	return err
}

What this method does

This method updates the status column of an existing order.

It takes:

  • ctx - the request context

  • id - the order ID

  • status - the new status value, such as CONFIRMED or CANCELLED

Internally, it runs this SQL:

UPDATE orders SET status = $2 WHERE id = $1

So if the order ID is 1 and the new status is CONFIRMED, it updates that row in the database.
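
Later in this part, the handler will refer to status constants such as store.StatusConfirmed and store.StatusCancelled. If earlier parts of the series did not already define them, a minimal sketch for the store package (the exact names are an assumption; keep whatever your store already uses):

```go
// Order status values used by the Orders service.
// These names are an assumption; keep the ones defined in earlier parts if they exist.
const (
	StatusPending   = "PENDING"
	StatusConfirmed = "CONFIRMED"
	StatusCancelled = "CANCELLED"
)
```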


Step 2: Create an Inventory gRPC client inside Orders

Now the Orders service needs a way to talk to the Inventory service. To keep the rest of the code clean, we will create a small client wrapper.

First, create the folder:

mkdir -p services/orders/internal/inventory

Now create the file:

services/orders/internal/inventory/client.go

package inventory

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	inventoryv1 "example.com/quickbite/proto/gen/go/inventory/v1"
)

// Client wraps the underlying gRPC connection and the generated API client.
type Client struct {
	conn *grpc.ClientConn // the underlying gRPC connection
	api  inventoryv1.InventoryServiceClient // the generated gRPC client from the proto package
}

// New creates a new Inventory client.
func New(addr string) (*Client, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Connect to the Inventory service using DialContext. We wrap it in a
	// timeout so that if Inventory is down, Orders will not hang forever
	// while trying to connect.
	conn, err := grpc.DialContext(
		ctx,
		addr,
		// For local development we use insecure transport, because the
		// services communicate inside Docker Compose on an internal
		// network. Later we could switch to TLS for production.
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// By default gRPC may return immediately and connect in the
		// background, so WithBlock makes the call wait until the
		// connection is actually ready.
		grpc.WithBlock(),
	)
	if err != nil {
		return nil, err
	}

	return &Client{
		conn: conn,
		api:  inventoryv1.NewInventoryServiceClient(conn),
	}, nil
}

// Close closes the gRPC connection when the Orders service shuts down.
func (c *Client) Close() error {
	return c.conn.Close()
}

// Reserve asks Inventory to reserve stock. It is called by the Orders service.
func (c *Client) Reserve(ctx context.Context, sku string, qty int32) error {
	// Create a short per-request timeout.
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	// Call the Inventory gRPC method.
	_, err := c.api.ReserveStock(ctx, &inventoryv1.ReserveStockRequest{
		Sku:      sku,
		Quantity: qty,
	})
	return err
}

This file creates a small wrapper around the generated gRPC client.

Instead of letting the HTTP handler deal with low-level gRPC setup, we move that logic into a dedicated package. That way, the rest of the Orders service can use a simple method like:

client.Reserve(ctx, "burger", 2)

What to understand here

1) We use DialContext with a timeout

If the Inventory service is down, we do not want the Orders service to wait forever. The timeout makes failure happen quickly and clearly.

2) We use insecure transport for local development

Since the services are talking to each other inside Docker Compose, we keep it simple for now. Later, we can add TLS when we move to more production-like environments.

3) We treat success as “no error”

Notice that Reserve(...) does not return true or false.

Instead:

  • if stock is reserved successfully - return nil

  • if Inventory rejects the request - return an error

This keeps the Orders-side logic simple. The handler only needs to check:

  • no error - proceed

  • error - handle failure

Step 3: Update Orders main.go to create the Inventory client

Now that we have a small gRPC client wrapper for Inventory, the Orders service needs to create that client when it starts.

Open:

services/orders/cmd/orders/main.go

Add this import:

inv "example.com/quickbite/orders/internal/inventory"

Then create the client using an address from the environment:

invAddr := getenv("INVENTORY_GRPC_ADDR", "localhost:9090")

// ...db logic

// This creates gRPC client connection to the Inventory service.
invClient, err := inv.New(invAddr)
if err != nil {
	log.Fatalf("inventory client connect failed: %v", err)
}
defer invClient.Close()

This code creates the Inventory gRPC client when the Orders service starts. getenv("INVENTORY_GRPC_ADDR", "localhost:9090") reads the Inventory service address from an environment variable. If the variable is not set, it falls back to localhost:9090.

That makes local development easier, because the Orders service knows where to find Inventory by default. Later, in Docker Compose, this value will usually be something like:

inventory:9090

because services talk to each other using service names inside the Docker network.

Why do this in main()?

We create the Inventory client in main() because startup is where we usually create long-lived dependencies: database connections, external clients, configuration, and HTTP servers.

Then we pass those dependencies into the parts of the app that need them.

That keeps the code organized and makes the application easier to test and maintain.

Now the Orders service has an Inventory client, but the HTTP handlers still do not know about it yet. So the next step is to update the HTTP server constructor and pass this client into it. That way, when someone creates an order, the handler can call Inventory through gRPC.

Step 4: Update Orders HTTP server to use Inventory

Now we connect the Orders HTTP layer to the Inventory gRPC client.

Open:

services/orders/internal/http/http.go

4.1 Update the Server struct and constructor

Add imports:

inv "example.com/quickbite/orders/internal/inventory"

Then update the Server struct so it holds both:

  • the Orders store

  • the Inventory client

type Server struct {
	store *store.OrdersStore
	inv   *inv.Client
}

func NewServer(store *store.OrdersStore, invClient *inv.Client) *Server {
	return &Server{store: store, inv: invClient}
}

This means the HTTP server can now talk to both:

  • Postgres, through the store

  • Inventory, through the gRPC client

4.2 Update handleCreateOrder to call Inventory

Now we update the create-order handler so it does real business work.

The new flow is:

  1. create the order as PENDING

  2. call Inventory to reserve stock

  3. update the order status based on the result

Replace your create handler with this version:

func (s *Server) handleCreateOrder(w http.ResponseWriter, r *http.Request) {
	var req createOrderRequest
    // first we decode the JSON and check the fields.
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid json", http.StatusBadRequest)
		return
	}

	req.UserID = strings.TrimSpace(req.UserID)
	req.Sku = strings.TrimSpace(req.Sku)

	if req.UserID == "" {
		http.Error(w, "userId is required", http.StatusBadRequest)
		return
	}
	if req.Sku == "" {
		http.Error(w, "sku is required", http.StatusBadRequest)
		return
	}
	if req.Quantity <= 0 {
		http.Error(w, "quantity must be > 0", http.StatusBadRequest)
		return
	}

	ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
	defer cancel()

	// 1) Create order as PENDING
    // At this point, the order is stored in Postgres with a PENDING status.
	id, err := s.store.Create(ctx, req.UserID, req.Note, req.Sku, req.Quantity)
	if err != nil {
		http.Error(w, "db error", http.StatusInternalServerError)
		return
	}

	// 2) Ask Inventory to reserve stock
	err = s.inv.Reserve(ctx, req.Sku, int32(req.Quantity))
	if err != nil {
		// If reserve fails, mark order cancelled
		_ = s.store.UpdateStatus(ctx, id, store.StatusCancelled)

		// We handle "insufficient stock" differently from "inventory down"
		st, ok := status.FromError(err)
		if ok && st.Code() == codes.FailedPrecondition {
			http.Error(w, "insufficient stock", http.StatusConflict) // 409
			return
		}

		http.Error(w, "inventory unavailable", http.StatusServiceUnavailable) // 503
		return
	}

	// 3) Success -> CONFIRMED
	_ = s.store.UpdateStatus(ctx, id, store.StatusConfirmed)

	writeJSON(w, http.StatusCreated, createOrderResponse{ID: id})
}

You will need these extra imports in http.go:

"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"

Why we create the order first

You might wonder why we insert the order before reserving stock. The reason is that we want to create a real record as early as possible. That gives us:

  • a real order ID

  • a traceable record in the database

  • a history of what happened, even if Inventory fails

So if stock reservation fails, we still know the order was attempted and that it was later cancelled.

This becomes even more useful later when we move to Kafka and event-driven processing, because keeping a record early fits naturally with async workflows.

Step 5: Update Orders main.go server creation

Back in:

services/orders/cmd/orders/main.go

you previously created the HTTP server like this:

srv := httpapi.NewServer(orderStore)

Now that the Orders HTTP server also needs access to the Inventory gRPC client, update it to:

srv := httpapi.NewServer(orderStore, invClient)

Earlier, the HTTP server only needed the Orders store because it was only talking to Postgres. Now the create-order handler also needs to call the Inventory service through gRPC.

That means the HTTP server needs access to two dependencies: orderStore and invClient.

So this line is simply passing both of those dependencies into the server when it is created.

Step 6: Update Docker Compose (connect containers using service name)

Now we need to make sure the Orders service can reach the Inventory service when both are running inside Docker Compose.

In the root docker-compose.yml, make sure the Inventory service exists, like this:

inventory:
    build:
      context: ./services/inventory
    ports:
      - "9090:9090"

Then update the orders service environment to include:

orders:
    build:
        ...
    environment:
        ...
        INVENTORY_GRPC_ADDR: "inventory:9090"

Why inventory:9090 works

Inside Docker Compose, each service name becomes a hostname. So inventory resolves to the inventory container automatically. This is one of the most useful Compose features.

That means the service named inventory can be reached by other containers using the hostname inventory. So when Orders connects to inventory:9090, it is connecting to the inventory container on port 9090.

Step 7: Run the full system

Now it is time to run the full system and see both services working together.

From the repo root, start fresh with:

docker compose down -v

Issues you might run into while running the containers:

  1. Missing Go module dependencies
  • The Orders service started importing:

    • example.com/quickbite/proto/...

    • google.golang.org/grpc

    but those dependencies were not listed in services/orders/go.mod.

  • That meant the Orders module knew about the import paths in code, but its module file did not yet declare them properly.

To fix this: I added the missing dependencies:

  • example.com/quickbite/proto

  • google.golang.org/grpc

In services/orders/go.mod file

I also added

replace example.com/quickbite/proto => ../../proto

This follows the same local-development pattern used by Inventory. It tells Go to use the local proto folder instead of looking for that module somewhere remote.

After that, I ran: go mod tidy so go.mod and go.sum were updated correctly.
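
After go mod tidy, services/orders/go.mod ends up looking roughly like this (the versions and go directive below are placeholders; go mod tidy pins the real ones, and the module path assumes the import paths used above):

```
module example.com/quickbite/orders

go 1.22

require (
	example.com/quickbite/proto v0.0.0
	google.golang.org/grpc v1.60.0 // placeholder; go mod tidy pins the real version
)

replace example.com/quickbite/proto => ../../proto
```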

  2. Docker build could not see the shared Proto module
  • The Orders Docker build was using:

    context: ./services/orders
    
  • That meant Docker only saw files inside the services/orders directory.

  • But the Orders service also depends on the shared proto module, which lives outside that folder. So during the build, Docker could not access the proto directory. That is why the build failed.

To fix that:

I updated the Orders Dockerfile to match the Inventory Dockerfile structure.

The main changes were:

  • use repo-root paths

  • set WORKDIR to /src/services/orders

  • copy the shared Proto module into /src/proto

  • build the binary with: go build -o /orders

Update the Orders Dockerfile (services/orders/Dockerfile) with this:

# Build stage: compiles the Go binary.
# Start from an image that has the Go compiler. Name this stage "builder".
FROM golang:1.26.1 AS builder
# Set the working directory in the container to /src/services/orders.
WORKDIR /src/services/orders
# Copy go.mod and go.sum in that dir
COPY services/orders/go.mod services/orders/go.sum ./
COPY proto/go.mod proto/go.sum /src/proto/
# Downloads the dependencies.
RUN go mod download 
# Copy the remaining files
COPY proto /src/proto
COPY services/orders .
# Compile your go app into a single executable file.
RUN CGO_ENABLED=0 GOOS=linux go build -o /orders ./cmd/orders

# Run stage: runs image with the compiled binary.
# gcr.io/distroless/static:nonroot -> minimal Linux image without any additional packages.
FROM gcr.io/distroless/static:nonroot 
# Set the working directory in the container to /.
WORKDIR / 
# Copy the built binary from stage 1 (builder) into this final image. So now the final image basically contains just /orders (your executable).
COPY --from=builder /orders /orders
# run the program as non-root user and not as admin/root for better security. If someone gains access to the container, they won't have root access.
USER nonroot:nonroot 
# EXPOSE is just a label saying this container will listen on port 8080. You still need -p 8080:8080 to map the container port to the host port.
EXPOSE 8080 
# When the container starts, runs "/orders"
ENTRYPOINT ["/orders"]

Also in docker-compose.yml change the Order build section to use the repo root as the build context:
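
For reference, that build section ends up looking something like this (the dockerfile path assumes the file edited above):

```yaml
orders:
    build:
      context: .                               # repo root, so the build can see both services/orders and proto
      dockerfile: services/orders/Dockerfile
```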

This is important because now the Docker build can see both: services/orders and proto. Without that, the Dockerfile would still not be able to access the shared Proto module.

Now finally run:

docker compose up --build

Make sure everything is running:

docker compose ps

You should see your main containers running, such as: orders, inventory, db, redpanda.

Step 8: Test it end-to-end

Now let’s test the full flow and make sure Orders and Inventory are actually working together.

8.1 Successful order

Create an order:

curl -s -X POST localhost:8080/v1/orders \
  -H "Content-Type: application/json" \
  -d '{"userId":"u1","sku":"burger","quantity":2,"note":"grpc test"}'

If everything works, you should get a response like:

{"id":1}

Now fetch that order:

curl -s localhost:8080/v1/orders/1

You should see:

  • status: CONFIRMED

8.2 Insufficient stock (should cancel)

Now try creating an order with a quantity that is too large:

curl -i -X POST localhost:8080/v1/orders \
  -H "Content-Type: application/json" \
  -d '{"userId":"u2","sku":"burger","quantity":99,"note":"too many"}'

This time, Inventory should reject the reservation because there is not enough stock. You should get a 409 Conflict response with the message insufficient stock.

8.3 Inventory down (should return 503)

Now let’s test what happens when one service is unavailable.

In another terminal, stop Inventory.

docker compose stop inventory

Then try creating another order:

curl -i -X POST localhost:8080/v1/orders \
  -H "Content-Type: application/json" \
  -d '{"userId":"u3","sku":"pizza","quantity":1,"note":"inventory down"}'

This time, Orders cannot reach Inventory at all. You should see 503 Service Unavailable.

That means the Orders service handled the failure in a controlled way instead of hanging forever or crashing.

Why this test matters

This is an important moment because it introduces a core idea in distributed systems:

services can fail, and your application must handle that cleanly

In a single-service app, failures are already important. In a multi-service system, they become even more important because one service may be healthy while another is down or unreachable.

That is why this step matters so much. It teaches us how to think about:

  • success

  • business rejection, such as insufficient stock

  • infrastructure failure, such as an unavailable service

What we learned in Part 4

In this part, we connected two services and handled real business outcomes.

We learned how to:

  • create a gRPC client in Go

  • call another service with timeouts

  • handle gRPC errors and map them to HTTP responses

  • use Docker Compose networking for service discovery

  • update order state based on service-to-service results

This is the first truly distributed step in the project. Orders is no longer working as a standalone service. It now depends on Inventory to complete part of its workflow.

Conclusion

We now have a working flow where creating an order actually checks inventory and updates the order status. This is a big milestone because it introduces service-to-service communication and real failure cases.

But this approach still has one big limitation: it is synchronous. If Inventory is slow or down, Orders is directly affected. In real systems, this often becomes a bottleneck.

That is exactly why the next part matters.

In Part 5, we will introduce Kafka and move this flow to an event-driven approach. That allows Orders to accept requests quickly, and Inventory to process them asynchronously, with better reliability and scaling options.