A Practical Journey from Application to Distributed Systems - Part 3
Inventory service + gRPC + proto contracts

In Part 2, we built the Orders service with proper folder structure, Postgres, migrations, and Docker Compose. That gave us a solid foundation.
In this part, we will add the second service: Inventory. We will also introduce gRPC and protobuf (proto files). This is the first time we connect services in a more structured way.
Even if you haven’t used gRPC before, don’t worry. I’ll explain the purpose and the moving parts in a simple way. This is going to be a really interesting one, so let’s jump right into it.
Why add Inventory as a separate service?
In the real world, inventory is not just a function. It has its own data and its own rules. For example:
Inventory needs to know how many items are left.
Inventory must stop two orders from reserving the same last item at the same time.
Inventory might be slow or unavailable, and Orders should not crash because of it.
By separating Inventory into its own service, we learn service boundaries early. This is important because later we will add Kafka and make the workflow async. But before we go async, it helps to first understand a simple service-to-service call.
What is gRPC and why do we use it?
Most apps start with REST because it is simple and easy to understand. REST usually sends JSON, which is human readable and works really well for client apps like mobile and web apps. In our project, we are still using REST for the client side.
But inside the backend, services also need to talk to each other. For example, the Orders service needs to ask the Inventory service to reserve stock. For this kind of service-to-service communication, we want something more strict and reliable than manually writing JSON requests and responses.
That is where gRPC helps.
1. With gRPC, we first define the API in a .proto file. This file describes what functions a service provides, what input it expects, and what output it returns. Both services then generate code from that file.
2. Because both sides use code generated from the same .proto file, they both follow the exact same request and response structure. This reduces mistakes and keeps communication clear.
3. gRPC is fast and efficient, but the biggest reason we use it here is not speed. The biggest benefit is that both services follow one shared definition, so they always agree on how to talk to each other.
You can think of a .proto file as a shared agreement between services.
What will we build in this part?
In this part, we will add:
A new service: inventory
A shared proto module that contains the contract
A gRPC server in Inventory
(For now) Inventory will keep stock in memory so we can focus on gRPC
We will run Orders + Inventory with Docker Compose
In later parts, Inventory will get its own Postgres and become durable. But here we start simple.
Step 1: Create the proto module
I already explained the overall concept of protobuf above, but if you want to go deeper, I recommend watching this video on why we need .proto files and what their advantages are over JSON and XML: What is Protocol Buffer
Now, once you are done watching it, from the repo root run:
mkdir -p proto/inventory/v1
cd proto
go mod init quickbite/proto
cd ..
This creates a separate Go module for proto-generated code. It helps because both services can depend on it without copying files around.
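At this point the repo layout looks roughly like this (only the parts relevant to this step are shown):

```
quickbite/
  proto/
    go.mod
    inventory/
      v1/          <- the .proto contract will live here
  services/
    orders/
      ...
```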
Step 2: Define the Inventory gRPC contract
This file defines how the Inventory service and any other service will communicate with each other using gRPC.
This .proto file is acting like a shared agreement between services. It tells both sides:
what service exists
what function it provides
what data should be sent
what data should be returned
Instead of manually guessing request and response formats, both services follow the same definition.
Create:
proto/inventory/v1/inventory.proto
syntax = "proto3"; // Tells Protobuf which version of the language we are using.

// The namespace of this proto. It helps organize things and avoid naming
// conflicts: inventory.v1 means "inventory, version 1".
package inventory.v1;

// This option is specifically for Go code generation. When you generate Go code
// from this .proto file, it tells the generator where the Go package should live.
option go_package = "quickbite/proto/gen/go/inventory/v1;inventoryv1";

// This defines a gRPC service. Inside it, we define one RPC method, ReserveStock,
// meaning the InventoryService has one function called ReserveStock.
service InventoryService {
  // The actual function definition: input type = ReserveStockRequest,
  // output type = ReserveStockResponse. In plain language: the client sends a
  // ReserveStockRequest, and the server sends back a ReserveStockResponse.
  rpc ReserveStock(ReserveStockRequest) returns (ReserveStockResponse);
}

// A message is like a data structure. This one has two fields: sku and quantity.
message ReserveStockRequest {
  string sku = 1;     // These numbers are field numbers.
  int32 quantity = 2; // Protobuf uses them internally when encoding/decoding data.
}

// The response structure.
message ReserveStockResponse {
  bool reserved = 1;
}
This file is basically saying:
We have an Inventory service
It provides a function called ReserveStock
That function takes a request with sku and quantity
It returns a response telling us whether the reservation worked
Step 3: Generate Go code from the proto file (using Buf)
Buf is a tool for working with .proto files. We use Buf because it keeps protobuf tooling consistent across machines. It also avoids “it works on my laptop” problems.
Instead of everyone using different local setups, Buf makes the code generation process predictable and consistent.
Create: proto/buf.yaml
version: v1
This tells Buf that this folder is a Buf module. You can think of it as a small config file that marks this folder as the place where our protobuf definitions live.
Create: proto/buf.gen.yaml
version: v1
plugins:
  - plugin: buf.build/protocolbuffers/go
    out: gen/go
    opt: paths=source_relative
  - plugin: buf.build/grpc/go
    out: gen/go
    opt: paths=source_relative
This file tells Buf how to generate code.
It uses two plugins:
buf.build/protocolbuffers/go generates the normal Go code for protobuf messages.
buf.build/grpc/go generates the gRPC-specific Go code, such as the client and server interfaces.
The generated files will be written inside proto/gen/go. The option paths=source_relative means Buf keeps the generated folder structure similar to the original .proto file structure.
Now run the command below to generate the code:
docker run --rm -v "$(pwd)":/workspace -w /workspace/proto bufbuild/buf:1.34.0 generate
This command runs Buf inside Docker, so you do not need to install Buf locally on your machine. That makes the setup easier and keeps the generation process the same for everyone.
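If generation succeeds, you should see two generated files per proto file. The names below follow the standard protoc-gen-go and protoc-gen-go-grpc naming convention:

```
proto/gen/go/inventory/v1/inventory.pb.go        (the message types)
proto/gen/go/inventory/v1/inventory_grpc.pb.go   (the client and server interfaces)
```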
Step 4: Create a Go workspace (go.work)
go.work is a file that helps Go work with multiple local modules together during development.
You might ask: why do we need it?
At this point in the project, we have separate modules: services/orders and proto.
Each one has its own go.mod file, so by default Go treats them as separate projects. But our Orders service wants to import generated code from the Proto module locally.
Without a go.work file, Go may think:
“Where do I download this module from?”
and it may expect that module to exist on GitHub or somewhere else online.
That is not what we want during local development. We want Go to understand that both modules already exist inside the same repo.
So let's create one. From the repo root run:
go work init ./services/orders ./proto
It creates a go.work file at the repo root. What it basically tells Go is:
“These local folders are Go modules. While I am developing, treat them as part of one workspace.”
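The resulting go.work file should look roughly like this (the go version line depends on your local toolchain, so yours may differ):

```
go 1.26

use (
	./services/orders
	./proto
)
```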
Step 5: Make Orders depend on the proto module
Now we need to tell the Orders module that it depends on the Proto module.
From repo root:
cd services/orders
go mod edit -require=quickbite/proto@v0.0.0
go mod tidy
cd ../..
This adds the Proto module as a dependency of the Orders module.
In simple words, this tells Go:
“The Orders service needs code from the Proto module.”
That is important because later the Orders service will import the generated gRPC and protobuf code from the Proto module.
Why @v0.0.0?
We are still working locally, and this module is not published anywhere yet. So v0.0.0 is just used as a placeholder version.
Sometimes when you run this step, Go may show an error about the module path being invalid. This happens because names like:
quickbite/proto
quickbite/orders
quickbite/inventory
are not valid full Go module paths. Go module paths usually need a proper domain-style prefix, such as:
example.com/quickbite/proto
example.com/quickbite/orders
example.com/quickbite/inventory
To fix this issue, we need to update the module paths to valid example.com/quickbite/... paths and regenerate the protos.
In inventory.proto, in every go.mod, in main.go, in http.go, and anywhere else you see quickbite/proto, quickbite/orders, or quickbite/inventory, replace them with example.com/quickbite/proto, example.com/quickbite/orders, and example.com/quickbite/inventory.
Also update any old Go imports to use the new example.com/... paths.
After changing the module paths, regenerate the protobuf Go code so the generated files use the updated import paths.
cd proto
docker run --rm -v "$(pwd)/..":/workspace -w /workspace/proto bufbuild/buf:1.34.0 generate
This regenerates the Go code from your .proto file with the new package paths.
In short: in this step we made Orders depend on the Proto module so it can use the generated gRPC code. If Go complains about invalid module paths, we fix that by switching to proper example.com/quickbite/... module names and regenerating the protobuf code.
Step 6: Create the Inventory service module
Now we will create the Inventory service, just like we created the Orders service.
First, create the folder for the Inventory service:
mkdir -p services/inventory/cmd/inventory
cd services/inventory
This gives us a standard Go service structure. The cmd/inventory folder will later contain the main.go file that starts the Inventory service.
Now initialize the Inventory module:
go mod init example.com/quickbite/inventory
This creates a go.mod file for the Inventory service.
In simple words, this tells Go:
“This folder is its own Go module.”
Next, add the gRPC library:
go get google.golang.org/grpc
This adds the Go gRPC runtime, which we need because the Inventory service will expose a gRPC server.
Now add protobuf support:
go get google.golang.org/protobuf
This is needed because the Go code generated from the .proto file depends on the protobuf library.
Now tell the Inventory module that it depends on the shared Proto module:
go mod edit -require=example.com/quickbite/proto@v0.0.0
Why do we need it? Because the generated gRPC and protobuf code lives inside the Proto module, the Inventory service needs to declare:
“I depend on the Proto module.”
That way, Inventory can import and use the generated code later.
Again, v0.0.0 is just a placeholder version because we are working locally.
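After these commands (and go mod tidy, which we run next), the Inventory go.mod should look roughly like this. The library versions shown are illustrative and will differ on your machine:

```
module example.com/quickbite/inventory

go 1.26

require (
	example.com/quickbite/proto v0.0.0
	google.golang.org/grpc v1.65.0
	google.golang.org/protobuf v1.34.2
)
```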
Finally, run go mod tidy, then add Inventory to the workspace:
go work use ./services/inventory
This updates go.work so Go treats Inventory as part of the same local workspace as Orders and Proto.
To conclude, In this step, we created the Inventory service as a separate Go module, added the dependencies it needs for gRPC and protobuf, connected it to the shared Proto module, and added it to the local Go workspace.
Step 7: Write the Inventory gRPC server
Now we will create the Inventory service itself. This service will:
listen on port 9090
expose the gRPC method ReserveStock
keep stock in memory for now
safely update stock when requests come in
Create: services/inventory/cmd/inventory/main.go
package main

import (
	"context"
	"log"
	"net"
	"sync"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	inventoryv1 "example.com/quickbite/proto/gen/go/inventory/v1" // generated Go code from the proto module
)

// This is our actual gRPC server.
type inventoryServer struct {
	inventoryv1.UnimplementedInventoryServiceServer
	mu    sync.Mutex       // lock for handling multiple requests at the same time
	stock map[string]int32 // inventory data; just an in-memory Go map for now
}

// newInventoryServer creates the server with starting stock.
func newInventoryServer() *inventoryServer {
	return &inventoryServer{
		stock: map[string]int32{
			"burger": 5,
			"pizza":  10,
		},
	}
}

// ReserveStock is the server implementation of the method defined in the proto.
// Whenever a client calls ReserveStock, this Go function runs.
func (s *inventoryServer) ReserveStock(ctx context.Context, req *inventoryv1.ReserveStockRequest) (*inventoryv1.ReserveStockResponse, error) {
	// First, check that the request is valid.
	if req.GetSku() == "" {
		return nil, status.Error(codes.InvalidArgument, "sku is required")
	}
	if req.GetQuantity() <= 0 {
		return nil, status.Error(codes.InvalidArgument, "quantity must be > 0")
	}

	// Then lock the mutex, check how much stock is available, and compare it
	// to the requested quantity.
	s.mu.Lock()
	defer s.mu.Unlock()

	// check availability
	available := s.stock[req.GetSku()]
	if available < req.GetQuantity() {
		return nil, status.Error(codes.FailedPrecondition, "insufficient stock")
	}

	// deduct stock
	s.stock[req.GetSku()] = available - req.GetQuantity()
	return &inventoryv1.ReserveStockResponse{Reserved: true}, nil
}

// main starts the server. It opens port 9090, creates a new gRPC server,
// registers our inventoryServer, and starts listening for requests.
// Once this program is running, the Inventory service is ready to accept
// gRPC calls on port 9090.
func main() {
	lis, err := net.Listen("tcp", ":9090")
	if err != nil {
		log.Fatalf("listen failed: %v", err)
	}
	grpcServer := grpc.NewServer()
	inventoryv1.RegisterInventoryServiceServer(grpcServer, newInventoryServer())
	log.Println("inventory gRPC listening on :9090")
	log.Fatal(grpcServer.Serve(lis))
}
This file creates a simple gRPC server for the Inventory service. It uses the Go code generated from our .proto file, which is why we import:
inventoryv1 "example.com/quickbite/proto/gen/go/inventory/v1"
That generated package contains the request and response types, along with the gRPC server interface we need to implement.
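The mutex is doing real work here. The sketch below, a simplified stand-alone model of the same check-then-deduct pattern (not the actual service code), shows why: with 5 burgers in stock and 10 concurrent reservations of 1 each, exactly 5 succeed and stock never goes negative. Without the lock, two goroutines could both read "1 left" and both deduct, overselling the last item.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// store models the server's in-memory stock with the same mutex guard.
type store struct {
	mu    sync.Mutex
	stock map[string]int32
}

// reserve atomically checks availability and deducts stock. Holding the lock
// across both the read and the write is what prevents overselling.
func (s *store) reserve(sku string, qty int32) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.stock[sku] < qty {
		return false
	}
	s.stock[sku] -= qty
	return true
}

func main() {
	s := &store{stock: map[string]int32{"burger": 5}}
	var wg sync.WaitGroup
	var succeeded atomic.Int32

	// 10 goroutines race to reserve 1 burger each; only 5 can win.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if s.reserve("burger", 1) {
				succeeded.Add(1)
			}
		}()
	}
	wg.Wait()
	fmt.Println("successful reservations:", succeeded.Load(), "remaining:", s.stock["burger"])
	// prints: successful reservations: 5 remaining: 0
}
```

This is exactly the race mentioned back at the start of this part: inventory must stop two orders from reserving the same last item at the same time.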
Step 8: Add Inventory Dockerfile
Now we need a Dockerfile for the Inventory service so we can package it into a container image.
Create: services/inventory/Dockerfile
# Start from an image that already has Go installed.
FROM golang:1.26.1 AS builder

# Set the working directory inside the container.
WORKDIR /src/services/inventory

# Copy only the Inventory module dependency files first.
COPY services/inventory/go.mod services/inventory/go.sum ./

# Copy the Proto module's go.mod and go.sum as well, since Inventory depends on it.
COPY proto/go.mod proto/go.sum /src/proto/

# Download Go dependencies. At this point, Docker only has the module files,
# not all the source code yet.
RUN go mod download

# Now copy the full Proto module source into the container.
COPY proto /src/proto

# Copy the full Inventory service source code into the current working directory.
COPY services/inventory .

# Compile the app.
RUN CGO_ENABLED=0 GOOS=linux go build -o /inventory ./cmd/inventory

# Now start a completely new, minimal runtime image.
FROM gcr.io/distroless/static:nonroot

# Set the working directory to root.
WORKDIR /

# Copy the compiled binary from the builder stage into the final runtime image.
COPY --from=builder /inventory /inventory

# Run the container as a non-root user.
USER nonroot:nonroot

EXPOSE 9090

# Tell Docker what to run when the container starts.
ENTRYPOINT ["/inventory"]
This Dockerfile builds the Inventory service into a container image.
It uses a two-stage build:
builder stage - compiles the Go application
runtime stage - runs only the compiled binary in a very small image
This keeps the final image smaller, cleaner, and more secure.
Step 9: Update the docker-compose.yml
Now we add the Inventory service to docker-compose.yml so Docker Compose can build and run it for us.
  inventory:
    build:
      context: .
      dockerfile: services/inventory/Dockerfile
    ports:
      - "9090:9090"
This tells Docker Compose:
where to find the Dockerfile for Inventory
how to build the Inventory image
which port should be exposed
Step 10: Testing
Now let’s test whether the Inventory service is working correctly.
First, start the project with:
docker-compose up --build
If everything starts correctly, you should see a log message like this:
inventory gRPC listening on :9090
That tells us the Inventory service is running and listening for gRPC requests on port 9090.
And because it is gRPC, you can’t easily test it with curl.
Why can’t we easily use curl with gRPC?
REST usually works with: HTTP, JSON, and normal endpoints like /orders.
gRPC is different. It uses:
protobuf messages
a stricter contract
a different communication format
So for testing gRPC services, we usually use a tool like grpcurl.
On Mac, install it with: brew install grpcurl
Then run the command below:
grpcurl \
  -plaintext \
  -import-path proto \
  -proto inventory/v1/inventory.proto \
  -d '{"sku":"burger","quantity":2}' \
  localhost:9090 \
  inventory.v1.InventoryService/ReserveStock
Output: you should see a JSON response containing "reserved": true, which means the stock was reserved.
What we learned
In this part, we added a second service and introduced the idea of strict service contracts.
We learned:
why .proto files are useful for service-to-service APIs
how code generation works, from .proto files to Go code
how to run a gRPC server in Go
how to keep multiple services aligned by sharing a Proto module
how a go.work workspace makes local development easier
At first, this can feel like a lot of moving parts. But once everything is connected, the benefit becomes clear. Instead of each service guessing request and response shapes, both services follow one shared contract. That makes communication safer, cleaner, and easier to maintain.
Conclusion
At this point, we now have two services running locally:
Orders, which uses REST
Inventory, which uses gRPC
We also have a shared contract that defines the Inventory API in a clean and strict way.
This is an important step because it introduces a real pattern used in many backend systems: public APIs are often kept simple and client-friendly with REST, while internal service-to-service communication uses stronger contracts with gRPC.
In the next part, we will connect Orders and Inventory together. Once they start talking to each other, we will quickly run into the kinds of problems that make distributed systems interesting: timeouts, failures, retries, and handling requests that are still in progress. That will naturally lead us to the next step in the project: Kafka and event-driven workflows.
See you in the next one...




