Episode 6 — Scaling, Reliability, Microservices & Web3 / 6.2 — Building and Orchestrating Microservices

6.2.a — Independent Services Setup

In one sentence: Each microservice runs as its own process with its own codebase, dependencies, configuration, port, and database — allowing independent development, deployment, and scaling.

Navigation: <- 6.2 Overview | 6.2.b — API Gateway Pattern ->


1. What Makes a Service "Independent"?

A microservice is truly independent when you can:

  • Build it without compiling other services
  • Deploy it without redeploying other services
  • Scale it without scaling other services
  • Choose its tech stack without affecting other services
  • Restart it without bringing down other services

This independence comes from physical separation — separate codebases, separate processes, separate data stores, and communication only through well-defined APIs or events.

Monolith:                         Microservices:
┌─────────────────────┐           ┌──────────┐ ┌──────────┐ ┌──────────┐
│  Users  │  Orders   │           │  User    │ │  Order   │ │  Notif   │
│  ───────┼────────   │           │  Service │ │  Service │ │  Service │
│  Notifs │  Payments │           │  :4001   │ │  :4002   │ │  :4003   │
│  ───────┼────────   │           │  own DB  │ │  own DB  │ │  own DB  │
│    ONE PROCESS      │           └──────────┘ └──────────┘ └──────────┘
│    ONE DATABASE     │            3 processes   3 databases   3 deploys
└─────────────────────┘

2. Project Structure: Monorepo vs Polyrepo

There are two main ways to organize microservice code.

Monorepo (all services in one repository)

my-platform/
├── services/
│   ├── user-service/
│   │   ├── package.json
│   │   ├── .env
│   │   ├── Dockerfile
│   │   └── src/
│   │       ├── index.js
│   │       ├── routes/
│   │       └── models/
│   ├── order-service/
│   │   ├── package.json
│   │   ├── .env
│   │   ├── Dockerfile
│   │   └── src/
│   │       ├── index.js
│   │       ├── routes/
│   │       └── models/
│   └── notification-service/
│       ├── package.json
│       ├── .env
│       ├── Dockerfile
│       └── src/
│           ├── index.js
│           ├── routes/
│           └── models/
├── shared/
│   └── utils/
│       ├── logger.js
│       └── healthcheck.js
├── docker-compose.yml
└── README.md
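The `shared/utils/logger.js` entry in the tree above would typically be a thin structured logger shared by all services. A minimal sketch (the exact log-line shape here is our assumption, not a fixed convention):

```javascript
// shared/utils/logger.js — hypothetical shared structured logger.
// One JSON object per line keeps logs easy to aggregate across services.
function formatLine(service, level, message) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    service,
    level,
    message,
  });
}

function createLogger(service) {
  return {
    info: (message) => console.log(formatLine(service, 'info', message)),
    error: (message) => console.error(formatLine(service, 'error', message)),
  };
}

module.exports = { createLogger, formatLine };
```

Each service would create its own logger, e.g. `const log = createLogger('user-service')` — the service name is baked into every line, which matters once logs from all services land in one aggregator.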

Polyrepo (each service in its own repository)

github.com/myorg/user-service/
github.com/myorg/order-service/
github.com/myorg/notification-service/
github.com/myorg/shared-utils/        (npm package)

Comparison

Factor                 | Monorepo                          | Polyrepo
-----------------------|-----------------------------------|---------------------------------
Setup complexity       | Lower — one clone                 | Higher — multiple repos
Code sharing           | Easy — direct imports             | Harder — publish shared packages
CI/CD                  | Must filter changes per service   | Natural isolation
Dependency management  | Shared lockfile possible          | Fully independent
Team autonomy          | Lower — shared repo rules         | Higher — team owns repo
Best for               | Small-medium teams, < 10 services | Large orgs, many teams

For this guide we use a monorepo because it is simpler to demonstrate.


3. Building Three Independent Express Services

3.1 User Service (port 4001)

// services/user-service/src/index.js
const express = require('express');
const app = express();
app.use(express.json());

const PORT = process.env.PORT || 4001;
const SERVICE_NAME = 'user-service';

// In-memory store (replace with database in production)
const users = [
  { id: '1', name: 'Alice', email: 'alice@example.com' },
  { id: '2', name: 'Bob', email: 'bob@example.com' },
];

// Health check — every service needs one
app.get('/health', (req, res) => {
  res.json({
    service: SERVICE_NAME,
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
  });
});

// Get all users
app.get('/users', (req, res) => {
  console.log(`[${SERVICE_NAME}] GET /users`);
  res.json({ data: users });
});

// Get user by ID
app.get('/users/:id', (req, res) => {
  const user = users.find((u) => u.id === req.params.id);
  if (!user) {
    return res.status(404).json({ error: 'User not found' });
  }
  res.json({ data: user });
});

// Create user
app.post('/users', (req, res) => {
  const { name, email } = req.body;
  if (!name || !email) {
    return res.status(400).json({ error: 'name and email required' });
  }
  const newUser = {
    id: String(users.length + 1),
    name,
    email,
  };
  users.push(newUser);
  console.log(`[${SERVICE_NAME}] Created user: ${newUser.id}`);
  res.status(201).json({ data: newUser });
});

app.listen(PORT, () => {
  console.log(`[${SERVICE_NAME}] running on port ${PORT}`);
});
// services/user-service/package.json
{
  "name": "user-service",
  "version": "1.0.0",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js",
    "dev": "nodemon src/index.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  },
  "devDependencies": {
    "nodemon": "^3.0.0"
  }
}
# services/user-service/.env
PORT=4001
SERVICE_NAME=user-service
DB_URL=mongodb://localhost:27017/users_db

3.2 Order Service (port 4002)

// services/order-service/src/index.js
const express = require('express');
const axios = require('axios');
const app = express();
app.use(express.json());

const PORT = process.env.PORT || 4002;
const SERVICE_NAME = 'order-service';
const USER_SERVICE_URL = process.env.USER_SERVICE_URL || 'http://localhost:4001';

const orders = [];

app.get('/health', (req, res) => {
  res.json({
    service: SERVICE_NAME,
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
  });
});

// Get all orders
app.get('/orders', (req, res) => {
  console.log(`[${SERVICE_NAME}] GET /orders`);
  res.json({ data: orders });
});

// Create order — calls User Service to validate user exists
app.post('/orders', async (req, res) => {
  const { userId, product, quantity } = req.body;
  if (!userId || !product || !quantity) {
    return res.status(400).json({ error: 'userId, product, quantity required' });
  }

  try {
    // Inter-service call: verify user exists
    const userResponse = await axios.get(`${USER_SERVICE_URL}/users/${userId}`, {
      timeout: 3000, // 3-second timeout
    });

    const order = {
      id: String(orders.length + 1),
      userId,
      userName: userResponse.data.data.name,
      product,
      quantity,
      status: 'pending',
      createdAt: new Date().toISOString(),
    };

    orders.push(order);
    console.log(`[${SERVICE_NAME}] Created order: ${order.id} for user: ${userId}`);
    res.status(201).json({ data: order });
  } catch (err) {
    if (err.response && err.response.status === 404) {
      return res.status(400).json({ error: `User ${userId} not found` });
    }
    console.error(`[${SERVICE_NAME}] User service error:`, err.message);
    res.status(503).json({ error: 'User service unavailable' });
  }
});

app.listen(PORT, () => {
  console.log(`[${SERVICE_NAME}] running on port ${PORT}`);
});
// services/order-service/package.json
{
  "name": "order-service",
  "version": "1.0.0",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js",
    "dev": "nodemon src/index.js"
  },
  "dependencies": {
    "axios": "^1.6.0",
    "express": "^4.18.2"
  },
  "devDependencies": {
    "nodemon": "^3.0.0"
  }
}
# services/order-service/.env
PORT=4002
SERVICE_NAME=order-service
USER_SERVICE_URL=http://localhost:4001
DB_URL=mongodb://localhost:27017/orders_db
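The order service's call to the user service fails fast after its 3-second timeout. A small retry wrapper can smooth over transient failures — a sketch only (`withRetry` is our own helper, not a library function):

```javascript
// Hypothetical retry helper for inter-service calls.
// Retries transient failures with a linearly growing delay between attempts.
async function withRetry(fn, { retries = 2, delayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, delayMs * (attempt + 1)));
      }
    }
  }
  throw lastError;
}

// Usage inside the order service might look like:
// const userResponse = await withRetry(
//   () => axios.get(`${USER_SERVICE_URL}/users/${userId}`, { timeout: 3000 })
// );
```

Retries only help with transient failures; if the user service is down for minutes, retries just add latency — that is where circuit breakers come in.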

3.3 Notification Service (port 4003)

// services/notification-service/src/index.js
const express = require('express');
const app = express();
app.use(express.json());

const PORT = process.env.PORT || 4003;
const SERVICE_NAME = 'notification-service';

const notifications = [];

app.get('/health', (req, res) => {
  res.json({
    service: SERVICE_NAME,
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
  });
});

// Get all notifications
app.get('/notifications', (req, res) => {
  res.json({ data: notifications });
});

// Send notification (called by other services or via events)
app.post('/notifications', (req, res) => {
  const { userId, type, message } = req.body;
  if (!userId || !type || !message) {
    return res.status(400).json({ error: 'userId, type, message required' });
  }

  const notification = {
    id: String(notifications.length + 1),
    userId,
    type,       // 'email', 'sms', 'push'
    message,
    sentAt: new Date().toISOString(),
    status: 'sent',
  };

  notifications.push(notification);
  console.log(`[${SERVICE_NAME}] Sent ${type} to user ${userId}: ${message}`);
  res.status(201).json({ data: notification });
});

app.listen(PORT, () => {
  console.log(`[${SERVICE_NAME}] running on port ${PORT}`);
});

4. Running All Services Locally

Option A: Multiple terminal tabs

# Terminal 1
cd services/user-service && npm run dev

# Terminal 2
cd services/order-service && npm run dev

# Terminal 3
cd services/notification-service && npm run dev

Option B: npm-run-all (single terminal)

// Root package.json
{
  "name": "my-platform",
  "scripts": {
    "dev:user": "cd services/user-service && npm run dev",
    "dev:order": "cd services/order-service && npm run dev",
    "dev:notif": "cd services/notification-service && npm run dev",
    "dev:all": "npm-run-all --parallel dev:user dev:order dev:notif"
  },
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}

Option C: Docker Compose (recommended)

# docker-compose.yml
version: '3.8'

services:
  user-service:
    build: ./services/user-service
    ports:
      - "4001:4001"
    environment:
      - PORT=4001
      - SERVICE_NAME=user-service
    restart: unless-stopped

  order-service:
    build: ./services/order-service
    ports:
      - "4002:4002"
    environment:
      - PORT=4002
      - SERVICE_NAME=order-service
      - USER_SERVICE_URL=http://user-service:4001
    depends_on:
      - user-service
    restart: unless-stopped

  notification-service:
    build: ./services/notification-service
    ports:
      - "4003:4003"
    environment:
      - PORT=4003
      - SERVICE_NAME=notification-service
    restart: unless-stopped

Notice how USER_SERVICE_URL uses the Docker service name (user-service) instead of localhost. Docker Compose creates an internal network where services resolve each other by name.


5. Dockerfile for Each Service

Every service gets its own Dockerfile:

# services/user-service/Dockerfile
FROM node:20-alpine

WORKDIR /app

# Copy dependency files first (layer caching)
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy application code
COPY src/ ./src/

# Expose the service port
EXPOSE 4001

# Health check
# (BusyBox wget in Alpine supports -q and --spider, but not GNU flags
#  like --no-verbose or --tries)
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -q --spider http://localhost:4001/health || exit 1

# Start the service
CMD ["node", "src/index.js"]
# Build and run a single service
docker build -t user-service ./services/user-service
docker run -p 4001:4001 --name user-svc user-service

# Or build and run all with Compose
docker-compose up --build

6. Service Discovery Concepts

When you have dozens of services, hardcoding URLs becomes unmanageable. Service discovery solves this.

The problem

// Hardcoded — breaks when services move or scale
const USER_SERVICE = 'http://192.168.1.50:4001';

// What if user-service scales to 3 instances?
// What if it moves to a different host?
// What if the port changes?

Discovery approaches

Approach              | How It Works                                                 | Example
----------------------|--------------------------------------------------------------|-------------------------------
DNS-based             | Services register DNS names; DNS resolves to current IPs     | Docker Compose, Kubernetes
Registry-based        | Services register with a central registry; clients query it  | Consul, Eureka, etcd
Load balancer         | All traffic goes through LB; LB knows where services are     | AWS ALB, Nginx
Environment variables | URLs injected at deploy time                                 | Docker Compose environment:
Sidecar proxy         | Each service has a local proxy that handles routing          | Istio, Linkerd (service mesh)

Docker Compose gives you DNS-based discovery for free

# In docker-compose.yml, services can reach each other by name:
# http://user-service:4001    (not http://localhost:4001)
# http://order-service:4002
# http://notification-service:4003

Kubernetes takes it further

# Kubernetes Service object
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - port: 4001
      targetPort: 4001
# Now any pod can reach http://user-service:4001
# Kubernetes handles load balancing across replicas

7. Configuration Management

Each service needs its own configuration. Never share .env files across services.

// shared/utils/config.js — reusable config loader
require('dotenv').config(); // requires the `dotenv` package as a dependency

function getConfig() {
  const required = ['PORT', 'SERVICE_NAME'];
  const missing = required.filter((key) => !process.env[key]);

  if (missing.length > 0) {
    console.error(`Missing required env vars: ${missing.join(', ')}`);
    process.exit(1);
  }

  return {
    port: parseInt(process.env.PORT, 10),
    serviceName: process.env.SERVICE_NAME,
    dbUrl: process.env.DB_URL || null,
    logLevel: process.env.LOG_LEVEL || 'info',
    nodeEnv: process.env.NODE_ENV || 'development',
  };
}

module.exports = { getConfig };
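The validation step inside `getConfig` is worth isolating — as a pure function it is trivially unit-testable without touching `process.env` or `process.exit`. A sketch (the name `validateEnv` is ours, not part of the loader above):

```javascript
// Pure validation: which required keys are missing from an env object?
function validateEnv(env, required) {
  const missing = required.filter((key) => !env[key]);
  return { ok: missing.length === 0, missing };
}

// Example: the user service requires PORT and SERVICE_NAME
// validateEnv({ PORT: '4001' }, ['PORT', 'SERVICE_NAME'])
//   -> { ok: false, missing: ['SERVICE_NAME'] }
```

`getConfig` could then call `validateEnv(process.env, required)` and decide at the edge whether to exit, keeping the decision to kill the process out of the testable core.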

Config per environment

services/user-service/
├── .env                  # Local development defaults
├── .env.test             # Test environment overrides
└── .env.production       # Production values (never commit!)

Rule: .env files with real secrets should be in .gitignore. Use environment variable injection (Docker, Kubernetes Secrets, AWS SSM) in production.


8. Health Checks and Readiness

Every service must expose a health endpoint. This is non-negotiable.

// Liveness — "Is the process alive?"
app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

// Readiness — "Can the service handle requests?"
// (checks database connection, external dependencies)
app.get('/ready', async (req, res) => {
  try {
    // Check database connection (`db` stands in for your database client)
    await db.ping();
    // Check any required external services here as well
    res.json({ status: 'ready' });
  } catch (err) {
    res.status(503).json({ status: 'not ready', error: err.message });
  }
});

Docker, Kubernetes, and load balancers all use these endpoints to decide whether to send traffic to an instance.
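A readiness handler often needs to check several dependencies at once and report which one failed. A generic sketch (the helper and its result shape are our assumptions, not a standard API):

```javascript
// Run named async checks in parallel; report which ones failed.
// Promise.allSettled never rejects, so one failing check cannot
// hide the results of the others.
async function runReadinessChecks(checks) {
  const names = Object.keys(checks);
  const results = await Promise.allSettled(names.map((name) => checks[name]()));
  const failed = names.filter((name, i) => results[i].status === 'rejected');
  return { ready: failed.length === 0, failed };
}

// Wiring it into the /ready route might look like:
// app.get('/ready', async (req, res) => {
//   const result = await runReadinessChecks({
//     database: () => db.ping(),
//     userService: () => axios.get(`${USER_SERVICE_URL}/health`, { timeout: 2000 }),
//   });
//   res.status(result.ready ? 200 : 503).json(result);
// });
```

Returning the names of failed checks in the 503 body makes "not ready" debuggable at a glance instead of requiring a log dive.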


9. Key Takeaways

  1. Each service is a standalone application — its own package.json, port, .env, Dockerfile, and (ideally) database.
  2. Monorepo for small teams, polyrepo for large orgs — but either way, each service must be independently deployable.
  3. Docker Compose provides DNS-based service discovery for local development — services reference each other by name, not IP.
  4. Health checks are mandatory — liveness for "is it alive?", readiness for "can it serve traffic?"
  5. Never share databases between services — that creates hidden coupling and defeats the purpose of microservices.

Explain-It Challenge

  1. Your colleague says "let's just share one database between all services to keep things simple." Explain why this creates coupling and what the alternative is.
  2. You have 12 microservices. Some teams want monorepo, others want polyrepo. What questions do you ask to decide?
  3. The order service calls the user service synchronously. What happens if the user service is down? What alternatives exist?

Navigation: <- 6.2 Overview | 6.2.b — API Gateway Pattern ->