Episode 6 — Scaling, Reliability, Microservices, Web3 / 6.1 — Microservice Foundations

6.1.a -- Monolithic vs Microservices

A monolith is a single deployable unit containing all application logic; microservices split that logic into independently deployable services, each owning its own data and lifecycle.


Navigation: README | 6.1.a Monolithic vs Microservices | 6.1.b When Microservices Make Sense | 6.1.c Service Boundaries | 6.1.d Database per Service | 6.1.e Communication Patterns


1. What Is a Monolithic Architecture?

A monolith packages all business capabilities -- user management, billing, notifications, reporting -- inside a single codebase that is built, tested, and deployed as one artifact.

+----------------------------------------------------------+
|                     MONOLITH (Express)                   |
|                                                          |
|  /users   /orders   /products   /payments   /reports     |
|                                                          |
|  +----------------------------------------------------+  |
|  |              Shared PostgreSQL Database             |  |
|  +----------------------------------------------------+  |
+----------------------------------------------------------+

1.1 Characteristics

  • Single codebase -- all modules live in one repository.
  • Single deployment -- one npm run build && npm start deploys everything.
  • Shared database -- every module reads and writes to the same DB instance.
  • In-process communication -- modules call each other via function invocations, not network calls.
  • Shared memory -- caching, configuration, and state are process-local.

1.2 Example: Monolithic Express App

// server.js -- monolithic Express application
const express = require('express');
const app = express();
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.use(express.json());

// ---------- User Module ----------
app.post('/users', async (req, res) => {
  const { name, email } = req.body;
  const result = await pool.query(
    'INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *',
    [name, email]
  );
  res.status(201).json(result.rows[0]);
});

app.get('/users/:id', async (req, res) => {
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
  if (result.rows.length === 0) return res.status(404).json({ error: 'Not found' });
  res.json(result.rows[0]);
});

// ---------- Order Module ----------
app.post('/orders', async (req, res) => {
  const { userId, productId, quantity } = req.body;

  // Direct DB access -- checks inventory in the SAME database
  const product = await pool.query('SELECT * FROM products WHERE id = $1', [productId]);
  if (product.rows.length === 0) {
    return res.status(404).json({ error: 'Product not found' });
  }
  if (product.rows[0].stock < quantity) {
    return res.status(400).json({ error: 'Insufficient stock' });
  }

  // Single transaction across tables -- easy in a monolith
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(
      'UPDATE products SET stock = stock - $1 WHERE id = $2',
      [quantity, productId]
    );
    const order = await client.query(
      'INSERT INTO orders (user_id, product_id, quantity) VALUES ($1, $2, $3) RETURNING *',
      [userId, productId, quantity]
    );
    await client.query('COMMIT');
    res.status(201).json(order.rows[0]);
  } catch (err) {
    await client.query('ROLLBACK');
    res.status(500).json({ error: 'Order failed' });
  } finally {
    client.release();
  }
});

// ---------- Product Module ----------
app.get('/products', async (req, res) => {
  const result = await pool.query('SELECT * FROM products');
  res.json(result.rows);
});

app.listen(3000, () => console.log('Monolith running on :3000'));

Notice: Everything shares pool, runs in one process, and deploys together.


2. Advantages of a Monolith

  Advantage                  Detail
  -------------------------  --------------------------------------------------------------
  Simplicity                 One repo, one build, one deployment pipeline
  Easy local development     npm start and you have the entire system
  Simple debugging           Stack traces cross module boundaries without network hops
  ACID transactions          Single database means real transactions across all data
  Low operational overhead   One server, one log stream, one monitoring target
  Fast inter-module calls    Function calls are nanoseconds; HTTP calls are milliseconds
  Straightforward testing    Integration tests can exercise the full application in-process

3. Disadvantages of a Monolith

  Disadvantage            Detail
  ----------------------  --------------------------------------------------------------------
  Scaling bottlenecks     You must scale the entire app even if only one module is hot
  Deployment risk         A bug in the payment module takes down users, orders, everything
  Team coupling           30 engineers pushing to the same repo causes merge conflicts and coordination overhead
  Technology lock-in      Stuck with one language, one framework, one runtime
  Long build times        As the codebase grows, CI/CD pipelines slow to a crawl
  Blast radius            An unhandled exception or memory leak crashes the entire system
  Difficult onboarding    New developers must understand the entire codebase

4. What Is a Microservices Architecture?

Microservices decompose the application into small, autonomous services that:

  • Are independently deployable.
  • Own their own database (or data store).
  • Communicate over network protocols (HTTP, gRPC, message queues).
  • Are organised around business capabilities, not technical layers.

 +----------+      +----------+      +------------+
 |  User    |      |  Order   |      | Inventory  |
 |  Service |<---->|  Service |<---->|  Service   |
 |  :3001   |      |  :3002   |      |  :3003     |
 +----+-----+      +----+-----+      +-----+------+
      |                  |                  |
 +----+-----+      +----+-----+      +-----+------+
 | Users DB |      | Orders DB|      |Inventory DB|
 | Postgres |      | Postgres |      | MongoDB    |
 +----------+      +----------+      +------------+

4.1 Example: Microservices with Express

User Service (port 3001)

// user-service/index.js
const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.json());

const pool = new Pool({ connectionString: process.env.USER_DB_URL });

app.post('/users', async (req, res) => {
  const { name, email } = req.body;
  const result = await pool.query(
    'INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *',
    [name, email]
  );
  res.status(201).json(result.rows[0]);
});

app.get('/users/:id', async (req, res) => {
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
  if (result.rows.length === 0) return res.status(404).json({ error: 'Not found' });
  res.json(result.rows[0]);
});

app.listen(3001, () => console.log('User Service on :3001'));

Order Service (port 3002)

// order-service/index.js
const express = require('express');
const axios = require('axios');
const { Pool } = require('pg');

const app = express();
app.use(express.json());

const pool = new Pool({ connectionString: process.env.ORDER_DB_URL });

const USER_SERVICE = process.env.USER_SERVICE_URL || 'http://localhost:3001';
const INVENTORY_SERVICE = process.env.INVENTORY_SERVICE_URL || 'http://localhost:3003';

app.post('/orders', async (req, res) => {
  const { userId, productId, quantity } = req.body;

  try {
    // 1. Verify user exists (network call to User Service).
    // axios rejects on non-2xx responses, so a missing user is handled in catch.
    await axios.get(`${USER_SERVICE}/users/${userId}`);

    // 2. Reserve inventory (network call to Inventory Service)
    const reserveRes = await axios.post(`${INVENTORY_SERVICE}/inventory/reserve`, {
      productId,
      quantity,
    });
    if (!reserveRes.data.reserved) {
      return res.status(400).json({ error: 'Insufficient stock' });
    }

    // 3. Create order in own database.
    // If this insert fails, the reserved stock is never released --
    // a real system needs a compensating action.
    const order = await pool.query(
      'INSERT INTO orders (user_id, product_id, quantity, status) VALUES ($1, $2, $3, $4) RETURNING *',
      [userId, productId, quantity, 'confirmed']
    );

    res.status(201).json(order.rows[0]);
  } catch (err) {
    if (err.response && err.response.status === 404) {
      return res.status(404).json({ error: 'User not found' });
    }
    console.error('Order creation failed:', err.message);
    res.status(500).json({ error: 'Order failed' });
  }
});

app.listen(3002, () => console.log('Order Service on :3002'));

Inventory Service (port 3003)

// inventory-service/index.js
const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.json());

mongoose
  .connect(process.env.INVENTORY_DB_URL)
  .catch((err) => {
    console.error('Inventory DB connection failed:', err.message);
    process.exit(1);
  });

const ProductSchema = new mongoose.Schema({
  name: String,
  stock: Number,
});
const Product = mongoose.model('Product', ProductSchema);

app.post('/inventory/reserve', async (req, res) => {
  const { productId, quantity } = req.body;
  // Atomic check-and-decrement: a separate read-then-write stock check
  // would let two concurrent requests both pass and oversell.
  const product = await Product.findOneAndUpdate(
    { _id: productId, stock: { $gte: quantity } },
    { $inc: { stock: -quantity } },
    { new: true }
  );
  if (!product) {
    return res.json({ reserved: false });
  }
  res.json({ reserved: true, remaining: product.stock });
});

app.get('/inventory/:id', async (req, res) => {
  const product = await Product.findById(req.params.id);
  if (!product) return res.status(404).json({ error: 'Not found' });
  res.json(product);
});

app.listen(3003, () => console.log('Inventory Service on :3003'));

Key differences from the monolith:

  • Each service has its own package.json, its own database connection, and its own port.
  • Communication happens over HTTP (axios), not in-process function calls.
  • The Inventory Service uses MongoDB while the others use PostgreSQL -- technology heterogeneity.
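
Steps 1-3 in the order flow above can fail partially: if the order insert throws after stock was reserved, the reservation leaks. Distributed systems handle this with compensating actions (a saga, as discussed later in the comparison table). A minimal sketch of the idea -- the step shape here is illustrative, not part of the services above:

```javascript
// saga.js -- minimal saga runner (sketch). Run steps in order; if one
// fails, run the compensations of already-completed steps in reverse.
async function runSaga(steps) {
  const completed = [];
  for (const step of steps) {
    try {
      await step.action();
      completed.push(step);
    } catch (err) {
      // Undo what already happened, newest first
      for (const done of completed.reverse()) {
        await done.compensate();
      }
      throw err; // surface the original failure to the caller
    }
  }
}

module.exports = { runSaga };
```

For the order flow, the reserve step would be paired with a compensating release call -- a hypothetical /inventory/release endpoint, which the Inventory Service above does not expose.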

5. Side-by-Side Comparison

  Dimension                    Monolith                        Microservices
  ---------------------------  ------------------------------  -----------------------------------------
  Deployment unit              Single artifact                 Many independent artifacts
  Database                     Shared (one DB)                 Database per service
  Scaling                      Scale everything together       Scale individual services
  Team structure               Feature teams share codebase    Each team owns service(s)
  Technology                   One stack                       Polyglot possible
  Inter-module calls           In-process (fast)               Network (slower, can fail)
  Data consistency             ACID transactions               Eventual consistency (sagas)
  Deployment risk              High (all-or-nothing)           Low (isolated failures)
  Debugging                    Simple stack traces             Distributed tracing needed
  Operational cost             Low                             High (logging, monitoring, orchestration)
  Time to market (small team)  Fast                            Slow (overhead)
  Time to market (large org)   Slow (coordination)             Fast (independent teams)

6. The Migration Path: Monolith to Microservices

Most successful microservice architectures start as monoliths and evolve. Here is a proven, incremental migration strategy:

6.1 The Strangler Fig Pattern

Named after the strangler fig tree that gradually envelops its host:

Phase 1: Monolith handles everything
+-----------------------------+
|         MONOLITH            |
|  Users | Orders | Inventory |
+-----------------------------+

Phase 2: New feature built as a service; proxy routes traffic
+----------------+     +----------------+
|   MONOLITH     |     |  Notification  |
| Users | Orders |     |  Service (new) |
+--------+-------+     +--------+-------+
         |                      |
    +----+----------------------+----+
    |         API Gateway            |
    +--------------------------------+

Phase 3: Extract modules one by one
+----------+  +----------+  +----------+  +------------+
|  User    |  |  Order   |  | Inventory|  |Notification|
|  Service |  |  Service |  |  Service |  |  Service   |
+----------+  +----------+  +----------+  +------------+

Phase 4: Decommission the monolith

6.2 Step-by-Step Migration

  1. Freeze the monolith -- no new features in the monolith.
  2. Identify the seam -- choose a module with clear boundaries (often the easiest, not the most important).
  3. Build the new service -- replicate the module's API contract.
  4. Route traffic -- use an API gateway or reverse proxy to redirect requests.
  5. Migrate data -- move relevant tables to the new service's database.
  6. Verify -- run both in parallel; compare outputs (shadow traffic).
  7. Cut over -- remove the old code from the monolith.
  8. Repeat for the next module.
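
Step 4 above (route traffic) can be sketched as a tiny routing table: path prefixes that have been extracted go to the new service, everything else falls through to the monolith. The ports and URLs below are assumptions:

```javascript
// gateway.js -- strangler-fig routing sketch (assumed ports/URLs).
// Extracted path prefixes map to new services; the monolith is the default.
const ROUTES = [
  { prefix: '/users', target: 'http://localhost:3001' }, // extracted User Service
];
const MONOLITH = 'http://localhost:3000';

function targetFor(path) {
  const route = ROUTES.find((r) => path.startsWith(r.prefix));
  return route ? route.target : MONOLITH;
}

module.exports = { targetFor };
```

A real gateway would proxy the full request (method, headers, body) to targetFor(req.path). As each module is extracted, you add one ROUTES entry and delete the old code from the monolith.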

6.3 Common Migration Mistakes

  Mistake                               Why It Hurts
  ------------------------------------  --------------------------------------------------------------
  Big bang rewrite                      High risk; you lose institutional knowledge
  Extracting too many services at once  Overwhelms the team with operational complexity
  Shared database during migration      Creates hidden coupling; changes in one service break another
  No API gateway                        Direct service-to-service calls create a tangled mesh
  Ignoring data migration               Stale or duplicated data causes bugs

7. Real-World Examples

Netflix

  • Started as: A monolithic Java application.
  • Problem: Single deployments took hours; one bug could take down streaming for millions.
  • Solution: Migrated to hundreds of microservices over several years. Each service handles a specific capability (recommendations, billing, streaming, user profiles).
  • Outcome: Teams deploy independently, thousands of times per day.

Amazon

  • Started as: A monolithic C++ application in the early 2000s.
  • Problem: Teams were blocked waiting for each other; deployment cycles were weeks long.
  • Solution: CEO mandate -- "every team will expose their functionality through service interfaces." This became the "two-pizza team" model.
  • Outcome: Led to AWS (they productised their internal infrastructure).

Etsy (Counter-Example)

  • Architecture: Monolith (PHP).
  • Why it works: Strong deployment tooling (50+ deploys/day), relatively focused domain, culture of shared code ownership.
  • Lesson: A well-managed monolith can outperform a poorly managed microservices architecture.

8. The Architectural Spectrum

It is not binary. There is a spectrum:

Monolith -----> Modular Monolith -----> Microservices
   |                   |                      |
Single deploy    Single deploy           Independent deploys
Single DB        Separate modules        Separate databases
Tight coupling   Loose internal coupling Network boundaries

The modular monolith is often the sweet spot for growing teams:

  • Enforce module boundaries with clear interfaces (no reaching into another module's internals).
  • Keep a single deployment for simplicity.
  • Extract to microservices only when a module's scaling or deployment needs diverge.

// Modular monolith -- modules interact through defined interfaces
// modules/users/index.js
class UserModule {
  constructor(db) {
    this.db = db;
  }

  async getUser(id) {
    const result = await this.db.query('SELECT * FROM users WHERE id = $1', [id]);
    return result.rows[0] || null;
  }

  async createUser(name, email) {
    const result = await this.db.query(
      'INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *',
      [name, email]
    );
    return result.rows[0];
  }
}

module.exports = UserModule;

// modules/orders/index.js
class OrderModule {
  constructor(db, userModule) {
    this.db = db;
    this.userModule = userModule; // dependency injected, not imported directly
  }

  async createOrder(userId, productId, quantity) {
    const user = await this.userModule.getUser(userId);
    if (!user) throw new Error('User not found');

    const result = await this.db.query(
      'INSERT INTO orders (user_id, product_id, quantity) VALUES ($1, $2, $3) RETURNING *',
      [userId, productId, quantity]
    );
    return result.rows[0];
  }
}

module.exports = OrderModule;

9. Key Takeaways

  1. A monolith is not bad -- it is the right starting point for most applications. Simplicity has real value.
  2. Microservices trade development simplicity for operational flexibility -- independent scaling, deployment, and technology choice come at the cost of distributed systems complexity.
  3. The Strangler Fig Pattern is the safest way to migrate from monolith to microservices -- incremental, reversible, low risk.
  4. Database separation is the hardest part of the migration. A shared database means you still have a distributed monolith.
  5. Network calls replace function calls -- this introduces latency, partial failure, and the need for retries, timeouts, and circuit breakers.
  6. The modular monolith is an underrated middle ground that gives you many benefits of microservices without the operational overhead.
  7. Team structure drives architecture (Conway's Law) -- if you have two teams, two services is natural. If you have one team, one deployable unit is simpler.
  8. Real-world migrations take years -- Netflix, Amazon, and others migrated gradually. Respect the complexity.
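
Takeaway 5 in practice: every inter-service call needs a timeout, and usually a retry. A minimal sketch of such a wrapper (not a full circuit breaker; names and defaults here are illustrative):

```javascript
// retry.js -- retry-with-timeout wrapper for inter-service calls (sketch).
async function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  try {
    // Whichever settles first wins: the call, or the timeout rejection
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

async function retry(fn, { attempts = 3, timeoutMs = 1000 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastErr = err; // try again until attempts are exhausted
    }
  }
  throw lastErr;
}

module.exports = { retry, withTimeout };
```

The Order Service could wrap its axios calls, e.g. retry(() => axios.get(`${USER_SERVICE}/users/${userId}`)) -- though retries are only safe for idempotent operations, so the reserve call would need care.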

10. Explain-It Challenge

  1. A junior developer says "monoliths are legacy and microservices are modern -- we should always use microservices." How would you respond? Use at least three concrete arguments.

  2. You are consulting for a 5-person startup that has a working monolith serving 1,000 users. The CTO wants to rewrite as microservices "to be ready for scale." What do you advise and why?

  3. Draw (on paper or whiteboard) the Strangler Fig migration for an e-commerce monolith. Show which module you would extract first and explain your reasoning.


Navigation: README | 6.1.a Monolithic vs Microservices | 6.1.b When Microservices Make Sense >>