Episode 6 — Scaling, Reliability, Microservices, Web3 / 6.1 — Microservice Foundations
6.1.d -- Database per Service
Each microservice owns its data store exclusively. No other service reads from or writes to it directly. This is the single most important rule of microservices -- and the hardest to implement.
Navigation << 6.1.c Service Boundaries | 6.1.d Database per Service | 6.1.e Communication Patterns >>
1. Why Shared Databases Break Microservices
A shared database creates hidden coupling:
Shared DB Anti-Pattern:
+--------+     +--------+
| Order  |     |  User  |
| Service|     | Service|
+---+----+     +---+----+
    |              |
    +------+-------+
           |
    +------+------+
    | Shared DB   | <-- Both services read/write the users
    |             |     and orders tables directly.
    |  users      |
    |  orders     |
    |  products   |
    +-------------+
Problems:
1. Schema change in 'users' table can break Order Service.
2. Both services compete for DB connections and locks.
3. Cannot deploy independently -- migration coordination required.
4. Cannot use different DB technologies per service.
5. Cannot scale databases independently.
6. Who "owns" a table? Unclear. Everyone modifies everything.
1.1 The Coupling Chain
// Order Service developer adds an index to speed up order queries:
await pool.query('CREATE INDEX idx_orders_user_id ON orders(user_id)');
// A plain CREATE INDEX blocks writes to 'orders' while it builds,
// so User Service inserts and updates hang mid-request.
// (CREATE INDEX CONCURRENTLY avoids this -- but now two teams must coordinate.)

// User Service developer renames 'email' to 'email_address':
await pool.query('ALTER TABLE users RENAME COLUMN email TO email_address');
// Order Service breaks because it queries SELECT email FROM users.

// Both teams deploy on the same day:
// User Service migration runs first, Order Service migration fails
// because it depends on the old schema. Rollback? Which one?
2. The Database-per-Service Principle
Correct Pattern:
+---------+ +---------+ +---------+
|  Order  | |  User   | |Inventory|
| Service | | Service | | Service |
+----+----+ +----+----+ +----+----+
     |           |           |
+----+----+ +----+----+ +----+----+
| Orders  | |  Users  | |Products |
|   DB    | |   DB    | |   DB    |
|Postgres | |Postgres | | MongoDB |
+---------+ +---------+ +---------+
Rules:
- Order Service ONLY accesses Orders DB.
- User Service ONLY accesses Users DB.
- Inventory Service ONLY accesses Products DB.
- Cross-service data access happens via APIs or events. NEVER via DB queries.
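To make the last rule concrete, here is a minimal sketch (the client shape and the getUser lookup are illustrative, not from the text): the Order Service obtains user data through a client for the User Service's public API, never by querying the Users DB.

```javascript
// Hypothetical sketch: Order Service needs the buyer's email.
//
// FORBIDDEN -- reaching into another service's database:
//   await usersPool.query('SELECT email FROM users WHERE id = $1', [userId]);
//
// ALLOWED -- going through the User Service's published API.
// fetchUser stands in for an HTTP call such as GET /users/:id.
function makeUserClient(fetchUser) {
  return {
    async getUser(userId) {
      const user = await fetchUser(userId);
      if (!user) throw new Error(`User ${userId} not found`);
      return user; // only the published contract is visible here
    },
  };
}

// Usage with an in-memory stub standing in for the network call:
async function demo() {
  const users = { 42: { id: 42, email: 'ada@example.com' } };
  const client = makeUserClient(async (id) => users[id]);
  const buyer = await client.getUser(42);
  return buyer.email;
}
```

The point is the seam: if the User Service renames a column, only its own API layer changes; the contract the Order Service sees stays stable.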
2.1 Implementation Levels
| Level | How | Isolation | Cost |
|---|---|---|---|
| Separate schemas | Same DB engine, different schemas | Logical isolation; shared resources | Low |
| Separate databases | Same DB engine, different database instances | Stronger isolation; separate connections | Medium |
| Separate servers | Different DB servers entirely | Full isolation; independent scaling | High |
| Polyglot persistence | Different DB technologies per service | Technology fit; maximum independence | Highest |
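At the "separate schemas" level, ownership can be enforced mechanically with database permissions. A hedged Postgres sketch (schema, role, and password names are illustrative):

```sql
-- One schema and one login role per service.
CREATE SCHEMA orders_svc;
CREATE SCHEMA users_svc;

CREATE ROLE order_service LOGIN PASSWORD 'change-me';
CREATE ROLE user_service  LOGIN PASSWORD 'change-me';

-- Each role may only use its own schema; there are no cross-grants,
-- so order_service cannot query users_svc tables even by accident.
GRANT USAGE, CREATE ON SCHEMA orders_svc TO order_service;
GRANT USAGE, CREATE ON SCHEMA users_svc  TO user_service;
```

This gives logical isolation only: both schemas still share connections, locks, and hardware, which is why the table above rates it low cost but also the weakest isolation.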
2.2 Polyglot Persistence Example
+------------+ +------------+ +------------+ +------------+
|  Catalog   | |   Order    | | Analytics  | |   Search   |
|  Service   | |  Service   | |  Service   | |  Service   |
+-----+------+ +-----+------+ +-----+------+ +-----+------+
      |              |              |              |
+-----+------+ +-----+------+ +-----+------+ +-----+------+
|  Postgres  | |  Postgres  | | ClickHouse | | Elastic-   |
|  (ACID,    | |  (ACID,    | | (columnar, | | search     |
| relational)| | relational)| | analytics) | | (full-text)|
+------------+ +------------+ +------------+ +------------+
Each service uses the best database for its workload.
3. Data Consistency Challenges
3.1 The Problem: No More ACID Across Services
In a monolith with a shared database:
// Monolith: ACID transaction across tables -- simple and safe
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('INSERT INTO orders (user_id, total) VALUES ($1, $2)', [userId, total]);
  await client.query('UPDATE inventory SET stock = stock - $1 WHERE product_id = $2', [qty, productId]);
  await client.query('INSERT INTO payments (order_id, amount) VALUES ($1, $2)', [orderId, amount]);
  await client.query('COMMIT');
  // ALL or NOTHING. Guaranteed.
} catch (err) {
  await client.query('ROLLBACK');
  // Clean rollback. No partial state.
} finally {
  client.release(); // always return the connection to the pool
}
In microservices with separate databases, this guarantee is gone. Distributed transactions (2PC/XA) technically exist, but they couple services tightly and are rarely used; in practice you cannot run one transaction across databases owned by different services.
// Microservices: NO transaction across services -- this will NOT work
try {
  await orderService.createOrder(userId, items);        // Succeeds (Order DB)
  await inventoryService.reserve(productId, quantity);  // Succeeds (Inventory DB)
  await paymentService.charge(userId, amount);          // FAILS (Payment DB)
  // Order exists. Inventory is reserved. But payment failed.
  // We are in an INCONSISTENT state.
  // There is no "ROLLBACK" that spans all three databases.
} catch (err) {
  // Now what? We need to compensate manually.
}
4. Eventual Consistency
Instead of strong consistency (ACID), microservices embrace eventual consistency:
"The system will become consistent eventually. At any given moment, some data may be temporarily inconsistent, but it will converge."
Strong Consistency (Monolith):
Write ---> All reads immediately see the new value.
Latency: Higher (waits for all locks).
Guarantee: What you read is always current.
Eventual Consistency (Microservices):
Write ---> Some reads may see stale data temporarily.
Latency: Lower (no distributed locks).
Guarantee: Given enough time, all reads converge.
Timeline:
T0: Order created in Order DB
T1: Event published: "OrderCreated"
T2: Inventory Service receives event, reserves stock (+50ms)
T3: Payment Service receives event, charges card (+200ms)
T4: All services consistent. Total time: ~250ms.
Between T0 and T4, the system is "eventually consistent."
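From the client's side, eventual consistency usually shows up as "the order is pending until the saga finishes". A minimal polling sketch (the getStatus function and the timings are illustrative stand-ins for a real GET /orders/:id call):

```javascript
// Poll a read endpoint until the order leaves 'pending', or give up.
async function waitForOrder(getStatus, orderId, { attempts = 10, delayMs = 50 } = {}) {
  for (let i = 0; i < attempts; i++) {
    const status = await getStatus(orderId);
    if (status !== 'pending') return status; // converged
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Order ${orderId} still pending after ${attempts} attempts`);
}

// Simulate the T0-T4 window above: the saga confirms the order ~120ms in.
async function demo() {
  let status = 'pending';
  setTimeout(() => { status = 'confirmed'; }, 120);
  return waitForOrder(async () => status, 'order-1');
}
```

In real UIs this polling is often replaced by a push channel (WebSocket or server-sent events), but the consistency window it papers over is the same.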
5. The Saga Pattern
A saga is a sequence of local transactions where each step publishes an event or command that triggers the next step. If a step fails, compensating transactions undo previous steps.
5.1 Choreography-Based Saga
Each service listens for events and reacts independently. No central coordinator.
OrderCreated --> InventoryService (reserves stock)
                         |
                  InventoryReserved --> PaymentService (charges card)
                                                |
                                         PaymentSucceeded --> OrderService (confirms order)

If PaymentFailed:

PaymentFailed --> InventoryService (releases stock)
                         |
                  InventoryReleased --> OrderService (cancels order)
// order-service/events/publisher.js
const amqp = require('amqplib');

let channel;

async function connectRabbitMQ() {
  const connection = await amqp.connect(process.env.RABBITMQ_URL);
  channel = await connection.createChannel();
  await channel.assertExchange('order-events', 'topic', { durable: true });
}

async function publishEvent(routingKey, data) {
  if (!channel) throw new Error('Not connected -- call connectRabbitMQ() first');
  channel.publish(
    'order-events',
    routingKey,
    Buffer.from(JSON.stringify(data)),
    { persistent: true }
  );
  console.log(`Published ${routingKey}:`, data);
}

module.exports = { connectRabbitMQ, publishEvent };
// order-service/routes/orders.js
const { publishEvent } = require('../events/publisher');

app.post('/orders', async (req, res) => {
  const { userId, productId, quantity, amount } = req.body;

  // Step 1: Create order in PENDING state (local transaction)
  const order = await pool.query(
    'INSERT INTO orders (user_id, product_id, quantity, amount, status) VALUES ($1, $2, $3, $4, $5) RETURNING *',
    [userId, productId, quantity, amount, 'pending']
  );

  // Step 2: Publish event to start the saga
  await publishEvent('order.created', {
    orderId: order.rows[0].id,
    userId,
    productId,
    quantity,
    amount,
  });

  // Return immediately -- order is PENDING, not confirmed
  res.status(202).json({ ...order.rows[0], message: 'Order is being processed' });
});
// inventory-service/events/consumer.js
const amqp = require('amqplib');

async function startConsumer(pool) {
  const connection = await amqp.connect(process.env.RABBITMQ_URL);
  const channel = await connection.createChannel();
  await channel.assertExchange('order-events', 'topic', { durable: true });
  const q = await channel.assertQueue('inventory-order-events', { durable: true });
  await channel.bindQueue(q.queue, 'order-events', 'order.created');
  await channel.bindQueue(q.queue, 'order-events', 'payment.failed');

  channel.consume(q.queue, async (msg) => {
    if (!msg) return; // consumer was cancelled by the broker
    const event = JSON.parse(msg.content.toString());
    const routingKey = msg.fields.routingKey;
    try {
      if (routingKey === 'order.created') {
        // Reserve inventory (local transaction); the 'stock >= $1'
        // condition prevents overselling under concurrent reservations.
        const result = await pool.query(
          'UPDATE products SET stock = stock - $1 WHERE id = $2 AND stock >= $1 RETURNING *',
          [event.quantity, event.productId]
        );
        if (result.rows.length === 0) {
          // Not enough stock -- publish failure
          channel.publish(
            'order-events',
            'inventory.reservation.failed',
            Buffer.from(JSON.stringify({ orderId: event.orderId, reason: 'Insufficient stock' })),
            { persistent: true }
          );
        } else {
          // Stock reserved -- publish success
          channel.publish(
            'order-events',
            'inventory.reserved',
            Buffer.from(JSON.stringify({
              orderId: event.orderId,
              userId: event.userId,
              amount: event.amount,
              // Carried along so the payment.failed handler below
              // knows what to release when compensating.
              productId: event.productId,
              quantity: event.quantity,
            })),
            { persistent: true }
          );
        }
      }
      if (routingKey === 'payment.failed') {
        // COMPENSATING TRANSACTION: release the reserved stock
        await pool.query(
          'UPDATE products SET stock = stock + $1 WHERE id = $2',
          [event.quantity, event.productId]
        );
        console.log(`Compensated: released ${event.quantity} units for order ${event.orderId}`);
      }
      channel.ack(msg);
    } catch (err) {
      console.error('Error processing event:', err);
      // Requeue -- but beware: a poison message will loop forever.
      // Production systems cap retries and route to a dead-letter queue.
      channel.nack(msg, false, true);
    }
  });
}

module.exports = { startConsumer };
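The payment leg of this choreography is not shown above. Here is a hedged sketch of what it might look like, with the handler written as a plain function so the amqplib wiring (identical in shape to the consumer above) stays separate. charge and publish are injected stand-ins, and the sketch assumes the inventory.reserved event carries productId and quantity so they can be echoed for compensation.

```javascript
// payment-service/events/handler.js (hypothetical file)
// Reacts to 'inventory.reserved': attempt the charge (a local transaction),
// then publish the event that drives the next saga step either way.
async function handleInventoryReserved(event, { charge, publish }) {
  try {
    const payment = await charge(event.userId, event.amount);
    await publish('payment.succeeded', {
      orderId: event.orderId,
      paymentId: payment.id,
    });
  } catch (err) {
    // This is what triggers the compensations upstream:
    // inventory releases stock, the order service cancels the order.
    await publish('payment.failed', {
      orderId: event.orderId,
      productId: event.productId, // echoed for the compensating handler
      quantity: event.quantity,
      reason: err.message,
    });
  }
}

module.exports = { handleInventoryReserved };
```

Separating the handler from the transport also makes it trivially unit-testable with stubbed dependencies.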
5.2 Orchestration-Based Saga
A central Saga Orchestrator directs each step.
+-------------------+
| Saga Orchestrator |
|   (Order Saga)    |
+---------+---------+
          |
          +--> Step 1: Tell Inventory to reserve --> success/fail
          |
          +--> Step 2: Tell Payment to charge    --> success/fail
          |
          +--> Step 3: Tell Order to confirm     --> success
          |
          +--> On failure: run compensations in reverse order
// order-saga-orchestrator.js
class OrderSaga {
  constructor(orderService, inventoryService, paymentService) {
    this.orderService = orderService;
    this.inventoryService = inventoryService;
    this.paymentService = paymentService;
    this.completedSteps = [];
  }

  async execute(orderData) {
    this.completedSteps = []; // reset so a saga instance can be reused safely
    try {
      // Step 1: Create order (pending)
      const order = await this.orderService.createOrder(orderData);
      this.completedSteps.push({ service: 'order', data: order });

      // Step 2: Reserve inventory
      const reservation = await this.inventoryService.reserve(
        orderData.productId,
        orderData.quantity
      );
      this.completedSteps.push({ service: 'inventory', data: reservation });

      // Step 3: Process payment
      const payment = await this.paymentService.charge(
        orderData.userId,
        order.totalAmount
      );
      this.completedSteps.push({ service: 'payment', data: payment });

      // Step 4: Confirm order
      await this.orderService.confirmOrder(order.id);
      return { success: true, orderId: order.id };
    } catch (err) {
      console.error('Saga failed at step:', this.completedSteps.length + 1, err.message);
      await this.compensate();
      return { success: false, error: err.message };
    }
  }

  async compensate() {
    // Undo completed steps in reverse order
    for (const step of [...this.completedSteps].reverse()) {
      try {
        switch (step.service) {
          case 'payment':
            await this.paymentService.refund(step.data.paymentId);
            console.log('Compensated: payment refunded');
            break;
          case 'inventory':
            await this.inventoryService.release(
              step.data.productId,
              step.data.quantity
            );
            console.log('Compensated: inventory released');
            break;
          case 'order':
            await this.orderService.cancelOrder(step.data.id);
            console.log('Compensated: order cancelled');
            break;
        }
      } catch (compensationErr) {
        // Log critically -- compensation failure requires manual intervention
        console.error(`CRITICAL: Compensation failed for ${step.service}:`, compensationErr);
        // In production: alert on-call, write to dead-letter queue
      }
    }
  }
}

module.exports = OrderSaga;
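To watch the compensation order in action, here is a condensed, self-contained replay of the orchestrator above against in-memory stub services, with payment rigged to fail. The stub shapes are illustrative.

```javascript
// Condensed OrderSaga (same shape as the class above) plus recording stubs.
class OrderSaga {
  constructor(orderService, inventoryService, paymentService) {
    Object.assign(this, { orderService, inventoryService, paymentService });
    this.completedSteps = [];
  }
  async execute(orderData) {
    try {
      const order = await this.orderService.createOrder(orderData);
      this.completedSteps.push({ service: 'order', data: order });
      const reservation = await this.inventoryService.reserve(orderData.productId, orderData.quantity);
      this.completedSteps.push({ service: 'inventory', data: reservation });
      const payment = await this.paymentService.charge(orderData.userId, order.totalAmount);
      this.completedSteps.push({ service: 'payment', data: payment });
      await this.orderService.confirmOrder(order.id);
      return { success: true, orderId: order.id };
    } catch (err) {
      await this.compensate();
      return { success: false, error: err.message };
    }
  }
  async compensate() {
    // Undo completed steps in reverse order
    for (const step of [...this.completedSteps].reverse()) {
      if (step.service === 'payment') await this.paymentService.refund(step.data.paymentId);
      if (step.service === 'inventory') await this.inventoryService.release(step.data.productId, step.data.quantity);
      if (step.service === 'order') await this.orderService.cancelOrder(step.data.id);
    }
  }
}

// Recording stubs; the card is always declined.
const log = [];
const saga = new OrderSaga(
  {
    createOrder: async (d) => ({ id: 'o1', totalAmount: 99, ...d }),
    confirmOrder: async () => log.push('order confirmed'),
    cancelOrder: async (id) => log.push(`order ${id} cancelled`),
  },
  {
    reserve: async (productId, quantity) => { log.push('stock reserved'); return { productId, quantity }; },
    release: async () => log.push('stock released'),
  },
  {
    charge: async () => { throw new Error('card declined'); },
    refund: async () => log.push('payment refunded'),
  }
);

// Expected trail: stock reserved, stock released, order o1 cancelled.
// No refund runs, because the charge never succeeded.
const result = saga.execute({ userId: 'u1', productId: 'p1', quantity: 2 });
```

Note that compensation only touches steps that actually completed: the failed charge itself has nothing to undo.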
5.3 Choreography vs Orchestration
| Dimension | Choreography | Orchestration |
|---|---|---|
| Coordination | Decentralised (each service reacts) | Centralised (orchestrator directs) |
| Coupling | Lower (services know events, not each other) | Higher (orchestrator knows all services) |
| Visibility | Harder to trace the full flow | Easy to see the full flow in one place |
| Complexity | Simple for 2-3 steps; chaotic for 5+ | Scales better to complex workflows |
| Single point of failure | None | The orchestrator |
| Best for | Simple sagas with few steps | Complex business processes with many steps |
6. Handling Cross-Service Queries
6.1 The Problem
User asks: "Show me all orders with product details and user info."
In a monolith:
  SELECT o.*, u.name AS user_name, p.name AS product_name
  FROM orders o
  JOIN users u ON o.user_id = u.id
  JOIN products p ON o.product_id = p.id
In microservices: this JOIN does not exist. The data lives in three separate databases.
6.2 API Composition Pattern
A dedicated service (or the API Gateway) aggregates data from multiple services:
// api-gateway/routes/orderDetails.js
const axios = require('axios');

const ORDER_SERVICE = process.env.ORDER_SERVICE_URL;
const USER_SERVICE = process.env.USER_SERVICE_URL;
const CATALOG_SERVICE = process.env.CATALOG_SERVICE_URL;

app.get('/order-details/:orderId', async (req, res) => {
  try {
    // Step 1: Fetch the order
    const orderRes = await axios.get(`${ORDER_SERVICE}/orders/${req.params.orderId}`);
    const order = orderRes.data;

    // Step 2: Fetch related data in parallel
    const [userRes, productRes] = await Promise.all([
      axios.get(`${USER_SERVICE}/users/${order.userId}`),
      axios.get(`${CATALOG_SERVICE}/products/${order.productId}`),
    ]);

    // Step 3: Compose the response
    res.json({
      order: {
        id: order.id,
        status: order.status,
        totalAmount: order.totalAmount,
        createdAt: order.createdAt,
      },
      user: {
        id: userRes.data.id,
        name: userRes.data.name,
        email: userRes.data.email,
      },
      product: {
        id: productRes.data.id,
        name: productRes.data.name,
        price: productRes.data.price,
      },
    });
  } catch (err) {
    console.error('Failed to compose order details:', err.message);
    // All-or-nothing: any failed call fails the whole composition.
    res.status(err.response?.status || 500).json({ error: 'Failed to fetch order details' });
  }
});
Trade-offs:
- Multiple network calls (latency).
- If one service is down, the composition fails (or you degrade gracefully).
- Good for simple aggregations; complex for deep joins.
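One way to get the graceful degradation mentioned above is Promise.allSettled: return the order with whatever enrichment resolved, and null for the rest. A sketch with injected fetchers standing in for the axios calls (the function name is illustrative):

```javascript
// Compose order details while tolerating failures in the lookups.
// fetchUser / fetchProduct stand in for the HTTP calls in the gateway above.
async function composeOrderDetails(order, { fetchUser, fetchProduct }) {
  const [userResult, productResult] = await Promise.allSettled([
    fetchUser(order.userId),
    fetchProduct(order.productId),
  ]);
  return {
    order,
    // null means "temporarily unavailable" -- the client renders a placeholder
    user: userResult.status === 'fulfilled' ? userResult.value : null,
    product: productResult.status === 'fulfilled' ? productResult.value : null,
  };
}
```

The order itself stays all-or-nothing: if the Order Service call fails there is nothing worth returning, so only the enrichment degrades.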
7. CQRS: Command Query Responsibility Segregation
For complex read patterns, separate the write model (commands) from the read model (queries).
Commands (writes)          Queries (reads)
+----------------+         +----------------+
| Order Service  |         | Order Query    |
| (write model)  |         | Service        |
+--------+-------+         | (read model)   |
         |                 +--------+-------+
         |                          |
+--------+-------+         +--------+-------+
|   Orders DB    |         | Denormalized   |
|  (normalised)  |         |   Read DB      |
|    Postgres    |         | (Elasticsearch |
+--------+-------+         |   or Mongo)    |
         |                 +----------------+
         |                          ^
         |      Event Stream        |
         +--------------------------+
            "OrderCreated"
            "OrderUpdated"
            "OrderCancelled"
7.1 CQRS Implementation
// write-side: order-service/routes/orders.js
// Handles commands (create, update, cancel)
app.post('/orders', async (req, res) => {
  const { userId, items } = req.body;
  const order = await createOrder(userId, items); // writes to normalised Postgres

  // Publish event for the read side to consume
  await publishEvent('order.created', {
    orderId: order.id,
    userId: order.userId,
    items: order.items,
    totalAmount: order.totalAmount,
    status: order.status,
    createdAt: order.createdAt,
  });

  res.status(201).json(order);
});
// read-side: order-query-service/consumer.js
// Listens for events and builds denormalized read models
const axios = require('axios');
const { Client } = require('@elastic/elasticsearch');

const esClient = new Client({ node: process.env.ELASTICSEARCH_URL });
const USER_SERVICE = process.env.USER_SERVICE_URL;
const CATALOG_SERVICE = process.env.CATALOG_SERVICE_URL;

async function handleOrderCreated(event) {
  // Fetch supplementary data (user name, product names)
  const user = await axios.get(`${USER_SERVICE}/users/${event.userId}`);
  const productPromises = event.items.map(item =>
    axios.get(`${CATALOG_SERVICE}/products/${item.productId}`)
  );
  const products = await Promise.all(productPromises);

  // Build denormalized document
  const orderDocument = {
    orderId: event.orderId,
    userId: event.userId, // kept so the search route can filter by user
    userName: user.data.name,
    userEmail: user.data.email,
    items: event.items.map((item, i) => ({
      productId: item.productId,
      productName: products[i].data.name,
      quantity: item.quantity,
      price: item.price,
    })),
    totalAmount: event.totalAmount,
    status: event.status,
    createdAt: event.createdAt,
  };

  // Index in Elasticsearch for fast, flexible queries
  await esClient.index({
    index: 'orders',
    id: event.orderId,
    document: orderDocument, // v8 client API; v7 wrapped this in `body`
  });
}
// read-side: order-query-service/routes/search.js
// Handles queries -- reads from the denormalized store
app.get('/orders/search', async (req, res) => {
  const { userId, status, dateFrom, dateTo, q } = req.query;

  const query = { bool: { must: [] } };
  if (userId) query.bool.must.push({ term: { userId } });
  if (status) query.bool.must.push({ term: { status } });
  if (q) query.bool.must.push({ multi_match: { query: q, fields: ['userName', 'items.productName'] } });
  if (dateFrom || dateTo) {
    query.bool.must.push({
      range: {
        createdAt: {
          ...(dateFrom && { gte: dateFrom }),
          ...(dateTo && { lte: dateTo }),
        },
      },
    });
  }

  const result = await esClient.search({
    index: 'orders',
    query,
    sort: [{ createdAt: 'desc' }],
    size: 20, // v8 client API; v7 wrapped these in `body`
  });
  res.json(result.hits.hits.map(hit => hit._source));
});
7.2 When to Use CQRS
| Use CQRS When | Do NOT Use CQRS When |
|---|---|
| Read and write patterns are very different | Simple CRUD with similar read/write shapes |
| Read-heavy workload (100:1 read-to-write ratio) | Balanced read/write ratio |
| Complex search/filter/aggregation queries | Simple queries by ID |
| Different scaling needs for reads vs writes | Uniform load |
| Multiple read representations needed | Single read representation |
8. Practical Patterns Summary
+--------------------------------------------------+
|            Data Pattern Decision Tree            |
|                                                  |
|  Need data from another service?                 |
|   |                                              |
|   +-- At write time? --> Saga (choreography      |
|   |                      or orchestration)       |
|   |                                              |
|   +-- At read time?                              |
|       |                                          |
|       +-- Simple? --> API Composition            |
|       |                                          |
|       +-- Complex? --> CQRS with                 |
|                        denormalized read model   |
+--------------------------------------------------+
9. Key Takeaways
- Shared databases are the #1 microservices anti-pattern. They create hidden coupling that defeats the purpose of separate services.
- Database-per-service means data ownership. Each service is the single source of truth for its data.
- ACID transactions across services are impossible. Accept eventual consistency and implement sagas for multi-step business processes.
- Compensating transactions are the "rollback" of microservices. Every saga step must have a compensation step.
- API Composition solves simple cross-service queries. For complex queries, use CQRS with denormalized read models.
- Data duplication is the price of independence. Snapshots, caches, and denormalized views are normal.
- Choreography is simpler for few steps; orchestration scales better for complex workflows.
- CQRS separates reads from writes -- use it when read patterns differ significantly from write patterns.
10. Explain-It Challenge
1. Your team is migrating from a monolith to microservices. The monolith has a checkout function that creates an order, charges the card, and reduces inventory in a single database transaction. Redesign this using the Saga pattern. Draw the happy path and two failure paths with compensations.
2. Explain eventual consistency to a non-technical stakeholder who is worried that "sometimes the data might be wrong." How would you reassure them?
3. A product manager asks for an "order history" page that shows order details, product names, user names, and payment status -- all from different services. Compare the API Composition and CQRS approaches for building this. Which would you choose and why?