Episode 2 — React Frontend Architecture with Next.js / 2.17 — Understanding Full Stack Frameworks
2.17.a — What Are Full Stack Frameworks?
In one sentence: They unify server and browser concerns in one application system—routing, rendering, and server logic colocated with UI—so teams ship vertical slices faster while still owning security and ops discipline.
Navigation: ← 2.17 Overview · Next → 2026 Landscape
1. Defining “full stack framework” without hype
A full stack framework (in the modern web sense) is a productized application platform that:
- Owns the HTTP boundary — routing, request handling, sometimes middleware/edge.
- Coordinates rendering — server HTML/RSC, client hydration, streaming.
- Embeds server capabilities — mutations via actions, Route Handlers, or loader/action patterns.
- Ships as one codebase — shared modules for validation, types, auth helpers.
It is not automatically “a database inside React.” Persistence is still usually external (Postgres, MongoDB, SaaS APIs).
1.1 “Single system” — what actually becomes one?
| Layer | Often unified in one repo/deploy |
|---|---|
| UI routes + server routes | Yes |
| Build + typecheck | Yes |
| Auth session between UI and server handlers | Often easier |
| Database schema | Sometimes separate service owned by platform team |
Single system usually means one deployable application boundary and one mental model for how requests flow—not that every concern collapses into one process forever.
1.2 Problems these frameworks optimize
| Pain in “SPA + separate API” | Full stack framework angle |
|---|---|
| Duplicate DTO validation | Shared Zod/schemas |
| CORS + cookie complexity | Same-origin server routes |
| SSR glue custom per company | Conventions + docs |
| Two deploy pipelines to coordinate | One pipeline (still can be complex) |
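The shared-validation row above can be made concrete. A minimal self-contained sketch (hand-rolled instead of Zod to keep it dependency-free; all names are illustrative) showing one validator consumed by both the client form and the server handler:

```typescript
// Hypothetical shared module (e.g. shared/validation.ts): one schema
// definition used on both sides, eliminating duplicate DTO validation.
type SignupInput = { email: string; age: number };

function parseSignup(data: unknown): SignupInput | null {
  if (typeof data !== "object" || data === null) return null;
  const d = data as Record<string, unknown>;
  if (typeof d.email !== "string" || !d.email.includes("@")) return null;
  if (typeof d.age !== "number" || d.age < 0) return null;
  return { email: d.email, age: d.age };
}

// Client side: validate before submitting the form.
const clientCheck = parseSignup({ email: "a@b.com", age: 30 });

// Server side: the handler re-validates with the SAME function,
// because client input is never trusted.
function handleSignup(body: unknown) {
  const input = parseSignup(body);
  if (!input) return { status: 400 as const };
  return { status: 201 as const, user: input.email };
}
```

The point is not the validator itself but that it lives in one module imported by both the UI and the server handler.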
1.3 Costs you must still pay
- Security: server code is privileged; any exposed endpoint is attack surface.
- Complexity: caching, partial rendering, and deployment cold starts become concepts your team must be fluent in.
- Lock-in (soft): platform-specific APIs creep in unless you wrap them.
1.4 Interview-grade definition
“A full stack web framework gives me first-class server execution next to my UI with shared modules and routing conventions, so I ship features as vertical slices instead of constantly stitching a SPA to a separate REST service.”
1.5 Comparison to “API + web portal” two-repo pattern
Two-repo is still valid at scale with platform boundaries. Frameworks shine when product velocity and small team ownership dominate and cross-repo friction hurts more than modular isolation helps.
1.6 Ownership model
Define who reviews server mutations vs UI. Some teams use CODEOWNERS on app/api, server, or service folders—even inside one repo.
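A hypothetical CODEOWNERS fragment for that split (team handles and paths are placeholders, not a prescription):

```
# Server mutations require backend review even inside one repo
/app/api/       @org/backend-team
/server/        @org/backend-team
# UI routes stay with the frontend team
/app/**/page.*  @org/frontend-team
```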
Part 2 — Deep Encyclopedia (What Are Full Stack Frameworks?)
How to use: Each drill = scenario → prompt → model answer → optional code sketch.
E001 — Drill 1: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 1: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #1?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document that API ownership stays with the backend team: CODEOWNERS on the server folders plus hexagonal ports mean the framework hosts their handlers without owning their contracts. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
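The "hexagonal ports" mitigation from the model answer can be sketched like this (TypeScript, all names assumed): handlers depend on an interface the backend team owns, never on a concrete store or vendor SDK.

```typescript
// The "port": a contract the backend team owns and reviews.
interface UserStore {
  findByEmail(email: string): Promise<{ id: string; email: string } | null>;
}

// In-memory adapter for tests; a Postgres adapter would implement
// the same port without changing any handler code.
class MemoryUserStore implements UserStore {
  private users = new Map<string, { id: string; email: string }>();
  async findByEmail(email: string) {
    return this.users.get(email) ?? null;
  }
  async add(user: { id: string; email: string }) {
    this.users.set(user.email, user);
  }
}

// A handler depends only on the port.
async function getProfile(store: UserStore, email: string) {
  const user = await store.findByEmail(email);
  return user ? { status: 200 as const, user } : { status: 404 as const };
}
```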
E002 — Drill 2: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 2: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #2?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document the trust boundary explicitly: everything arriving over HTTP is untrusted input, secrets live only in server-only modules, and the audit can verify authorization gates in one place. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
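A minimal sketch of an explicit authorization gate at the trust boundary (illustrative names, not a specific framework API): privileged logic only runs after the gate, which gives an auditor one place to check.

```typescript
type Session = { userId: string; role: "admin" | "member" } | null;

// The gate: 401 for no session, 403 for insufficient role.
function requireRole(session: Session, role: "admin" | "member") {
  if (!session) return { ok: false as const, status: 401 as const };
  if (role === "admin" && session.role !== "admin")
    return { ok: false as const, status: 403 as const };
  return { ok: true as const, status: 200 as const };
}

function deleteAccount(session: Session, targetId: string) {
  const gate = requireRole(session, "admin");
  if (!gate.ok) return { status: gate.status };
  // Privileged code path lives strictly behind the gate.
  return { status: 204 as const, deleted: targetId };
}
```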
E003 — Drill 3: Single git push should deploy UI + server — how to stage safely?
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 3: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #3?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document a staged rollout for the single artifact: preview deploys per branch, canary traffic before full promotion, and a one-step rollback, validated by change-failure rate. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
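One staging tactic is deterministic canary bucketing: a single push ships UI and server together, but the new code path is exposed to a small, stable cohort first. A sketch under assumed names:

```typescript
// Stable hash -> bucket 0..99; the same user always lands in the
// same bucket, so their experience does not flicker between variants.
function inCanary(userId: string, percent: number): boolean {
  let h = 0;
  for (const c of userId) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100 < percent;
}

// Usage: ramp `percent` from config (5 -> 25 -> 100) as metrics hold.
const variant = inCanary("user-123", 5) ? "new-path" : "stable-path";
```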
E004 — Drill 4: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 4: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #4?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document the chosen shape (for example, a modular monolith with a BFF layer inside Next) and the module boundaries that keep a later service extraction cheap. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E005 — Drill 5: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 5: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #5?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document trace propagation: one trace id flows from the RSC render through the Route Handler into the DB client, and the validating outcome is a single joined trace in the APM tool. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
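A toy sketch of trace-id propagation across the hops (the `traceparent` header name follows W3C Trace Context; the span array stands in for a real OpenTelemetry SDK, and all other names are illustrative):

```typescript
// Reuse the caller's trace id when present; mint one otherwise.
function getOrCreateTraceId(headers: Record<string, string>): string {
  const incoming = headers["traceparent"];
  if (incoming) return incoming.split("-")[1] ?? incoming;
  return Math.random().toString(16).slice(2, 18).padEnd(16, "0");
}

const spans: { traceId: string; name: string }[] = [];

// Stand-in for starting an instrumented span around some work.
function withSpan<T>(traceId: string, name: string, fn: () => T): T {
  spans.push({ traceId, name });
  return fn();
}

// One request: handler span and DB span share the trace id,
// so a backend can join them into a single trace.
function handleRequest(headers: Record<string, string>) {
  const traceId = getOrCreateTraceId(headers);
  return withSpan(traceId, "route-handler", () =>
    withSpan(traceId, "db-query", () => ({ traceId, rows: 3 }))
  );
}
```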
E006 — Drill 6: Rate limits apply at edge vs origin — who configures?
Scenario: Rate limits apply at edge vs origin — who configures? (variation 6: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #6?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document who owns each limit: the platform team configures edge limits, the app team keeps an origin backstop, and both thresholds live in reviewed config. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
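An origin-side backstop can be as small as a fixed-window counter, even when the edge/CDN also enforces a limit. A sketch with illustrative limits (production code would use a shared store such as Redis, not per-instance memory):

```typescript
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true while the key is within its window budget.
  allow(key: string, now: number): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```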
E007 — Drill 7: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 7: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #7?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document a single session helper shared by SSR pages and API routes, so cookie parsing and verification are defined exactly once. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
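A sketch of the "defined once" session helper (hand-rolled cookie parsing for self-containment; real code would verify a signed or encrypted value rather than trusting the raw cookie):

```typescript
// Hypothetical shared module (e.g. server/session.ts), imported by
// both the SSR page loader and the API route handler.
function parseCookies(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const part of header.split(";")) {
    const [k, ...rest] = part.trim().split("=");
    if (k) out[k] = rest.join("=");
  }
  return out;
}

function sessionFromCookieHeader(header: string): { userId: string } | null {
  const sid = parseCookies(header)["session"];
  // Real code: verify signature / decrypt before trusting the value.
  return sid ? { userId: sid } : null;
}
```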
E008 — Drill 8: Background jobs still need a queue — framework does not erase that.
Scenario: Background jobs still need a queue — framework does not erase that. (variation 8: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #8?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document that long-running work goes to an external queue and worker; handlers only enqueue and return quickly, and queue depth plus job latency validate the design. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
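The enqueue-and-return shape can be sketched with an in-memory array standing in for Redis or SQS (names are illustrative):

```typescript
type Job = { type: string; payload: unknown };
const queue: Job[] = [];

// The route handler only enqueues and answers fast; the expensive
// work happens out of band in a worker process.
function handleUpload(fileId: string) {
  queue.push({ type: "generate-thumbnails", payload: { fileId } });
  return { status: 202 as const };
}

// A separate worker drains the queue; a real one would dispatch
// on job.type and retry failures.
function runWorker(): number {
  let processed = 0;
  while (queue.length > 0) {
    queue.shift();
    processed += 1;
  }
  return processed;
}
```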
E009 — Drill 9: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 9: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #9?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document residency-aware data access: one codebase, region-scoped stores, and tests proving EU records never reach non-EU infrastructure. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
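A toy sketch of residency-aware writes, with an in-memory store per region (region names and the record type are illustrative):

```typescript
// One codebase, one write path, but the residency tag on the request
// selects a region-scoped store. A test can then assert that EU
// records never land in the US store.
const stores: Record<"eu" | "us", string[]> = { eu: [], us: [] };

function storeRecord(residency: "eu" | "us", record: string) {
  stores[residency].push(record);
  return { region: residency };
}
```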
E010 — Drill 10: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 10: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #10?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For this scenario, I document a portable interface around vendor services (for example, Vercel KV) so only thin adapters import vendor APIs and migration cost stays bounded. I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
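The portable-abstraction side can be sketched as a tiny interface plus adapters; only the vendor adapter (not shown here) would import Vercel KV, keeping the rest of the codebase portable:

```typescript
// The portable interface the application codes against.
interface KeyValue {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// In-memory adapter; a hypothetical VercelKvAdapter would implement
// the same interface and be the only file importing the vendor SDK.
class MemoryKv implements KeyValue {
  private data = new Map<string, string>();
  async get(key: string) {
    return this.data.get(key) ?? null;
  }
  async set(key: string, value: string) {
    this.data.set(key, value);
  }
}

// Application code sees only the interface.
async function cacheGreeting(kv: KeyValue, name: string) {
  const hit = await kv.get(name);
  if (hit) return hit;
  const value = `hello ${name}`;
  await kv.set(name, value);
  return value;
}
```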
E011 — Drill 11: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 11: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #11?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #11, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E012 — Drill 12: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 12: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #12?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #12, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E013 — Drill 13: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 13: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #13?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #13, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E014 — Drill 14: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 14: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #14?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #14, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E015 — Drill 15: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 15: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #15?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #15, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E016 — Drill 16: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 16: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #16?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #16, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E017 — Drill 17: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 17: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #17?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #17, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E018 — Drill 18: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 18: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #18?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #18, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E019 — Drill 19: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 19: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #19?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #19, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E020 — Drill 20: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 20: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #20?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #20, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E021 — Drill 21: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 21: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #21?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #21, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E022 — Drill 22: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 22: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #22?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #22, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E023 — Drill 23: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 23: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #23?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #23, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E024 — Drill 24: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 24: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #24?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #24, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E025 — Drill 25: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 25: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #25?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #25, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E026 — Drill 26: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 26: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #26?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #26, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E027 — Drill 27: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 27: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #27?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #27, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E028 — Drill 28: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 28: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #28?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #28, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E029 — Drill 29: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 29: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #29?
Model answer: I document which data classes are region-pinned for residency and which may replicate globally, then route reads and writes by the user's home region while keeping one codebase and one deploy pipeline. Measurable outcome: audits show zero cross-region writes for regulated data, and per-region p95 latency stays within budget. Risks: configuration drift between regions and unclear data ownership; mitigations: residency rules encoded in a single module, ADRs per data class, and automated compliance checks in CI.
E030 — Drill 30: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 30: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #30?
Model answer: I document a portable key-value interface (a hexagonal port) that application code depends on, with the vendor SDK (for example Vercel KV) confined to one adapter directory. Measurable outcome: the same contract tests pass against the in-memory adapter and the vendor adapter, and a lint rule proves no vendor import exists outside the adapter. Accepted risk: a thin abstraction tax, paid deliberately to keep an exit path if the vendor contract or pricing changes.
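The port-and-adapter shape in the model answer can be sketched in a few lines. This is an assumption-laden illustration: the interface name, the TTL convention, and the in-memory adapter are invented for the example; a Vercel KV or Redis adapter would implement the same interface behind the same contract tests.

```typescript
// Sketch: a portable KV "port" so application code never imports a vendor SDK.
export interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds?: number): Promise<void>;
}

// Adapter for tests and local dev; production adapters live beside it.
export class InMemoryKV implements KeyValueStore {
  private data = new Map<string, { value: string; expiresAt: number | null }>();

  async get(key: string): Promise<string | null> {
    const entry = this.data.get(key);
    if (!entry) return null;
    if (entry.expiresAt !== null && Date.now() > entry.expiresAt) {
      this.data.delete(key); // lazily expire on read
      return null;
    }
    return entry.value;
  }

  async set(key: string, value: string, ttlSeconds?: number): Promise<void> {
    this.data.set(key, {
      value,
      expiresAt: ttlSeconds ? Date.now() + ttlSeconds * 1000 : null,
    });
  }
}
```

The design point: the "no vendor import outside the adapter" lint rule is only enforceable because the rest of the code names `KeyValueStore`, never the SDK.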
E031 — Drill 31: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 31: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #31?
Model answer: I document the ownership split explicitly: server handlers inside the Next app form a BFF that adapts data for this UI, while domain APIs and their contracts remain with the backend team, enforced through CODEOWNERS and review rules. Measurable outcome: domain-API SLOs and change lead time are unchanged after adoption, and reviews find no domain logic in the BFF layer. This reframes the framework as a consumer of the backend team's APIs, not a replacement for them.
E032 — Drill 32: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 32: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #32?
Model answer: I document a trust-boundary diagram for the single deployable: the browser and everything it sends are untrusted; server components, Route Handlers, and actions are privileged; every handler validates input against a schema and performs its own authorization check rather than trusting the caller. Measurable outcome: the audit closes with 100% of handlers covered by schema validation and authorization tests, tracked in CI. Mitigations: shared validation modules and an ADR recording each boundary and its owner.
E033 — Drill 33: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 33: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #33?
Model answer: I document a staged pipeline behind the single push: preview deploy on every branch, canary on a slice of traffic, then promotion, with database migrations decoupled via expand/contract so UI and server can roll back independently of schema changes. Measurable outcome: change failure rate and time-to-rollback for the combined artifact. Risk: one push coupling unrelated changes; mitigations: small PRs, feature flags, and supply-chain scanning gating the pipeline.
E034 — Drill 34: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 34: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #34?
Model answer: I document the choice of a modular monolith: one deployable, enforced module boundaries, and hexagonal ports on the server, with the Next layer acting as a BFF adapter rather than the home of domain logic. Measurable outcome: zero dependency-rule violations in CI and stable lead time per vertical slice as the codebase grows. The ADR also records the triggers (team size, independent scaling needs) that would justify extracting a service later.
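The "hexagonal ports on the server" phrase can be made concrete with a small sketch. Everything named here (`OrdersPort`, `orderSummary`, `StaticOrders`) is invented for illustration: the point is that the use case depends only on an interface, so a route handler becomes a thin adapter and the same logic runs whether orders live in a microservice or a local module.

```typescript
// Port: the use case's view of the outside world. No framework imports.
export interface OrdersPort {
  listForUser(userId: string): Promise<{ id: string; total: number }[]>;
}

// Use case: pure application logic; a Route Handler would just call this.
export async function orderSummary(orders: OrdersPort, userId: string) {
  const items = await orders.listForUser(userId);
  const total = items.reduce((sum, o) => sum + o.total, 0);
  return { count: items.length, total };
}

// Test/dev adapter; a production adapter would call the orders service.
export class StaticOrders implements OrdersPort {
  constructor(private rows: { id: string; total: number }[]) {}
  async listForUser(_userId: string) {
    return this.rows;
  }
}
```

The "zero dependency-rule violations" outcome in the answer is checkable precisely because the use case file imports no framework or vendor code.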
E035 — Drill 35: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 35: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #35?
Model answer: I document that a single trace id (W3C trace context) is propagated from the RSC render through the Route Handler to the database driver, using OpenTelemetry instrumentation rather than per-layer ad hoc logging. Measurable outcome: the percentage of requests with a complete end-to-end trace, and reduced MTTR for incidents that cross layers. Risk: instrumentation drift as the framework evolves; mitigation: a shared tracing module and trace-completeness checks in CI.
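To show just the propagation idea, here is a hand-rolled stand-in. Real systems use OpenTelemetry and the `traceparent` header; the `spans` array, `startTrace`, and `withSpan` below are illustrative assumptions, not that API. What it demonstrates: every span carries the same `traceId`, which is what lets the RSC render, the handler, and the DB call line up as one trace.

```typescript
// Sketch: manual trace-context propagation across layers.
import { randomUUID } from "node:crypto";

type Span = { traceId: string; name: string; durationMs: number };
export const spans: Span[] = []; // stand-in for an exporter

export function startTrace(): string {
  return randomUUID();
}

// Wrap any unit of work in a span attached to the given trace.
export async function withSpan<T>(
  traceId: string,
  name: string,
  fn: () => Promise<T>,
): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    spans.push({ traceId, name, durationMs: Date.now() - start });
  }
}
```

The "complete end-to-end trace" metric from the answer is then just: does every request produce spans sharing one trace id from first layer to last.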
E036 — Drill 36: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 36: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #36?
Model answer: I document the split in one ADR: coarse, anonymous limits (per IP) are configured at the edge and owned by the platform team; per-user and per-endpoint limits live in application code at the origin and are owned by the product team. Measurable outcome: 429 rates per tier, and an audit showing no unauthenticated endpoint without a limit. Risk: double limiting or gaps at the seam between the two layers; mitigation: limit configuration kept in one reviewed module visible to both teams.
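The origin-side half of that split is typically some token-bucket variant. A minimal sketch, with the caveat that capacity, refill rate, and the injected clock are illustrative choices (the clock is injected purely so the behavior is deterministic in tests):

```typescript
// Sketch: a token-bucket limiter of the kind an origin handler would consult.
export class TokenBucket {
  private tokens: number;
  private last: number;
  constructor(
    private capacity: number,        // max burst size
    private refillPerSec: number,    // sustained rate
    private now: () => number = () => Date.now(),
  ) {
    this.tokens = capacity;
    this.last = this.now();
  }

  // Returns true if the request may proceed, consuming one token.
  allow(): boolean {
    const t = this.now();
    const elapsedSec = (t - this.last) / 1000;
    this.last = t;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // caller responds 429
  }
}
```

At the edge the same policy is usually declarative platform configuration rather than code, which is exactly why the drill asks who owns which half.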
E037 — Drill 37: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 37: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #37?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #37, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E038 — Drill 38: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 38: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #38?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #38, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E039 — Drill 39: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 39: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #39?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #39, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E040 — Drill 40: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 40: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #40?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #40, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E041 — Drill 41: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 41: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #41?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #41, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E042 — Drill 42: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 42: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #42?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #42, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E043 — Drill 43: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 43: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #43?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #43, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E044 — Drill 44: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 44: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #44?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #44, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E045 — Drill 45: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 45: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #45?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #45, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E046 — Drill 46: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 46: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #46?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #46, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E047 — Drill 47: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 47: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #47?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #47, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E048 — Drill 48: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 48: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #48?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #48, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E049 — Drill 49: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 49: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #49?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #49, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E050 — Drill 50: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 50: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #50?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #50, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E051 — Drill 51: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 51: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #51?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #51, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E052 — Drill 52: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 52: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #52?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #52, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E053 — Drill 53: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 53: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #53?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #53, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E054 — Drill 54: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 54: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #54?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #54, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E055 — Drill 55: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 55: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #55?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #55, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E056 — Drill 56: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 56: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #56?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #56, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E057 — Drill 57: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 57: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #57?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #57, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E058 — Drill 58: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 58: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #58?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #58, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E059 — Drill 59: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 59: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #59?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #59, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
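For the residency drill, one codebase can still keep data in-region by resolving the persistence target from the request's residency region. A hedged sketch with illustrative region names and connection strings (none of these are real endpoints):

```typescript
// Illustrative mapping: one deployable, per-region databases.
// Region keys and URLs are assumptions for the sketch.
const DB_BY_REGION: Record<string, string> = {
  eu: "postgres://db.eu.internal/app",
  us: "postgres://db.us.internal/app",
};

// Failing loudly on an unmapped region is the point: silently falling
// back to a default database is how residency violations happen.
function resolveDatabaseUrl(userRegion: string): string {
  const url = DB_BY_REGION[userRegion];
  if (url === undefined) {
    throw new Error(`no database configured for region ${userRegion}`);
  }
  return url;
}
```

The measurable outcome here is checkable: zero cross-region writes in audit logs, plus the p95 latency of the in-region slice.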
E060 — Drill 60: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 60: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #60?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #60, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
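The vendor-lock-in drill is the classic case for the ports mitigation. A minimal sketch: `KVStore` is our own boundary (an assumption of this example, not Vercel's API), so moving off Vercel KV means writing one new adapter rather than touching every caller.

```typescript
// Portable KV port owned by the application, not the vendor.
interface KVStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Test/local adapter. A production adapter would wrap @vercel/kv or a
// Redis client behind this same interface.
class InMemoryKV implements KVStore {
  private data = new Map<string, string>();
  async get(key: string): Promise<string | null> {
    return this.data.get(key) ?? null;
  }
  async set(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
}
```

The trade-off is honest: the port forgoes vendor-specific features (TTL variants, pipelines) unless you widen the interface deliberately, and each widening is an ADR-worthy decision.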
E061 — Drill 61: Backend engineer fears the 'frontend framework' will own their APIs.
Scenario: Backend engineer fears the 'frontend framework' will own their APIs. (variation 61: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #61?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #61, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E062 — Drill 62: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 62: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #62?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #62, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
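For the audit drill, the trust boundary inside one deployable sits where HTTP input enters server code: everything arriving over the wire is untrusted until parsed, and inner modules accept only the parsed type. A minimal hand-rolled sketch (the shape and names are illustrative; in this course's stack you would typically use a shared Zod schema instead):

```typescript
// Untrusted input crosses the boundary exactly once, here.
interface TransferRequest { toAccount: string; amountCents: number }

function parseTransfer(input: unknown): TransferRequest {
  const o = input as Record<string, unknown> | null;
  if (typeof o?.toAccount !== "string" || o.toAccount.length === 0) {
    throw new Error("invalid toAccount");
  }
  if (
    typeof o?.amountCents !== "number" ||
    !Number.isInteger(o.amountCents) ||
    o.amountCents <= 0
  ) {
    throw new Error("invalid amountCents");
  }
  // Inner modules receive TransferRequest, never raw request bodies.
  return { toAccount: o.toAccount, amountCents: o.amountCents };
}
```

This gives the auditor a one-line answer: the boundary is the parse function at the HTTP edge, and everything behind it is typed and privileged.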
E063 — Drill 63: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 63: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #63?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #63, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E064 — Drill 64: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 64: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #64?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #64, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
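The monolith-vs-BFF drill turns on one question: does each module hide behind an explicit port? A hedged sketch (all names here, `BillingPort`, `billingSummaryHandler`, and the fake ledger, are invented for illustration): the Next app stays one deployable, but the BFF route handler cannot tell whether billing is an in-process module or an HTTP client to the existing backend.

```typescript
// Modular-monolith boundary: the route handler composes a port.
interface BillingPort {
  invoiceTotalCents(customerId: string): Promise<number>;
}

// In-process implementation today; tomorrow this could call the backend
// team's existing billing service over HTTP with no caller changes.
const inProcessBilling: BillingPort = {
  async invoiceTotalCents(customerId: string): Promise<number> {
    const fakeLedger: Record<string, number> = { c1: 1250 }; // stand-in data
    return fakeLedger[customerId] ?? 0;
  },
};

// The BFF handler owns presentation concerns only (formatting, shaping).
async function billingSummaryHandler(port: BillingPort, customerId: string) {
  const total = await port.invoiceTotalCents(customerId);
  return { customerId, totalFormatted: `$${(total / 100).toFixed(2)}` };
}
```

That substitutability is the measurable part of the decision: the cost of extracting a module later is one adapter, which an ADR can state up front.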
E065 — Drill 65: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 65: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #65?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #65, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
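For the observability drill, "one trace" means one trace ID minted at the edge and threaded through every layer. A minimal propagation sketch, explicitly not OpenTelemetry (its API is richer); `TraceContext` and `span` are names assumed for this example:

```typescript
// One traceId per request; each layer records a span against it.
interface TraceContext { traceId: string; spans: string[] }

function startTrace(): TraceContext {
  return { traceId: Math.random().toString(16).slice(2, 10), spans: [] };
}

// A real tracer would also capture timings and parent/child span IDs.
function span<T>(ctx: TraceContext, name: string, fn: () => T): T {
  ctx.spans.push(name);
  return fn();
}

// The same context flows through RSC render, route handler, and DB call,
// so all three show up under a single trace.
function handleRequest(ctx: TraceContext): string {
  return span(ctx, "rsc-render", () =>
    span(ctx, "route-handler", () =>
      span(ctx, "db-query", () => "rows")));
}
```

In practice the context travels via `traceparent` headers and async-local storage rather than an explicit parameter, but the invariant is the same: every span carries the one ID.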
E066 — Drill 66: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 66: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #66?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #66, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
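The edge-vs-origin drill is easier to negotiate once the algorithm is on the table: the same token bucket can run coarse per-IP at the edge (platform team configures) and strict per-user at the origin (application team configures). A self-contained sketch with an injectable clock; capacity and refill numbers are illustrative:

```typescript
// Token bucket: capacity tokens, refilled at refillPerSec.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    private now: () => number = () => Date.now(), // injectable for tests
  ) {
    this.tokens = capacity;
    this.last = this.now();
  }

  allow(): boolean {
    const t = this.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / 1000) * this.refillPerSec,
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Writing the ownership split into an ADR (edge limiter: platform team, per-IP; origin limiter: app team, per-user) answers the "who configures?" question before the incident, not during it.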
E067 — Drill 67: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 67: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #67?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #67, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
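The shared-session drill works precisely because SSR pages and API routes sit behind the same origin in one deployable: both can verify the same signed cookie. A minimal HMAC sketch using `node:crypto`; the `userId.signature` format and the inline secret are assumptions for illustration (real code reads the secret from the environment and would use a session library):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "demo-secret"; // assumption: real code loads this from env

function signSession(userId: string): string {
  const sig = createHmac("sha256", SECRET).update(userId).digest("hex");
  return `${userId}.${sig}`;
}

// Both the SSR page and the route handler call this same verifier.
function verifySession(cookie: string): string | null {
  const dot = cookie.lastIndexOf(".");
  if (dot < 0) return null;
  const userId = cookie.slice(0, dot);
  const sig = Buffer.from(cookie.slice(dot + 1), "hex");
  const expected = createHmac("sha256", SECRET).update(userId).digest();
  // Constant-time compare avoids leaking signature bytes via timing.
  if (sig.length !== expected.length || !timingSafeEqual(sig, expected)) {
    return null;
  }
  return userId;
}
```

The validating outcome is again measurable: zero duplicated auth code paths between page rendering and API handling, and one place to rotate the secret.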
E068 — Drill 68: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 68: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #68?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #68, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E069 — Drill 69: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 69: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #69?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #69, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E070 — Drill 70: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 70: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #70?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #70, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E071 — Drill 71: Backend engineer fears the 'frontend framework' will own their APIs.
Scenario: Backend engineer fears the 'frontend framework' will own their APIs. (variation 71: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #71?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #71, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E072 — Drill 72: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 72: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #72?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #72, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E073 — Drill 73: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 73: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #73?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #73, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E074 — Drill 74: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 74: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #74?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #74, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E075 — Drill 75: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 75: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #75?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #75, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E076 — Drill 76: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 76: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #76?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #76, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E077 — Drill 77: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 77: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #77?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #77, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E078 — Drill 78: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 78: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #78?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #78, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E079 — Drill 79: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 79: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #79?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #79, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E080 — Drill 80: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 80: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #80?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #80, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E081 — Drill 81: Backend engineer fears the 'frontend framework' will own their APIs.
Scenario: Backend engineer fears the 'frontend framework' will own their APIs. (variation 81: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #81?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #81, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E082 — Drill 82: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 82: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #82?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #82, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E083 — Drill 83: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 83: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #83?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #83, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E084 — Drill 84: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 84: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #84?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #84, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E085 — Drill 85: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 85: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #85?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #85, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E086 — Drill 86: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 86: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #86?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #86, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E087 — Drill 87: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 87: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #87?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #87, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E088 — Drill 88: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 88: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #88?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #88, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
E089 — Drill 89: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 89: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #89?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts). I propose the smallest vertical slice through the stack (UI → server → persistence) and measure lead time, defect rate, and p95 latency for that slice. For drill #89, I call out risks (coupling, vendor APIs, unclear ownership) and mitigations (module boundaries, ADRs, hexagonal ports on the server). I avoid treating any framework as magic—it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
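The residency decision reduces to one lookup in the single codebase: tenant → home region → data endpoint. A minimal sketch, assuming a static mapping; region names, tenant IDs, and URLs are illustrative, and production would resolve them from a tenant directory service.

```typescript
// One codebase, region-pinned data: the app picks the matching data
// endpoint per request instead of deploying per-region forks.
type Region = "eu-central" | "us-east";

const DATA_ENDPOINTS: Record<Region, string> = {
  "eu-central": "https://db.eu.example.internal",
  "us-east": "https://db.us.example.internal",
};

// Illustrative static mapping; real systems read this from a directory.
const TENANT_HOME_REGION: Record<string, Region> = {
  "acme-gmbh": "eu-central",
  "acme-inc": "us-east",
};

function dataEndpointFor(tenantId: string): string {
  const region = TENANT_HOME_REGION[tenantId];
  if (!region) throw new Error(`unknown tenant: ${tenantId}`);
  return DATA_ENDPOINTS[region];
}
```

The measurable outcome follows directly: any write issued to an endpoint outside the tenant's home region is a residency violation you can alert on.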
E090 — Drill 90: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 90: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #90?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #90 the documented decision is to wrap Vercel KV behind a small app-owned KV port so a Redis adapter can replace it. Validation: CI runs the test suite against both adapters, and no module outside the adapter imports the vendor SDK. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
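The portable-abstraction side of this trade-off is a one-interface pattern: feature code imports only an app-owned port, and the vendor SDK stays inside one adapter module. A sketch under that assumption; `KVStore`, `MemoryKV`, and `cacheUserName` are illustrative names, not a real API.

```typescript
// Portable KV port: application code imports only this interface.
// A Vercel KV adapter and a Redis adapter would both implement it,
// confining the vendor SDK to one module.
interface KVStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Test double / local-dev adapter.
class MemoryKV implements KVStore {
  private m = new Map<string, string>();
  async get(key: string) { return this.m.get(key) ?? null; }
  async set(key: string, value: string) { this.m.set(key, value); }
}

// Feature code is written once against the port.
async function cacheUserName(kv: KVStore, id: string, name: string) {
  await kv.set(`user:${id}:name`, name);
}
```

The cost of the port is real (you forgo vendor-specific features); the ADR should record why portability was worth it, not just assert it.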
E091 — Drill 91: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 91: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #91?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #91 the documented decision is that the Next server layer acts only as a BFF: domain APIs remain owned by the backend team, and the BFF consumes them through versioned contracts. Validation: contract tests pass in both pipelines and ownership is explicit in CODEOWNERS. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
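The ownership fear dissolves if the Next layer stays thin: it adapts backend data for one screen and holds no business rules. A hypothetical sketch of that split; `OrdersApi`, `OrderSummary`, and `ordersForDashboard` are illustrative names, with the contract type standing in for the backend team's real API.

```typescript
// Contract owned by the backend team; the BFF only consumes it.
interface OrderSummary { id: string; total: number }
type OrdersApi = (userId: string) => Promise<OrderSummary[]>;

// BFF-side adapter: shapes backend data for one screen.
// No pricing, no authorization logic here; that stays with the API owners.
async function ordersForDashboard(api: OrdersApi, userId: string) {
  const orders = await api(userId);
  return {
    count: orders.length,
    grandTotal: orders.reduce((sum, o) => sum + o.total, 0),
  };
}
```

Because the BFF depends on a typed contract rather than the backend's internals, contract tests on both sides make the ownership boundary checkable, not just a promise.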
E092 — Drill 92: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 92: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #92?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #92 the documented decision is a trust-zone map for the deployable (edge middleware, server handlers, external services) with all input validated at the first server boundary. Validation: the audit can trace every privileged call back to a validated entry point. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
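The "first server boundary" part of that answer is concrete: anything arriving at a server handler is `unknown` until parsed. A minimal hand-rolled parser as a sketch (real code would likely use a schema library such as Zod); `TransferRequest` and `parseTransfer` are illustrative names.

```typescript
// Trust boundary: untrusted input becomes a typed value or is rejected
// here, before it can reach privileged domain code.
interface TransferRequest { toAccount: string; amountCents: number }

function parseTransfer(input: unknown): TransferRequest {
  if (typeof input !== "object" || input === null) throw new Error("not an object");
  const o = input as Record<string, unknown>;
  if (typeof o.toAccount !== "string" || o.toAccount.length === 0)
    throw new Error("invalid toAccount");
  if (typeof o.amountCents !== "number" || !Number.isInteger(o.amountCents) || o.amountCents <= 0)
    throw new Error("invalid amountCents");
  return { toAccount: o.toAccount, amountCents: o.amountCents };
}
```

For the audit, the useful property is structural: domain functions accept only `TransferRequest`, so the type system itself witnesses that the boundary was crossed through a validator.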
E093 — Drill 93: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 93: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #93?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #93 the documented decision is one pipeline with staged rollout: preview deploys per PR, an edge-level canary percentage, and automatic rollback on error-rate regression. Validation: change failure rate and time-to-restore stay within target. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
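The canary step of a staged single-push deploy can be as small as a deterministic hash: each user consistently lands on the old or new revision, and you raise the percentage as metrics hold. A sketch under those assumptions; the hash and function names are illustrative, not a platform API.

```typescript
// Deterministic bucketing: the same user always gets the same bucket,
// so a canary rollout is stable across requests.
function hashToPercent(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % 100; // 0..99
}

// One git push deploys both revisions; this gate decides which one
// serves a given user. canaryPercent ramps 0 -> 100 as metrics hold.
function useNewRevision(userId: string, canaryPercent: number): boolean {
  return hashToPercent(userId) < canaryPercent;
}
```

Pairing this gate with an automatic error-rate check is what turns "one push deploys everything" from a risk into a controlled rollout.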
E094 — Drill 94: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 94: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #94?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #94 the documented decision is to start as a modular monolith inside Next (feature modules behind explicit ports) and promote a module to a separate service only when scaling or ownership demands it. Validation: dependency checks stay acyclic and per-module lead time does not degrade. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
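"Modular monolith behind ports" has a small concrete shape: feature code depends on interfaces, and a composition root wires implementations. Promoting a module to its own service later changes only the wiring. A hypothetical sketch; `BillingPort`, `NotifyPort`, and `sendInvoiceSummary` are illustrative names.

```typescript
// Ports owned by each module; callers never import sibling internals.
interface BillingPort { invoiceTotal(userId: string): Promise<number> }
interface NotifyPort { send(userId: string, msg: string): Promise<void> }

// Feature logic depends only on ports. Whether `billing` is an in-process
// module or a remote service is invisible here, which is the whole point.
async function sendInvoiceSummary(
  billing: BillingPort,
  notify: NotifyPort,
  userId: string,
): Promise<string> {
  const total = await billing.invoiceTotal(userId);
  const msg = `Your invoice total is ${total} cents`;
  await notify.send(userId, msg);
  return msg;
}
```

This is also what makes the BFF-inside-Next option reversible: the same ports can later be backed by remote calls without touching the feature code.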
E095 — Drill 95: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 95: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #95?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #95 the documented decision is to propagate one trace context (W3C traceparent) from the RSC render through Route Handlers to DB calls. Validation: a single trace ID spans end to end for nearly all sampled requests. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
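The propagation idea is layer-agnostic: mint a trace ID at the entry point and thread it through every call so all spans share it. Real setups would use OpenTelemetry's context API rather than a manual parameter; this sketch, with illustrative names (`Span`, `record`, stubbed layers), only shows why the trace stays whole.

```typescript
// A single trace across layers: the entry point mints the ID and every
// layer records its span against the same ID.
interface Span { traceId: string; name: string }
const spans: Span[] = [];

function record(traceId: string, name: string): void {
  spans.push({ traceId, name });
}

async function dbQuery(traceId: string): Promise<string> {
  record(traceId, "db.query");
  return "row";
}

async function service(traceId: string): Promise<string> {
  record(traceId, "service.load");
  return dbQuery(traceId);
}

async function handler(): Promise<string> {
  const traceId = Math.random().toString(16).slice(2); // minted once, at the edge of the system
  record(traceId, "http.handler");
  return service(traceId);
}
```

The drill's measurable outcome is exactly the property the test checks: every span in one request carries the same trace ID.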
E096 — Drill 96: Rate limits apply at edge vs origin
Scenario: Rate limits apply at edge vs origin — who configures? (variation 96: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #96?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #96 the documented decision is layered limits with explicit owners: coarse per-IP limits at the edge (platform team) and business-aware per-user limits at the origin (app team). Validation: abusive traffic is rejected at the edge before consuming origin compute, with 429 rates dashboarded per layer. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
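The same limiting logic can sit at either layer; what differs is the key (IP at the edge, user or plan at the origin) and the owner. A fixed-window sketch with illustrative parameters; production edge limits would usually be platform configuration rather than app code.

```typescript
// Fixed-window rate limiter: N requests per key per window.
// At the edge the key is an IP; at the origin it can be a user ID,
// which is why both layers exist and need separate owners.
class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; n: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number): boolean {
    const e = this.counts.get(key);
    if (!e || now - e.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, n: 1 });
      return true;
    }
    e.n += 1;
    return e.n <= this.limit;
  }
}
```

Documenting which team tunes `limit` and `windowMs` at each layer is the ownership decision the drill asks for.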
E097 — Drill 97: Cookie sessions shared between SSR page and API route.
Scenario: Cookie sessions shared between SSR page and API route. (variation 97: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #97?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #97 the documented decision is a single HttpOnly, Secure, SameSite session cookie verified by shared server-side code used by both SSR pages and API routes. Validation: integration tests show no divergence in auth decisions between the two paths. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
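What makes the shared session workable is that both the SSR page and the API route call the same sign/verify functions on the same cookie format. A simplified HMAC sketch (a real setup would add expiry, rotation, and a secret from the environment); the `value.signature` format and constant secret here are assumptions for illustration only.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// One signing scheme shared by SSR pages and API routes: cookie value is
// "payload.hexSignature". Illustrative only; never hardcode secrets.
const SECRET = "demo-secret-do-not-ship";

function sign(value: string): string {
  const sig = createHmac("sha256", SECRET).update(value).digest("hex");
  return `${value}.${sig}`;
}

function verify(cookie: string): string | null {
  const i = cookie.lastIndexOf(".");
  if (i < 0) return null;
  const value = cookie.slice(0, i);
  const sig = Buffer.from(cookie.slice(i + 1), "hex");
  const expected = createHmac("sha256", SECRET).update(value).digest();
  // Length guard first: timingSafeEqual throws on mismatched lengths.
  if (sig.length !== expected.length || !timingSafeEqual(sig, expected)) return null;
  return value;
}
```

Because both request paths import the same `verify`, an auth decision cannot silently diverge between the SSR render and the API route, which is the property the drill's integration tests should assert.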
E098 — Drill 98: Background jobs still need a queue
Scenario: Background jobs still need a queue — framework does not erase that. (variation 98: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #98?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #98 the documented decision is that background work never runs inside request handlers: the handler validates and enqueues to an external queue (e.g. SQS or BullMQ), and a separate worker consumes it. Validation: queue depth and job completion rate stay within SLO even when web traffic spikes. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E099 — Drill 99: Multi-region data residency with single codebase.
Scenario: Multi-region data residency with single codebase. (variation 99: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #99?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #99 the documented decision is to pin each tenant's data to a home region and resolve the data endpoint per request, even though the codebase is single. Validation: audit logs show zero cross-region writes and per-region p95 latency stays compliant. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E100 — Drill 100: Vendor-specific APIs (Vercel KV) vs portable abstractions.
Scenario: Vendor-specific APIs (Vercel KV) vs portable abstractions. (variation 100: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #100?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #100 the documented decision is to wrap Vercel KV behind a small app-owned KV port so a Redis adapter can replace it. Validation: CI runs the test suite against both adapters, and no module outside the adapter imports the vendor SDK. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E101 — Drill 101: Backend engineer fears 'frontend framework' owns their APIs.
Scenario: Backend engineer fears 'frontend framework' owns their APIs. (variation 101: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #101?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #101 the documented decision is that the Next server layer acts only as a BFF: domain APIs remain owned by the backend team, and the BFF consumes them through versioned contracts. Validation: contract tests pass in both pipelines and ownership is explicit in CODEOWNERS. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E102 — Drill 102: Security audit asks where trust boundaries sit in one deployable.
Scenario: Security audit asks where trust boundaries sit in one deployable. (variation 102: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #102?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #102 the documented decision is a trust-zone map for the deployable (edge middleware, server handlers, external services) with all input validated at the first server boundary. Validation: the audit can trace every privileged call back to a validated entry point. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E103 — Drill 103: Single git push should deploy UI + server
Scenario: Single git push should deploy UI + server — how to stage safely? (variation 103: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #103?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #103 the documented decision is one pipeline with staged rollout: preview deploys per PR, an edge-level canary percentage, and automatic rollback on error-rate regression. Validation: change failure rate and time-to-restore stay within target. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E104 — Drill 104: Monolith vs modular monolith vs BFF inside Next.
Scenario: Monolith vs modular monolith vs BFF inside Next. (variation 104: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #104?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #104 the documented decision is to start as a modular monolith inside Next (feature modules behind explicit ports) and promote a module to a separate service only when scaling or ownership demands it. Validation: dependency checks stay acyclic and per-module lead time does not degrade. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)
E105 — Drill 105: Observability: one trace across RSC + Route Handler + DB.
Scenario: Observability: one trace across RSC + Route Handler + DB. (variation 105: assume 2026 maintenance expectations—security patches, LTS Node, and supply-chain scanning in CI.)
Prompt: What architectural decision do you document, and what measurable outcome validates it for case #105?
Model answer: I separate product goals (time-to-market, SEO, offline, compliance) from organizational constraints (skills, existing services, cloud contracts), propose the smallest vertical slice through the stack (UI → server → persistence), and measure lead time, defect rate, and p95 latency for that slice. For drill #105 the documented decision is to propagate one trace context (W3C traceparent) from the RSC render through Route Handlers to DB calls. Validation: a single trace ID spans end to end for nearly all sampled requests. I avoid treating any framework as magic; it still needs engineering discipline.
Single deployable artifact
├── UI routes / components
├── Server handlers / actions
├── Shared validation + types
└── External services (DB, queue, payments)