Episode 9 — System Design / 9.2 — Design Principles
9.2 — Interview Questions: Design Principles
How to use this material:
- Read each question and try answering it out loud before reading the model answer.
- Pay attention to the "Why interviewers ask" section — understand what they're really testing.
- Practice explaining with concrete examples, not just definitions.
- Time yourself — aim for 1-2 minute answers for beginner, 2-3 for advanced.
- The best answers combine theory with practical experience.
Beginner (Q1–Q4)
Q1. What are the SOLID principles? Can you explain each one briefly?
Why interviewers ask: Tests breadth of knowledge on fundamental OOP design principles. A baseline question to gauge whether the candidate has studied design beyond just syntax.
Model answer:
SOLID is an acronym for five design principles that guide you toward writing maintainable, flexible code:
S — Single Responsibility Principle (SRP): A class or module should have one, and only one, reason to change. This means it should serve one stakeholder or business concern. For example, a UserRepository handles data access — it shouldn't also send emails.
O — Open/Closed Principle (OCP): Software entities should be open for extension but closed for modification. You should be able to add new behavior without changing existing, tested code. The Strategy pattern is a classic way to achieve this — new strategies are new classes, not modifications to a switch statement.
L — Liskov Substitution Principle (LSP): Any subtype should be usable wherever its parent type is expected, without breaking the program. The classic violation is Square extends Rectangle — if code sets width and height independently, a Square breaks that assumption.
I — Interface Segregation Principle (ISP): Don't force classes to implement interfaces they don't use. Instead of one fat interface, create multiple focused interfaces so that each implementor only deals with methods it needs.
D — Dependency Inversion Principle (DIP): High-level business logic should not depend on low-level implementation details. Both should depend on abstractions (interfaces). This enables swapping databases, email providers, or any dependency without rewriting business logic.
In practice, SOLID principles overlap. SRP and ISP both promote focus. OCP and DIP both promote abstraction. They work as a system, not individual rules.
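The Square/Rectangle violation mentioned under LSP can be made concrete with a sketch (the class and function names here are illustrative, not from a real codebase):

```typescript
// A Rectangle whose width and height vary independently.
class Rectangle {
  constructor(public width: number, public height: number) {}
  setWidth(w: number) { this.width = w; }
  setHeight(h: number) { this.height = h; }
  area(): number { return this.width * this.height; }
}

// Square "is-a" Rectangle taxonomically, but not behaviorally:
// keeping the sides equal breaks callers that set them independently.
class Square extends Rectangle {
  setWidth(w: number) { this.width = w; this.height = w; }
  setHeight(h: number) { this.width = h; this.height = h; }
}

// Correct for any true Rectangle...
function stretch(r: Rectangle): number {
  r.setWidth(4);
  r.setHeight(5);
  return r.area();
}

stretch(new Rectangle(1, 1)); // → 20
stretch(new Square(1, 1));    // → 25, so Square is not substitutable
```

The point is that LSP is about behavioral contracts, not type hierarchies: the subclass compiles fine, but it silently changes what callers can assume.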
Q2. What does DRY mean, and when can following it too strictly cause problems?
Why interviewers ask: Tests whether the candidate understands the nuance beyond "don't repeat code." Many junior developers over-apply DRY, creating premature abstractions.
Model answer:
DRY stands for "Don't Repeat Yourself," but it's fundamentally about knowledge, not code. The principle says that every piece of business knowledge should have a single, authoritative representation. If your password validation rules appear in three routes, that's a DRY violation — change the rules and you must update three places.
However, DRY can be over-applied in two ways:
First, accidental duplication — two code blocks look identical but represent different business concepts. For example, an employee tax calculation and a sales tax calculation might both multiply by 0.30 today, but they change for completely different reasons. Merging them into one function couples unrelated concerns.
Second, premature abstraction — extracting a shared function after seeing duplication twice creates an abstraction you don't fully understand yet. The "Rule of Three" says wait until the third occurrence, because by then you have enough examples to see the real pattern.
The right balance: DRY your business rules and configuration, but don't DRY code that merely looks similar. Coupling unrelated things is worse than a little duplication.
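The accidental-duplication case above can be sketched like this (the 0.30 rates and function names are invented for illustration):

```typescript
// Both rates happen to be 0.30 today, but they change for entirely
// different reasons: one when payroll tax law changes, one when
// sales tax rules change.
const EMPLOYEE_TAX_RATE = 0.30;
const SALES_TAX_RATE = 0.30;

function employeeTax(grossPay: number): number {
  return grossPay * EMPLOYEE_TAX_RATE;
}

function salesTax(saleAmount: number): number {
  return saleAmount * SALES_TAX_RATE;
}

// Merging these into a single applyTax(amount) would couple payroll
// to sales: the day one rate changes, you are forced to un-merge them.
```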
Q3. What is the difference between composition and inheritance? When would you choose each?
Why interviewers ask: Tests understanding of object-oriented design trade-offs. Many codebases suffer from deep inheritance hierarchies, and knowing when to prefer composition is a sign of experience.
Model answer:
Inheritance establishes an "is-a" relationship — a Dog IS an Animal. The child class gets all parent behavior by default. It works well for shallow hierarchies with a genuine taxonomic relationship: React's Component class, or an Error subclass.
Composition establishes a "has-a" or "uses-a" relationship — a Car HAS an Engine. Instead of inheriting behavior, you compose it by holding references to other objects that provide the behavior.
I prefer composition in most cases because:
- Inheritance is rigid. You can't change your parent at runtime. With composition, you can swap an EmailNotifier for an SMSNotifier dynamically.
- Inheritance creates coupling. Changes to the parent class ripple through every child. With composition, components are independent.
- Inheritance hierarchies explode. If you need combinations of behaviors (a FlyingSwimmingAnimal), inheritance forces you to create exponentially many subclasses. Composition lets you combine behaviors freely.
- JavaScript doesn't support multiple inheritance. You can't extend two classes, but you can compose any number of behaviors.
I use inheritance when: there's a genuine, shallow "is-a" relationship (2-3 levels max), subtypes truly share identity and behavior, or I'm extending a framework that expects it (like Express error handling classes).
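The runtime-swap advantage of composition can be sketched as follows (the Notifier interface and class names are hypothetical):

```typescript
interface Notifier {
  notify(userId: string, message: string): string;
}

class EmailNotifier implements Notifier {
  notify(userId: string, message: string) {
    return `email to ${userId}: ${message}`;
  }
}

class SMSNotifier implements Notifier {
  notify(userId: string, message: string) {
    return `sms to ${userId}: ${message}`;
  }
}

// OrderAlerts HAS a Notifier. The behavior can be swapped at runtime,
// which an inheritance hierarchy cannot do.
class OrderAlerts {
  constructor(private notifier: Notifier) {}
  use(notifier: Notifier) { this.notifier = notifier; }
  orderShipped(userId: string) {
    return this.notifier.notify(userId, "Your order shipped");
  }
}

const alerts = new OrderAlerts(new EmailNotifier());
alerts.orderShipped("u1"); // "email to u1: Your order shipped"
alerts.use(new SMSNotifier());
alerts.orderShipped("u1"); // "sms to u1: Your order shipped"
```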
Q4. What is YAGNI and how does it relate to KISS?
Why interviewers ask: Tests pragmatism and ability to resist over-engineering — a common problem in real-world teams.
Model answer:
YAGNI (You Aren't Gonna Need It) says: don't build features or abstractions until you actually need them. It fights speculative complexity — code written "just in case" that adds maintenance burden for a scenario that may never arrive. For example, building a multi-tenant architecture when you have one customer, or creating abstract factories when you have two object types.
KISS (Keep It Simple, Stupid) says: choose the simplest solution that works. It fights unnecessary cleverness — a one-liner reduce chain that takes 2 minutes to parse when a simple for loop would be immediately clear.
They're related but distinct:
- YAGNI is about what you build — don't build things you don't need yet.
- KISS is about how you build — build things simply.
Together, they guard against over-engineering. A codebase that follows both principles builds only what's needed (YAGNI) and builds it in the simplest way possible (KISS).
Important caveat: YAGNI applies to features and abstractions, NOT to code quality. You always need tests, error handling, security, and documentation — those are not speculative.
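The "unnecessary cleverness" that KISS fights can be shown in a small sketch (the order data is invented for illustration):

```typescript
const orders = [{ total: 10 }, { total: 25 }, { total: 5 }];

// Clever: correct, but the reader has to unpack the reduce to be sure.
const sumClever = orders.reduce((acc, o) => acc + o.total, 0);

// Simple: immediately clear to any reader, and just as correct.
let sumSimple = 0;
for (const order of orders) {
  sumSimple += order.total;
}
// Both are 40. When the one-liner costs more reading time than it
// saves in typing, KISS says prefer the loop.
```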
Intermediate (Q5–Q8)
Q5. How would you apply the Open/Closed Principle in a real Node.js/Express application?
Why interviewers ask: Tests ability to translate abstract principles into practical code. OCP is the most actionable SOLID principle for day-to-day development.
Model answer:
Express itself is a brilliant example of OCP. The core Express application is closed for modification — you never edit Express source code. But it's open for extension through middleware.
Here's how I apply OCP in Express applications at three levels:
1. Middleware chain (cross-cutting concerns): Instead of modifying route handlers to add logging, auth, or rate limiting, I write middleware that plugs into the chain:
app.use(cors()); // Extension: CORS handling
app.use(helmet()); // Extension: Security headers
app.use(authenticate); // Extension: Auth check
Each middleware is an independent extension. Adding CSRF protection means adding one app.use() call, not modifying existing handlers.
2. Strategy pattern (business logic): When I have a growing if/else or switch on a type — like payment methods or notification channels — I extract each case into a class implementing a shared interface. Adding a new payment method means creating a new class, not modifying the payment processor.
3. Event-driven extension: For side effects like sending emails after an order, I use an event bus. The order service emits order:placed, and any number of listeners can subscribe. Adding analytics tracking means adding a new listener, not touching the order service.
The key mental model is: whenever I see code that changes every time a new variant is added, that's an OCP violation. I refactor it so new variants are new code, not modified code.
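The Strategy extraction described in point 2 might look like this sketch (the payment classes and registry are illustrative, not a specific library's API):

```typescript
interface PaymentMethod {
  charge(amountCents: number): string;
}

class CardPayment implements PaymentMethod {
  charge(amountCents: number) { return `card charged ${amountCents}`; }
}

class PaypalPayment implements PaymentMethod {
  charge(amountCents: number) { return `paypal charged ${amountCents}`; }
}

// The processor is closed for modification: adding a payment method
// means registering a new class, not editing a switch statement.
class PaymentProcessor {
  private methods = new Map<string, PaymentMethod>();
  register(name: string, method: PaymentMethod) {
    this.methods.set(name, method);
  }
  process(name: string, amountCents: number): string {
    const method = this.methods.get(name);
    if (!method) throw new Error(`Unknown payment method: ${name}`);
    return method.charge(amountCents);
  }
}

const processor = new PaymentProcessor();
processor.register("card", new CardPayment());
processor.register("paypal", new PaypalPayment());
processor.process("card", 2500); // "card charged 2500"
```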
Q6. Explain the Dependency Inversion Principle with a practical example. How does it help with testing?
Why interviewers ask: DIP is the principle most directly connected to testability and architecture quality. Interviewers want to see if you understand why constructor injection matters.
Model answer:
DIP says that high-level modules (business logic) should not depend on low-level modules (database, email, file system). Both should depend on abstractions — interfaces that the high-level module defines.
Here's a practical example. Say I have an OrderService that needs to save orders and send confirmation emails:
// WITHOUT DIP — OrderService depends on concrete implementations
class OrderService {
  private db = new PostgresDatabase();    // Hardcoded dependency
  private mailer = new SendGridEmailer(); // Hardcoded dependency
}
This is untestable — every test needs a real Postgres database and a real SendGrid account. And switching to MySQL or Mailgun means rewriting OrderService.
// WITH DIP — OrderService depends on abstractions
interface Database { insert(table: string, data: any): Promise<void>; }
interface EmailService { send(to: string, subject: string, body: string): Promise<void>; }

class OrderService {
  constructor(private db: Database, private mailer: EmailService) {}
}
Now testing is trivial — I inject mock implementations:
const mockDb = { insert: jest.fn() };
const mockMailer = { send: jest.fn() };
const service = new OrderService(mockDb, mockMailer);
No real database, no real email. Tests are fast, isolated, and deterministic.
For production, I wire up real implementations in a composition root — one file that assembles all dependencies. The business logic never knows or cares which implementations it's using.
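A composition root for this design might look like the sketch below. InMemoryDatabase and ConsoleEmailer are stand-ins I've invented for illustration; in production they would be real Postgres and SendGrid adapters:

```typescript
interface Database { insert(table: string, data: unknown): Promise<void>; }
interface EmailService { send(to: string, subject: string, body: string): Promise<void>; }

// Stand-in implementations of the two abstractions.
class InMemoryDatabase implements Database {
  rows: unknown[] = [];
  async insert(_table: string, data: unknown) { this.rows.push(data); }
}

class ConsoleEmailer implements EmailService {
  sent: string[] = [];
  async send(to: string, subject: string) { this.sent.push(`${to}: ${subject}`); }
}

class OrderService {
  constructor(private db: Database, private mailer: EmailService) {}
  async placeOrder(userId: string, total: number) {
    await this.db.insert("orders", { userId, total });
    await this.mailer.send(userId, "Order confirmed", `Total: ${total}`);
  }
}

// The composition root: the ONE place that knows about concrete classes.
// Swap these two constructors to change providers; OrderService never changes.
const db = new InMemoryDatabase();
const mailer = new ConsoleEmailer();
const orderService = new OrderService(db, mailer);
```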
Q7. What are design patterns? How do you decide when to use one versus writing simpler code?
Why interviewers ask: Tests maturity — junior developers either ignore patterns or use them everywhere. The interviewer wants to see balanced judgment.
Model answer:
Design patterns are reusable solutions to recurring design problems. They were catalogued by the Gang of Four in 1994, organized into three categories: Creational (how to make objects), Structural (how to compose objects), and Behavioral (how objects communicate).
Patterns give you two things: a proven solution template and a shared vocabulary. When I say "we use the Observer pattern for our event system," the entire team immediately understands the architecture.
But patterns have costs — indirection, abstraction overhead, and cognitive load. My decision framework:
- Do I have a real, recurring problem? If I'm fighting the same structural issue in three places, a pattern is justified.
- Does the pattern's complexity match the problem's complexity? A Factory pattern for two object types is overkill. For fifteen types loaded from config? Justified.
- Will my team recognize the pattern? If the team doesn't know the Visitor pattern, using it creates confusion, not clarity.
- Is the language already providing it? In JavaScript, closures give you Strategy for free. Node modules give you Singleton for free. EventEmitter gives you Observer. Don't build what the language already provides.
I follow the Rule of Three: write simple code the first time, note the similarity the second time, and extract a pattern the third time. This avoids premature abstraction while catching real recurring problems.
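The point about the language already providing patterns can be sketched in Node.js terms: closures give you Strategy, and the built-in EventEmitter gives you Observer (the discount functions and event name are invented for illustration):

```typescript
import { EventEmitter } from "node:events";

// Strategy via plain functions and closures: no class hierarchy needed.
type Discount = (price: number) => number;
const noDiscount: Discount = (p) => p;
const tenPercentOff: Discount = (p) => p - p / 10;

function checkout(price: number, discount: Discount): number {
  return discount(price);
}

checkout(100, noDiscount);    // → 100
checkout(100, tenPercentOff); // → 90

// Observer via the built-in EventEmitter: no pattern boilerplate.
const bus = new EventEmitter();
const seen: string[] = [];
bus.on("order:placed", (id: string) => seen.push(id));
bus.emit("order:placed", "order-1"); // seen is now ["order-1"]
```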
Q8. What is a feature flag? How would you implement one?
Why interviewers ask: Tests practical deployment and release engineering knowledge. Feature flags are a critical tool in modern software delivery.
Model answer:
A feature flag is a mechanism to enable or disable functionality at runtime without deploying new code. It decouples deployment (shipping code to production) from release (making functionality available to users).
I would implement a feature flag system with these capabilities:
- Global toggle — on/off for everyone
- Percentage rollout — enabled for N% of users (using a consistent hash of user ID so the same user always gets the same experience)
- User targeting — enabled for specific user IDs, user groups, or user attributes
In the simplest form:
class FeatureFlags {
  isEnabled(flag: string, userId?: string): boolean {
    const config = this.getFlag(flag);
    if (!config || !config.enabled) return false;
    if (config.rolloutPercentage && userId) {
      return hash(userId) % 100 < config.rolloutPercentage;
    }
    return config.enabled;
  }
}
In route handlers, the flag gates between old and new behavior:
if (features.isEnabled('new-checkout', req.user.id)) {
  return newCheckout(req, res);
}
return oldCheckout(req, res);
Key practices: always default flags to OFF, test both code paths, clean up flags after full rollout (they become tech debt if left), and log flag evaluations for debugging.
In production, I'd use a service like LaunchDarkly or Unleash rather than building from scratch, but understanding the underlying mechanism is important.
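The consistent hash behind the percentage rollout can be sketched like this. FNV-1a is one common choice of deterministic, non-cryptographic string hash; the function names are illustrative:

```typescript
// FNV-1a: a simple, fast, deterministic string hash (not cryptographic).
function hash(input: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // multiply by the FNV prime, keep unsigned 32-bit
  }
  return h;
}

// The same user always lands in the same bucket, so their experience is
// stable across requests even as the rollout percentage grows.
function inRollout(userId: string, percentage: number): boolean {
  return hash(userId) % 100 < percentage;
}

inRollout("user-42", 100); // always true
inRollout("user-42", 0);   // always false
```

Because the bucket depends only on the user ID, raising the percentage from 10 to 20 keeps the original 10% enabled and adds a new 10%, rather than reshuffling everyone.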
Advanced (Q9–Q11)
Q9. You join a team with a large Express codebase where every route handler is a 200-line function containing validation, business logic, database queries, and response formatting. How would you approach refactoring this? Which principles would guide your work?
Why interviewers ask: Tests real-world refactoring experience, prioritization skills, and ability to apply multiple principles together in a messy codebase.
Model answer:
This is a classic Separation of Concerns / SRP violation — the "big ball of mud" problem. I would approach this incrementally, not as a big-bang rewrite.
Phase 1 — Add tests for existing behavior (before changing anything). Write integration tests that call the routes and verify the responses. This is your safety net. Without tests, refactoring is just guessing.
Phase 2 — Extract validation (SRP + SoC). Pull validation logic into middleware or dedicated validator functions. This is the lowest-risk extraction because validation is pure (no side effects). Use a schema validation library like Zod or Joi.
Phase 3 — Extract data access into repositories (SRP + DIP). Create repository classes that handle database queries. Route handlers call repositories instead of writing SQL directly. This also makes testing easier — you can mock repositories.
Phase 4 — Extract business logic into services (SRP + DIP). Move calculations, conditional logic, and orchestration into service classes. Services receive repositories via constructor injection. Now business logic is testable without HTTP or database.
Phase 5 — Make controllers thin (SoC). Controllers (route handlers) should only parse HTTP input, call a service, and format the HTTP response. Three lines: parse, delegate, respond.
Principles guiding each step:
- SRP — each layer has one reason to change
- DIP — services depend on repository interfaces, not concrete queries
- SoC — validation, business logic, data access, and HTTP handling are separate
- OCP — once the service layer exists, new features extend it without modifying controllers
I would do this one route at a time, not all at once. Each refactored route ships independently, is tested, and proves the pattern before applying it to the next route.
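The thin controller of Phase 5 might look like the sketch below. To keep it self-contained I've used minimal stand-in types for Express's req/res; the service interface and handler name are illustrative:

```typescript
// Minimal stand-ins for Express's req/res, so the shape is clear
// without pulling in the framework.
type Req = { body: { userId: string; items: string[] } };
type Res = { status: (code: number) => Res; json: (data: unknown) => void };

interface OrderService {
  placeOrder(userId: string, items: string[]): Promise<{ id: string }>;
}

// The thin controller: parse, delegate, respond.
function makeCreateOrderHandler(orders: OrderService) {
  return async (req: Req, res: Res) => {
    const { userId, items } = req.body;                   // 1. parse
    const order = await orders.placeOrder(userId, items); // 2. delegate
    res.status(201).json(order);                          // 3. respond
  };
}
```

Everything interesting lives in the injected OrderService, which is testable without HTTP, and the handler itself is too trivial to hide bugs.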
Q10. How do you balance the tension between SOLID/DRY principles and simplicity (KISS/YAGNI)? Give a concrete example where you chose simplicity over "proper" architecture.
Why interviewers ask: Tests judgment and pragmatism. This separates senior developers who can make context-dependent trade-offs from developers who apply rules dogmatically.
Model answer:
The key insight is that principles are tools, not rules. They can conflict, and the right choice depends on context — team size, project lifespan, domain complexity, and rate of change.
A concrete example: I was building an internal admin tool for a team of 5 users. The tool fetched data from two API endpoints and displayed it in tables. Following "proper" architecture, I should have:
- Created repository interfaces and implementations (DIP)
- Built a service layer with business logic (SRP)
- Created DTOs, mappers, and validators (SoC)
- Used dependency injection with a container
Instead, I wrote simple functions that called fetch, transformed the data inline, and rendered it. Total: about 200 lines. No interfaces, no service layer, no DI container.
Why this was the right call:
- Only 5 users, so bugs are caught fast
- Two engineers maintain it, so the entire codebase fits in your head
- The data sources are unlikely to change
- Time to market mattered more than architectural purity
- The code was still clean — clear naming, error handling, tests for edge cases
When I would NOT make this choice:
- If the tool was growing to serve 50 teams
- If the data sources were likely to change
- If the team was growing beyond 3-4 developers
- If the business logic was complex enough to need unit testing in isolation
My heuristic: apply SOLID proportionally to the system's complexity and expected lifespan. A weekend project gets KISS. A multi-year production system gets SOLID. Everything in between gets judgment.
Q11. Design a plugin architecture for a Node.js application. What should the plugin interface look like? How do you handle plugin dependencies, initialization order, and cleanup?
Why interviewers ask: Tests advanced design skills — ability to design APIs for other developers, think about lifecycle management, and handle real-world concerns like ordering and error handling.
Model answer:
I'd design the plugin system around four concepts: interface, lifecycle, registry, and hooks.
Plugin Interface:
interface Plugin {
  name: string;
  version: string;
  dependencies?: string[]; // Names of plugins this one depends on
  init(app: PluginAPI): Promise<void>;
  destroy?(): Promise<void>;
}
Plugin API — what the application exposes to plugins:
interface PluginAPI {
  // Extension points
  registerRoute(method: string, path: string, handler: RequestHandler): void;
  registerMiddleware(middleware: RequestHandler, options?: { priority?: number }): void;
  // Events
  on(event: string, handler: Function): void;
  emit(event: string, data: any): void;
  // Config
  getConfig(key: string): any;
  // Service registry — plugins can provide services for other plugins
  registerService(name: string, service: any): void;
  getService(name: string): any;
}
Initialization Order: I'd use topological sorting on the dependency graph. If Plugin B depends on Plugin A, A initializes first. Circular dependencies are rejected at registration time.
Error Handling: If a plugin fails to initialize, I'd log the error and either skip it (soft failure for optional plugins) or halt startup (hard failure for required plugins). The plugin manager maintains a state machine: registered → initializing → active → destroying → destroyed.
Cleanup: On application shutdown, plugins are destroyed in reverse initialization order. Each plugin's destroy() cleans up connections, timers, and subscriptions.
Key design decisions:
- Plugins interact with the app through a restricted API, not direct access to internals. This protects encapsulation and allows the core to change without breaking plugins.
- Plugins can provide services to other plugins through a service registry, enabling plugin-to-plugin communication without tight coupling.
- Middleware has priority ordering so plugins can control whether their middleware runs before or after others (e.g., auth before business logic).
This design follows OCP (add plugins without modifying the core), DIP (plugins depend on the PluginAPI interface, not the concrete app), and SRP (each plugin handles one concern).
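The topological sort for initialization order can be sketched as a depth-first traversal that also rejects circular and missing dependencies (a minimal sketch; the PluginMeta shape mirrors the Plugin interface above):

```typescript
interface PluginMeta {
  name: string;
  dependencies?: string[];
}

// Returns plugin names ordered so every plugin comes after its
// dependencies; throws on cycles or unknown dependencies.
function initOrder(plugins: PluginMeta[]): string[] {
  const byName = new Map<string, PluginMeta>(plugins.map((p) => [p.name, p]));
  const order: string[] = [];
  const state = new Map<string, "visiting" | "done">();

  function visit(name: string) {
    if (state.get(name) === "done") return;
    if (state.get(name) === "visiting") {
      throw new Error(`Circular dependency involving ${name}`);
    }
    const plugin = byName.get(name);
    if (!plugin) throw new Error(`Unknown dependency: ${name}`);
    state.set(name, "visiting");
    for (const dep of plugin.dependencies ?? []) visit(dep); // deps first
    state.set(name, "done");
    order.push(name);
  }

  for (const p of plugins) visit(p.name);
  return order;
}

initOrder([
  { name: "auth", dependencies: ["db"] },
  { name: "db" },
  { name: "api", dependencies: ["auth", "db"] },
]);
// → ["db", "auth", "api"]
```

Reversing this list gives the destruction order for shutdown.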