# Episode 3 — NodeJS MongoDB Backend Architecture / 3.13 — Production Project Structure

Interview Questions: Production Project Structure (Episode 3)
## How to use this material

- Read 3.13.a through 3.13.f.
- Answer aloud, then compare with the model answers below.
- Pair with `3.13-Exercise-Questions.md`.
## Beginner (Q1--Q4)

### Q1. How do you structure a production Express.js project? Walk me through the folder layout.
Why interviewers ask: Shows whether you have worked on real projects beyond single-file tutorials; reveals understanding of separation of concerns.
Model answer:
A production Express project separates concerns into distinct folders inside src/. The standard layout is:
```
project-root/
├── src/
│   ├── config/        -- DB connection, env parsing, constants
│   ├── controllers/   -- Thin request handlers (parse req, call service, send res)
│   ├── middleware/    -- Auth, error handling, validation, rate limiting
│   ├── models/        -- Mongoose schemas and model exports
│   ├── routes/        -- URL-to-controller mapping, route-level middleware
│   ├── services/      -- Business logic (framework-agnostic, testable)
│   ├── utils/         -- Pure helpers (ApiError, ApiResponse, asyncHandler)
│   ├── validators/    -- Joi/Zod input validation schemas
│   └── app.js         -- Express setup (middleware chain, routes, error handler)
├── server.js          -- Entry point: connects DB, binds port
├── .env / .env.example
├── .gitignore
└── package.json
```
The key separation is app.js vs server.js. app.js configures Express and exports the app object -- it knows nothing about ports or databases. server.js connects the database and starts the HTTP listener. This split lets tests import app with supertest without binding a port, and it lets serverless platforms (AWS Lambda) wrap app without a listener.
Within the request flow, each layer has one job: routes map URLs, middleware handles cross-cutting concerns, controllers parse and respond, services contain business logic, and models define data. Controllers should be thin -- no database queries, no business rules. Services are framework-agnostic and testable in isolation.
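The layering above can be sketched in a few lines. This is a minimal illustration, not the episode's exact code: names like `userService` and `makeGetUser` are made up, and the database object is injected so the service stays framework-agnostic and unit-testable.

```javascript
// utils/async-handler.js -- catch rejected promises, forward to next()
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// services/user.service.js -- business logic, no Express imports
const userService = {
  async getUserById(id, db) {
    const user = await db.findById(id); // db is injected, so tests can pass a fake
    if (!user) {
      const err = new Error('User not found');
      err.statusCode = 404; // service decides the status, not the controller
      throw err;
    }
    return user;
  },
};

// controllers/user.controller.js -- thin: parse req, call service, send res
const makeGetUser = (db) =>
  asyncHandler(async (req, res) => {
    const user = await userService.getUserById(req.params.id, db);
    res.json({ success: true, data: user });
  });

module.exports = { asyncHandler, userService, makeGetUser };
```

Because the service only depends on the injected `db`, it can be tested with a plain in-memory fake and no running database.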
### Q2. Why do we use environment variables instead of hardcoding configuration values?
Why interviewers ask: Fundamental to secure, deployable applications; tests understanding of the 12-Factor App methodology.
Model answer:
Hardcoded values create three problems: security risk (secrets like database passwords and JWT keys are visible to anyone with repo access), inflexibility (changing a config value requires a code change and redeployment), and environment coupling (the same code cannot run in development, staging, and production without modification).
Environment variables solve all three. Secrets stay in .env files (never committed) or in the hosting platform's secret management. Each environment (dev, staging, prod) has its own variables, so the same code runs everywhere. The dotenv package loads .env into process.env at startup.
Best practice is to centralise all config in a src/config/index.js module that reads process.env, casts types (strings to numbers), provides defaults, and validates required variables at startup. This "fail fast" pattern crashes immediately with a clear message if MONGODB_URI or JWT_SECRET is missing, rather than failing mysteriously later.
```js
// src/config/index.js
require('dotenv').config(); // dotenv loads .env into process.env at startup

const config = {
  port: parseInt(process.env.PORT, 10) || 3000,
  mongoUri: process.env.MONGODB_URI,
  jwt: { secret: process.env.JWT_SECRET },
};

// Fail fast: crash at startup with a clear message if a required variable is missing
const required = ['MONGODB_URI', 'JWT_SECRET'];
const missing = required.filter((k) => !process.env[k]);
if (missing.length) throw new Error(`Missing env vars: ${missing.join(', ')}`);

module.exports = config;
```
This follows Factor III of the 12-Factor App: "Store config in environment variables."
### Q3. What naming conventions do you follow for files in a Node.js project?
Why interviewers ask: Consistency signals professionalism; case-sensitivity bugs are a real production issue.
Model answer:
I use kebab-case for all file names with a suffix indicating the file's role: user.controller.js, auth.middleware.js, post.model.js, user.routes.js. This convention is safer than PascalCase or camelCase because macOS and Windows file systems are case-insensitive by default, while Linux (where code deploys) is case-sensitive. A file named User.js locally works fine, but a require('./user') call fails on the production Linux server. kebab-case avoids this entirely because there is no casing ambiguity.
The suffix pattern (.controller.js, .service.js, .model.js) makes every file's purpose clear at a glance -- you can tell what auth.middleware.js does without opening it. When you have 20 tabs open in your editor, this matters.
For commit messages I follow Conventional Commits: feat(auth): add password reset endpoint, fix(user): correct email validation, chore(deps): update mongoose to v8. This enables automated changelogs and semantic versioning.
### Q4. What is the .gitignore file and what should it contain in a Node.js project?
Why interviewers ask: A missing or incomplete .gitignore is one of the most common causes of security incidents and repository bloat.
Model answer:
.gitignore tells Git which files to exclude from version control. For a Node.js project, it should contain:
- Dependencies: `node_modules/` (hundreds of megabytes, reproducible from `package.json`)
- Environment files: `.env`, `.env.local`, `.env.*.local` (contain secrets)
- Logs: `logs/`, `*.log`, `npm-debug.log*`
- Build output: `dist/`, `build/`, `coverage/`
- OS files: `.DS_Store`, `Thumbs.db`
- IDE files: `.vscode/`, `.idea/`
- Upload content: `public/uploads/*` with `!public/uploads/.gitkeep` to preserve the directory structure
The .gitignore should be the first file committed -- before any code. If .env is accidentally committed, git rm --cached .env removes it from tracking but the secrets remain in Git history. In that case, you must rotate all exposed secrets immediately (change passwords, regenerate keys).
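Collected into a file, those entries look like:

```gitignore
# Dependencies
node_modules/

# Environment files (secrets)
.env
.env.local
.env.*.local

# Logs
logs/
*.log
npm-debug.log*

# Build output
dist/
build/
coverage/

# OS / IDE
.DS_Store
Thumbs.db
.vscode/
.idea/

# Uploaded content (keep the directory, ignore its contents)
public/uploads/*
!public/uploads/.gitkeep
```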
## Intermediate (Q5--Q8)

### Q5. What is PM2 and why would you use it instead of `node server.js` in production?
Why interviewers ask: Shows production deployment experience; distinguishes developers who have actually shipped apps.
Model answer:
PM2 is a production process manager for Node.js that solves five problems node server.js cannot:
- Auto-restart on crash: If the process throws an unhandled exception, PM2 restarts it automatically.
- Cluster mode: Node.js is single-threaded. PM2 spawns multiple worker processes (one per CPU core) using the built-in `cluster` module and load-balances with round-robin. `pm2 start server.js -i max` uses all cores.
- Zero-downtime reloads: `pm2 reload` gracefully replaces workers one by one during deployments -- no dropped connections.
- Log management: PM2 captures stdout/stderr into log files with timestamps and supports log rotation.
- Boot persistence: `pm2 startup` + `pm2 save` generates a system init script so PM2 and your apps auto-start after a server reboot.
For repeatable configuration, I use ecosystem.config.js:
```js
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'blog-api',
    script: './server.js',
    instances: 'max',          // one worker per CPU core
    exec_mode: 'cluster',
    autorestart: true,
    max_memory_restart: '1G',  // restart any worker that exceeds 1 GB
    env_production: { NODE_ENV: 'production', PORT: 8080 },
  }],
};
```
Start with pm2 start ecosystem.config.js --env production.
### Q6. Explain the asyncHandler utility and why it exists.
Why interviewers ask: Reveals understanding of Express error handling, Promises, and middleware patterns.
Model answer:
Every async Express route handler needs to catch rejected promises and forward them to the global error handler via next(err). Without a utility, you write try/catch in every single handler:
```js
// Without a helper: every handler repeats the same try/catch
const getUser = async (req, res, next) => {
  try {
    const user = await User.findById(req.params.id);
    res.json(user);
  } catch (error) {
    next(error);
  }
};
```
asyncHandler eliminates this repetition:
```js
const asyncHandler = (fn) => (req, res, next) => {
  Promise.resolve(fn(req, res, next)).catch(next);
};
```
It takes an async function, returns a new function that Express calls with (req, res, next), wraps the execution in Promise.resolve().catch(), and if the function throws or returns a rejected promise, the error is forwarded to next() automatically. Now handlers are clean:
```js
const getUser = asyncHandler(async (req, res) => {
  const user = await User.findById(req.params.id);
  if (!user) throw new ApiError(404, 'User not found');
  res.json(new ApiResponse(200, user, 'User fetched'));
});
```
Without this (and without try/catch), a rejected promise becomes an unhandled promise rejection -- the client hangs, Express never sends a response, and in Node.js 15+ the process may crash.
### Q7. How do ApiError and ApiResponse classes create consistent API behaviour?
Why interviewers ask: Tests understanding of error-handling architecture and API design patterns.
Model answer:
Without standard classes, every endpoint invents its own response shape: { error: "..." }, { message: "..." }, { data: ... }. Frontend developers cannot write a generic API client because every response is different.
ApiError extends Error with statusCode, an errors array for field-level details, and an isOperational flag:
```js
class ApiError extends Error {
  constructor(statusCode, message = 'Something went wrong', errors = []) {
    super(message);
    this.statusCode = statusCode;
    this.errors = errors;
    this.success = false;
    this.isOperational = true; // expected error, not a bug
    Error.captureStackTrace(this, this.constructor);
  }
}
```
Operational errors (404, 409, 400) show their real message to the client. Programming errors (TypeError, ReferenceError) have isOperational = false and the global error handler returns a generic "Something went wrong" to avoid leaking internals.
ApiResponse standardises success responses:
```js
class ApiResponse {
  constructor(statusCode, data, message = 'Success') {
    this.statusCode = statusCode;
    this.data = data;
    this.message = message;
    this.success = statusCode < 400;
  }
}
```
Now every success response is { statusCode, data, message, success: true } and every error is { statusCode, message, success: false }. The global error handler catches Mongoose-specific errors (CastError, code 11000, ValidationError) and JWT errors, normalises them into the same shape, and in development includes the stack trace.
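A condensed sketch of such a global error handler follows. The Mongoose and JWT checks shown are the common cases from the answer above; the exact field names and branch order are illustrative.

```javascript
// middleware/error.middleware.js -- registered last in app.js
const errorHandler = (err, req, res, next) => {
  let statusCode = err.statusCode || 500;
  let message = err.message || 'Something went wrong';

  // Normalise common library errors into the standard shape
  if (err.name === 'CastError') { statusCode = 400; message = 'Invalid ID'; }
  else if (err.code === 11000) { statusCode = 409; message = 'Duplicate value'; }
  else if (err.name === 'ValidationError') { statusCode = 400; }
  else if (err.name === 'JsonWebTokenError') { statusCode = 401; message = 'Invalid token'; }
  // Programming errors: hide internals from the client
  else if (!err.isOperational) { statusCode = 500; message = 'Something went wrong'; }

  const body = { statusCode, message, success: false };
  if (process.env.NODE_ENV === 'development') body.stack = err.stack; // dev only
  res.status(statusCode).json(body);
};

module.exports = errorHandler;
```

Because the handler is a plain four-argument function, it can be unit-tested with mock `res` objects, no HTTP server required.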
### Q8. What is the difference between ESLint and Prettier, and how do you combine them?
Why interviewers ask: Code quality tooling is a daily concern on any team; understanding the distinction shows maturity.
Model answer:
ESLint is a static analysis tool that finds bugs and code quality issues: unused variables, == instead of ===, missing await, unreachable code. It is configurable -- you choose which rules are errors, warnings, or off.
Prettier is an opinionated code formatter that enforces consistent style: indentation, quote style, semicolons, line wrapping. It does not find bugs.
They overlap on formatting rules, which causes conflicts. Two packages resolve this:
- `eslint-config-prettier` disables all ESLint rules that conflict with Prettier
- `eslint-plugin-prettier` runs Prettier as an ESLint rule
In .eslintrc.js, plugin:prettier/recommended must be the last entry in extends so it overrides everything above it:
```js
extends: ['eslint:recommended', 'plugin:prettier/recommended']
```
The developer workflow is: write code -> VS Code ESLint shows inline errors -> save file (Prettier auto-formats) -> stage and commit -> Husky pre-commit hook triggers lint-staged -> lint-staged runs eslint --fix and prettier --write on staged files only -> commitlint validates the commit message -> commit succeeds or is blocked.
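The hook side of that workflow is a small amount of configuration. An illustrative `package.json` fragment (glob patterns and script names vary by project) might be:

```json
{
  "scripts": {
    "lint": "eslint src/",
    "format": "prettier --write src/"
  },
  "lint-staged": {
    "*.js": ["eslint --fix", "prettier --write"]
  }
}
```

A `.husky/pre-commit` hook would then run `npx lint-staged`, and a `.husky/commit-msg` hook would run `npx commitlint --edit` to validate the message.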
## Advanced (Q9--Q12)

### Q9. Explain the 12-Factor App methodology and which factors apply to a Node.js backend.
Why interviewers ask: Tests architectural thinking beyond code; shows you understand deployment, operations, and scalability.
Model answer:
The 12-Factor App is a set of principles for building modern, portable, scalable web applications. The most relevant factors for a Node.js backend:
Factor III -- Config: Store config in environment variables, not in code. We use .env + dotenv + a centralised config/index.js. Config that varies between deploys (credentials, URLs) goes in env vars. Config that does not vary (Express middleware order, route definitions) stays in code.
Factor IV -- Backing services: Treat databases, caches, and email services as attached resources identified by URLs in env vars. Swapping from local MongoDB to Atlas means changing one variable, not rewriting code.
Factor V -- Build, release, run: Strictly separate stages. npm install + npm run build is the build stage. Adding env vars creates a release. pm2 start is the run stage.
Factor X -- Dev/prod parity: Keep environments as similar as possible. Same Docker image, different env vars. Do not use SQLite locally and PostgreSQL in production.
Factor XI -- Logs: Treat logs as event streams. Write to stdout/stderr and let the platform (PM2, Docker, CloudWatch) handle routing. Never write to files from within the app in a containerised environment.
This methodology guides decisions about configuration, deployment, and operations. When an interviewer asks "Why do you use environment variables?" or "How do you handle different environments?" the answer is rooted in 12-Factor.
### Q10. How do you handle errors in a production Express.js application? Walk me through the entire error flow.
Why interviewers ask: Error handling architecture is the most telling sign of production experience; poor error handling is the number-one cause of mysterious production failures.
Model answer:
The error flow has four layers:
1. Throwing errors in services: Business logic throws ApiError instances with specific status codes:
```js
if (!user) throw new ApiError(404, 'User not found');
if (duplicate) throw new ApiError(409, 'Email already exists');
```
2. Catching in asyncHandler: The wrapper catches any rejected promise and calls next(err):
```js
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);
```
3. Global error handler middleware: Registered last in app.js with four parameters (err, req, res, next). It normalises errors:
- Mongoose `CastError` -> 400 "Invalid ID"
- Mongoose duplicate key (`code 11000`) -> 409 "Duplicate value"
- Mongoose `ValidationError` -> 400 with field messages
- `JsonWebTokenError` -> 401 "Invalid token"
- `TokenExpiredError` -> 401 "Token expired"
For operational errors (isOperational: true), it sends the real message. For programming errors, it sends "Something went wrong" and logs the full error. In development, the stack trace is included in the response.
4. Unhandled rejections and exceptions: As a safety net, server.js listens for process.on('unhandledRejection') and process.on('uncaughtException'), logs them, and shuts down gracefully.
This architecture means no route handler ever needs a try/catch, every error reaches the same handler, and the client always receives a predictable JSON shape.
### Q11. How would you set up a CI/CD pipeline that runs API tests using Postman and Newman?
Why interviewers ask: Bridges the gap between local development and deployment automation; tests DevOps awareness.
Model answer:
The pipeline needs three artifacts: the Postman collection (exported as JSON), an environment file (with test-specific variables), and a Newman configuration. In a GitHub Actions workflow:
1. Check out code, install Node.js dependencies with `npm ci`.
2. Start the server in the background with test environment variables (`NODE_ENV=test`, test database URI from GitHub Secrets).
3. Wait for the server to be ready using `npx wait-on http://localhost:3000/health`.
4. Install Newman: `npm install -g newman newman-reporter-htmlextra`.
5. Run the collection: `newman run ./postman/Collection.json --environment ./postman/Test.json --reporters cli,htmlextra --reporter-htmlextra-export report.html --bail`.
6. Upload the HTML report as a build artifact with `actions/upload-artifact`.
The --bail flag stops on the first failure so the pipeline fails fast. The test collection should follow a workflow order: register -> login (save token) -> CRUD operations -> cleanup. Environment variables like {{baseUrl}} and {{token}} make the same collection work against local, staging, and production.
For production, you can also run Newman as a post-deployment smoke test -- hit critical endpoints after deployment and alert the team if anything fails.
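As a sketch, the steps above might look like this in a workflow file. The file path, secret name, and health endpoint are illustrative, not prescribed:

```yaml
# .github/workflows/api-tests.yml (illustrative)
name: API tests
on: [push]
jobs:
  newman:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - name: Start API in background
        run: node server.js &
        env:
          NODE_ENV: test
          MONGODB_URI: ${{ secrets.TEST_DB_URI }}
      - run: npx wait-on http://localhost:3000/health
      - run: npm install -g newman newman-reporter-htmlextra
      - run: >
          newman run ./postman/Collection.json
          --environment ./postman/Test.json
          --reporters cli,htmlextra
          --reporter-htmlextra-export report.html
          --bail
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: newman-report
          path: report.html
```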
### Q12. How do you configure CORS properly for a production API that serves multiple frontend clients?
Why interviewers ask: CORS misconfiguration is a common security vulnerability; tests understanding of browser security and production deployment.
Model answer:
In development, app.use(cors()) allows all origins -- convenient but dangerous in production because any website could call your API. For production, configure a whitelist:
```js
// .env: CORS_ORIGIN=https://myapp.com,https://admin.myapp.com
const allowedOrigins = process.env.CORS_ORIGIN.split(',').map((o) => o.trim());

const corsOptions = {
  origin: function (origin, callback) {
    if (!origin) return callback(null, true); // allow non-browser requests (no Origin header)
    if (allowedOrigins.includes(origin)) return callback(null, true);
    callback(new ApiError(403, `Origin ${origin} not allowed by CORS`));
  },
  credentials: true,
  methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'],
  allowedHeaders: ['Content-Type', 'Authorization'],
  exposedHeaders: ['X-Total-Count'],
  maxAge: 86400,
};
```
Key details: the `!origin` check allows requests with no Origin header -- mobile apps, Postman, and server-to-server calls -- which should not be blocked, because CORS is a browser-only mechanism. `credentials: true` is required if the browser sends cookies (e.g. `fetch` with `credentials: 'include'`); the `Authorization` header additionally has to be listed in `allowedHeaders`, as above. `maxAge: 86400` caches the preflight response for 24 hours, reducing the number of OPTIONS requests.
The whitelist comes from an environment variable, so adding a new frontend domain is a config change, not a code change -- following the 12-Factor principle.
## Quick-fire
| # | Question | One-line |
|---|---|---|
| 1 | Layer-based vs feature-based | Layer = folders by role; feature = folders by domain |
| 2 | app.js vs server.js | app = Express config; server = DB + port |
| 3 | kebab-case file names why? | Avoids case-sensitivity bugs between macOS and Linux |
| 4 | What goes in .env.example? | Every required variable with placeholder values, no secrets |
| 5 | pm2 reload vs pm2 restart | reload = zero-downtime; restart = kill + start |
| 6 | isOperational in ApiError | true = expected error (show message); false = bug (generic message) |
| 7 | asyncHandler eliminates | Repetitive try/catch in every async route handler |
| 8 | ESLint finds what? | Bugs: unused vars, ==, missing await |
| 9 | Prettier fixes what? | Style: indentation, quotes, semicolons, line wrapping |
| 10 | Newman is? | CLI runner for Postman collections, used in CI/CD pipelines |
<- Back to 3.13 -- README