Episode 3 — NodeJS MongoDB Backend Architecture / 3.18 — Testing Tools
3.18 — Exercise Questions: Testing Tools
Practice your understanding of testing concepts, Jest APIs, Supertest patterns, mocking strategies, E2E testing, and CI/CD integration with these 56 exercises spanning all four subtopics.
How to Use This Material
- Attempt each question on your own first. Write actual code in a scratch file or your IDE -- do not just read the hints.
- Time yourself. Spend 3-5 minutes per conceptual question and 10-15 minutes per coding question.
- Use the hint only when stuck. The hint gives direction, not the full answer. You learn more by struggling first.
- Revisit wrong answers. After checking hints, go back to the relevant subtopic file and re-read the section.
- The answer hints table at the bottom provides one-line summaries -- use it for quick self-checking after you have written your own answer.
3.18.a — Introduction to Testing (Q1-Q12)
Q1. Draw the testing pyramid and label each layer. For each layer, state: (a) the approximate percentage of your total tests, (b) the relative speed, (c) the relative cost to maintain, and (d) one example test for a Node.js Express app.
Hint: Base = unit tests (65-80%, very fast, cheap, e.g., testing a `calculateTotal()` function). Middle = integration tests (15-25%, medium speed, e.g., `POST /api/users` with a real database). Top = E2E tests (5-10%, slow, expensive, e.g., a Cypress test of the registration flow).
Q2. Explain the difference between TDD and BDD. Which testing tools support BDD-style syntax? Write one example test in BDD style and one in TDD style for the same behavior: "a shopping cart should calculate the correct total for multiple items."
Hint: TDD focuses on implementation (`assert.equal(result, 42)`). BDD focuses on behavior using natural language (`expect(cart.total).to.equal(42)` or `it('should calculate the correct total', ...)`). Jest uses BDD-style `describe`/`it`/`expect`. Mocha + Chai is also BDD. Pure assertion-based testing (`assert`) is TDD-style.
Q3. Name and define the six types of test doubles: dummy, fake, stub, spy, mock, and fixture. For each, give a concrete Node.js example.
Hint: Dummy: a placeholder argument you pass but never use. Fake: a working implementation that takes shortcuts (e.g., an in-memory database). Stub: returns pre-configured values (`mockFn.mockReturnValue(42)`). Spy: tracks calls to a real function (`jest.spyOn`). Mock: pre-programmed with expectations (`jest.mock('./module')`). Fixture: static test data (`const testUser = { name: 'Alice' }`).
Q4. Explain the AAA pattern (Arrange-Act-Assert). Write a test for a calculateShipping(weight, destination) function that charges $5 base + $1 per kg for domestic and $5 base + $3 per kg for international. Show all three phases clearly labeled.
Hint: Arrange: `const weight = 3; const destination = 'international';`. Act: `const cost = calculateShipping(weight, destination);`. Assert: `expect(cost).toBe(14);` (5 base + 3 × 3 = 14).
Q5. What is regression testing? Describe a scenario where a developer fixes a bug, and without a regression test, the same bug reappears three months later. How does a regression test prevent this?
Hint: Bug: discount code "SAVE20" applies 20% twice when used with a bulk discount. The developer fixes it. Three months later, another developer refactors the discount module and reintroduces the bug. A regression test (`expect(calculateTotal(100, 'SAVE20', 'BULK')).toBe(64)`) would catch the reintroduction immediately.
Q6. Compare testing before code (TDD), alongside code, after code, and when fixing bugs. For each approach, state one advantage and one disadvantage. Which approach do you recommend for: (a) a payment processing module, (b) a quick prototype, (c) a bug fix?
Hint: (a) Payment: TDD -- critical logic, must be correct. (b) Prototype: after code -- design is unstable, tests would be rewritten. (c) Bug fix: before fix -- write the regression test first, then fix the bug, then verify the test passes.
Q7. Explain the "cost curve" of bugs. Why does a bug caught in production cost 100x more to fix than a bug caught by a unit test? List at least 5 types of costs incurred by a production bug.
Hint: Costs: (1) developer debugging time, (2) hotfix deployment, (3) customer support inquiries, (4) lost revenue during downtime, (5) reputation damage, (6) potential legal/compliance issues, (7) incident postmortem meetings. A unit test catches the bug in seconds during development. A production bug requires reproducing, debugging, fixing, testing, deploying, and communicating.
Q8. You inherit a codebase with 200 files and zero tests. The CTO gives you one sprint (two weeks) to "add testing." You cannot pause feature work. Write a week-by-week plan for the first month that maximizes impact with minimal disruption.
Hint: Week 1: Set up Jest, CI pipeline, write tests for existing bug fixes. Week 2: Test the 5 most critical API endpoints with Supertest. Week 3: Add unit tests for utility/validation modules. Week 4: Enforce "no new code without tests" rule, set coverage thresholds. Start with the highest-risk code, not the easiest.
Q9. What is the difference between a unit test and an integration test? Give a concrete example of each for a UserService.register(name, email, password) function that: validates input, hashes the password, saves to the database, and sends a welcome email.
Hint: Unit test: mock the database and email service, test that `register()` calls `bcrypt.hash()` with the correct password and calls `emailService.send()`. Integration test: use `mongodb-memory-server`, call the real `register()`, verify the user is actually in the database with a hashed password.
Q10. Explain what code coverage measures and what it does NOT measure. A team has 95% line coverage but still has production bugs. How is this possible? Give three specific examples.
Hint: Coverage measures which lines/branches/functions were executed, NOT whether the assertions are correct. Examples: (1) a test calls a function but has no assertions -- 100% coverage, zero verification. (2) Tests cover the happy path but miss edge cases (null inputs, empty arrays). (3) Tests mock everything -- they verify mocks, not real behavior.
Q11. Compare Jest, Mocha, and Vitest across five dimensions: built-in features, configuration effort, speed, ESM support, and community size. When would you choose Mocha over Jest? When would you choose Vitest over Jest?
Hint: Mocha over Jest: when you need maximum flexibility, when integrating with non-standard assertion libraries, or in legacy projects already using Mocha. Vitest over Jest: when your project uses Vite as its build tool, when you need native ESM support, or when you want Jest-compatible API with faster execution.
Q12. Define these terms and explain how they relate to each other: test suite, test case, test runner, assertion library, mocking library, and code coverage tool. Which of these does Jest provide built-in?
Hint: Jest provides ALL of them built-in: test runner (executes tests), assertion library (`expect()`), mocking library (`jest.fn`, `jest.mock`, `jest.spyOn`), and coverage tool (`--coverage`). Mocha provides only the test runner; you need Chai (assertions), Sinon (mocking), and nyc/Istanbul (coverage) separately.
3.18.b — Unit Testing with Jest (Q13-Q30)
Q13. Write a complete test file for a MathUtils module with these functions: clamp(value, min, max) (restricts value to range), average(numbers) (calculates mean), and isPrime(n) (checks primality). Include at least 4 tests per function covering normal cases, edge cases, and error cases.
Hint: `clamp`: test a value within range (returns the value), below min (returns min), above max (returns max), min equals max. `average`: test a normal array, a single element, an empty array (throw?), an array with negatives. `isPrime`: test 2 (true), 1 (false), 0 (false), a negative (false), 17 (true), 4 (false).
Q14. Explain the difference between describe, it, test, and expect. Can you nest describe blocks? Can you use test inside describe? Write a test file that demonstrates nested describe blocks with beforeEach at each level and show the exact execution order.
Hint: `describe` groups tests, `it` and `test` are identical (aliases), `expect` creates an assertion. Yes, `describe` nests. Execution: outer `beforeAll` → (for each test: outer `beforeEach` → inner `beforeEach` → test → inner `afterEach` → outer `afterEach`) → outer `afterAll`.
Q15. Explain the difference between toBe, toEqual, and toStrictEqual. Write three test cases where: (a) toEqual passes but toBe fails, (b) toEqual passes but toStrictEqual fails, (c) all three pass.
Hint: (a) `expect({a: 1}).toEqual({a: 1})` passes; `expect({a: 1}).toBe({a: 1})` fails (different references). (b) `expect({a: 1, b: undefined}).toEqual({a: 1})` passes; `toStrictEqual` fails (undefined property). (c) `expect(42).toBe(42)` -- all three pass for primitives.
Q16. List 10 different Jest matchers and write a test case for each. Include at least one matcher from each category: equality, truthiness, numbers, strings, arrays, objects, and exceptions.
Hint: `toBe`, `toEqual`, `toBeTruthy`, `toBeFalsy`, `toBeNull`, `toBeGreaterThan`, `toBeCloseTo`, `toMatch`, `toContain`, `toHaveProperty`, `toThrow`, `toHaveLength`, `toMatchObject`.
Q17. Write tests for an async function fetchWeather(city) that calls an external API. Test: (a) resolves with weather data for a valid city, (b) rejects with "City not found" for an invalid city, (c) rejects with "Network error" when the API is down. Use async/await syntax for (a) and (b), and .resolves/.rejects syntax for (c).
Hint: Mock the HTTP module: `jest.mock('axios')`. `axios.get.mockResolvedValue({ data: { temp: 72 } })`. For the network error: `axios.get.mockRejectedValue(new Error('Network error'))`. Use `await expect(...).rejects.toThrow('Network error')`.
Q18. Create a mock function using jest.fn(). Configure it to: (a) return 'first' on the first call, 'second' on the second call, and 'default' on all subsequent calls, (b) track all arguments it was called with, (c) track how many times it was called. Write assertions for all three.
Hint: `const mock = jest.fn().mockReturnValueOnce('first').mockReturnValueOnce('second').mockReturnValue('default')`. Call it 4 times. `expect(mock).toHaveBeenCalledTimes(4)`. `expect(mock.mock.calls[0][0]).toBe(firstArg)`. `mock.mock.results` tracks return values.
Q19. You have a UserService that depends on a UserRepository (database layer) and an EmailService. Write the complete test file using jest.mock() to mock both dependencies. Test: (a) createUser saves to DB and sends welcome email, (b) createUser throws if email already exists, (c) createUser still saves to DB even if the email service fails, (d) deleteUser removes from DB and does NOT send any email.
Hint: `jest.mock('./userRepository')`. `jest.mock('./emailService')`. For (c): `emailService.send.mockRejectedValue(new Error('SMTP fail'))`. Wrap the email call in try/catch in the service. Verify `userRepository.save` was still called. For (d): verify `emailService.send` was NOT called using `expect(...).not.toHaveBeenCalled()`.
Q20. Explain the difference between jest.fn(), jest.mock(), and jest.spyOn(). When would you use each? Write a code example demonstrating each.
Hint: `jest.fn()`: create a standalone mock function (for callbacks, injected dependencies). `jest.mock('./module')`: replace an entire module with auto-generated mocks (for database, email, external APIs). `jest.spyOn(obj, 'method')`: wrap a real method to track calls without changing behavior (for logging, analytics, verifying side effects).
Q21. Write a jest.mock() with a custom factory function for an emailService module. The mock should: (a) have sendWelcomeEmail that resolves to { sent: true, messageId: 'mock-123' }, (b) have sendResetEmail that resolves to { sent: true }, (c) have sendBulkEmail that rejects with "Bulk email disabled in test". Write tests that use these mocks.
Hint: `jest.mock('./emailService', () => ({ sendWelcomeEmail: jest.fn().mockResolvedValue({ sent: true, messageId: 'mock-123' }), ... }))`.
Q22. Explain mockClear, mockReset, and mockRestore. What happens if you call mockRestore on a jest.fn() instead of a jest.spyOn()? Write a test demonstrating why jest.clearAllMocks() in afterEach is a best practice.
Hint: `mockClear`: resets call history. `mockReset`: clears history and removes implementations. `mockRestore`: restores the original (only works with `spyOn`). Without `clearAllMocks()`, a mock's call count from test 1 leaks into test 2, causing `toHaveBeenCalledTimes` to give wrong results.
Q23. Explain the four setup/teardown hooks (beforeAll, afterAll, beforeEach, afterEach). Write a test file with two nested describe blocks, each containing two tests, with hooks at every level. Add console.log statements in each hook and test to demonstrate the exact execution order. List the expected order of all log statements.
Hint: The order for two tests in an inner describe: outer `beforeAll` → inner `beforeAll` → outer `beforeEach` → inner `beforeEach` → test 1 → inner `afterEach` → outer `afterEach` → outer `beforeEach` → inner `beforeEach` → test 2 → inner `afterEach` → outer `afterEach` → inner `afterAll` → outer `afterAll`.
Q24. Write a test for a function that depends on Date.now(). The function isTokenExpired(token) returns true if the token's exp field (Unix timestamp in seconds) is in the past. Use jest.useFakeTimers() to control time. Test with a token that expires in 1 hour, then advance time by 2 hours and verify it is expired.
Hint: `jest.useFakeTimers()`. `jest.setSystemTime(new Date('2025-01-01T12:00:00Z'))`. Create a token with `exp: Math.floor(Date.now() / 1000) + 3600` (1 hour from now). `expect(isTokenExpired(token)).toBe(false)`. `jest.advanceTimersByTime(2 * 60 * 60 * 1000)`. `expect(isTokenExpired(token)).toBe(true)`. `jest.useRealTimers()`.
Q25. What is snapshot testing? Write a test using toMatchSnapshot() for a function formatUserProfile(user) that returns an HTML string. Explain: (a) what happens on the first run, (b) what happens when the output changes, (c) how to update snapshots, (d) when snapshot testing is a bad idea.
Hint: The first run creates `__snapshots__/file.test.js.snap`. Changed output → the test fails with a diff. Update: `npx jest --updateSnapshot`. Bad idea: for frequently changing output, large objects (noisy diffs), or when you need to verify specific values (snapshots just check "nothing changed").
Q26. Read this Jest coverage report and answer the questions below:
| File              | % Stmts | % Branch | % Funcs | % Lines |
|-------------------|---------|----------|---------|---------|
| userService.js    | 92.3    | 60.0     | 100.0   | 92.3    |
| authMiddleware.js | 78.5    | 45.0     | 80.0    | 80.0    |
| helpers.js        | 100.0   | 100.0    | 100.0   | 100.0   |
(a) Which file needs the most attention? (b) What does 60% branch coverage in userService.js likely mean? (c) helpers.js has 100% everything -- does this guarantee zero bugs? (d) How would you improve authMiddleware.js coverage?
Hint: (a) `authMiddleware.js` -- lowest across the board, and it is security-critical code. (b) 60% branch coverage means 40% of if/else paths are untested -- likely error-handling paths and edge cases. (c) No -- 100% coverage means every line was executed, not that every assertion is meaningful. (d) Write tests for: invalid token, expired token, missing token, wrong role, malformed header.
Q27. Write tests using it.each (parameterized tests) for a convertTemperature(value, from, to) function. Test at least 8 conversions: Celsius to Fahrenheit, Fahrenheit to Celsius, Celsius to Kelvin, and edge cases (absolute zero, boiling point of water, body temperature).
Hint: `it.each([[0, 'C', 'F', 32], [100, 'C', 'F', 212], [32, 'F', 'C', 0], [-273.15, 'C', 'K', 0]])('converts %s %s to %s %s', (value, from, to, expected) => { expect(convertTemperature(value, from, to)).toBeCloseTo(expected); })`.
Q28. Explain the difference between it.only, it.skip, describe.only, and describe.skip. Why is it dangerous to commit it.only to your repository? How can you prevent this with a CI check?
Hint: `.only` runs ONLY that test/block, silently skipping everything else. In CI this means your test suite "passes" with only 1 test running instead of 200 -- catastrophic. Prevention: use `eslint-plugin-jest` with the `no-focused-tests` rule, or add a grep in CI: `grep -r "\.only" tests/ && exit 1`.
Q29. Write a complete test suite for a CartService class with methods: addItem(item), removeItem(itemId), getTotal(), applyCoupon(code), clear(). The cart should: calculate totals correctly, handle quantity updates, apply percentage and fixed-amount coupons, prevent negative totals, and throw for invalid coupon codes. Write at least 12 test cases.
Hint: Group tests in nested `describe` blocks: `describe('addItem', ...)`, `describe('removeItem', ...)`, `describe('getTotal', ...)`, `describe('applyCoupon', ...)`. Test edge cases: add the same item twice (increase quantity), remove a non-existent item, apply a coupon to an empty cart, apply two coupons (should they stack?).
Q30. You discover a bug: generateInvoiceNumber() sometimes produces duplicate numbers under high load because it uses Date.now(). Write a failing test that reproduces this bug (call the function twice in rapid succession and check for uniqueness). Then describe how you would fix it and update the test.
Hint: `const a = generateInvoiceNumber(); const b = generateInvoiceNumber(); expect(a).not.toBe(b);` -- this may fail because `Date.now()` returns the same millisecond. Fix: append a counter or random suffix. Better fix: use a UUID. Updated test: generate 1000 invoice numbers in a loop, put them in a `Set`, verify the Set size equals 1000.
3.18.c — API Testing with Supertest (Q31-Q45)
Q31. Explain why you must separate app.js from server.js when using Supertest. What happens if app.js calls app.listen() and you try to import it in multiple test files? Write both files correctly.
Hint: If `app.listen()` runs on import, every test file that imports the app starts a server on the same port → `EADDRINUSE` error. Separate: `app.js` exports the Express app (no `.listen()`), `server.js` imports it and calls `.listen(PORT)`. Tests import `app.js` directly, and Supertest creates an internal server.
Q32. Write a complete Supertest test for a GET /api/products endpoint that supports: (a) returning all products (200), (b) filtering by category: ?category=electronics, (c) sorting by price: ?sort=price, (d) pagination: ?page=2&limit=5, (e) returning an empty array for a non-existent category (200, not 404), (f) returning 400 for ?page=-1.
Hint: Use `.query({ category: 'electronics' })` for query params. Seed 15 products in `beforeEach`. Verify the response body shape: `{ data: [...], pagination: { page, limit, total, totalPages } }`. For sorting, check that `data[0].price <= data[1].price`.
Q33. Write Supertest tests for POST /api/users that validate: (a) creates user with valid data (201), (b) returns 400 when name is missing, (c) returns 400 when email is invalid format, (d) returns 400 when password is shorter than 8 characters, (e) returns 409 for duplicate email, (f) does NOT include password in the response body, (g) the response includes a valid _id field.
Hint: For (f): `expect(res.body.data).not.toHaveProperty('password')`. For (g): `expect(res.body.data._id).toMatch(/^[a-f0-9]{24}$/)` (MongoDB ObjectId format).
Q34. Write the complete authentication testing flow using Supertest: (a) register a new user, (b) login and extract the JWT token, (c) access a protected endpoint with the token, (d) get 401 without a token, (e) get 401 with an expired token, (f) get 401 with a malformed token, (g) get 403 when a regular user accesses an admin route.
Hint: For the expired token: create a JWT with `expiresIn: '0s'` or use a token with a past `exp` claim. For malformed: `.set('Authorization', 'Bearer not-a-real-jwt')`. Chain tests logically: `beforeAll` registers and logs in, storing the token in a `let` variable.
Q35. Set up mongodb-memory-server for integration tests. Write the complete setup.js file. Explain: (a) why you use an in-memory database instead of a real one, (b) why you clear collections in afterEach instead of afterAll, (c) what happens if you forget to close the connection in afterAll.
Hint: (a) Speed (no disk I/O), isolation (no shared state with other developers), and no external dependency (CI does not need MongoDB installed). (b) `afterEach` ensures test isolation -- test 2 does not see test 1's data. (c) An open connection → Jest hangs after the tests finish, eventually timing out with an "open handles" warning.
Q36. Write Supertest tests for PUT /api/users/:id and PATCH /api/users/:id. Explain the difference between PUT and PATCH. Test: (a) PUT replaces the entire resource (200), (b) PATCH updates only specified fields (200), (c) PUT with missing required fields returns 400, (d) PATCH with only name does not clear email, (e) both return 404 for non-existent ID.
Hint: PUT is a full replacement (all fields required). PATCH is a partial update (only the provided fields change). For (d): PATCH `{ name: 'New' }` → verify the email field is unchanged afterwards.
Q37. Write Supertest tests for DELETE /api/users/:id. Test: (a) returns 200 on success, (b) user is actually removed from the database (verify with User.findById), (c) returns 404 for non-existent ID, (d) returns 400 for invalid ObjectId format, (e) total user count decreases by 1 after deletion.
Hint: For (b): `const deleted = await User.findById(id); expect(deleted).toBeNull()`. For (e): count before, delete, count after, expect `before - 1 === after`.
Q38. Write a test helper function getAuthToken(email, password) that registers a user and returns a JWT token. Use it in multiple test files to avoid duplicating authentication logic. Where should this helper file live in your project structure?
Hint: Create `tests/helpers/auth.js`. The function: (1) calls `POST /api/auth/register`, (2) calls `POST /api/auth/login`, (3) returns `res.body.token`. Place it in `tests/helpers/` and import it with `require('../helpers/auth')`. Consider caching tokens to avoid re-registering in every test.
Q39. Write Supertest tests for an endpoint that returns different response formats based on the Accept header. GET /api/users should return JSON by default, XML when Accept: application/xml, and 406 when Accept: text/plain (not supported).
Hint: `.set('Accept', 'application/xml')`. Assert: `expect(res.headers['content-type']).toMatch(/xml/)`. For 406: `.set('Accept', 'text/plain').expect(406)`.
Q40. Write Supertest tests for a file upload endpoint POST /api/avatar. Test: (a) successful upload of a JPEG image (200), (b) response includes the file URL, (c) rejects files over 5MB (413), (d) rejects non-image files like .txt (400), (e) works with additional form fields (userId). How do you create test fixture files?
Hint: Use `.attach('avatar', filePath)` for the file upload and `.field('userId', '123')` for form fields. Create a small test JPEG in `tests/fixtures/`. For the 5MB test, either create a large file or mock the file-size check in the middleware.
Q41. You need to test a rate-limited endpoint that allows 10 requests per minute. Write a test that: (a) sends 10 requests and verifies all return 200, (b) sends the 11th request and verifies it returns 429, (c) checks the Retry-After header, (d) verifies the rate limit resets after the window. How do you handle the time window in tests?
Hint: Loop 10 times with `for`. 11th request: `.expect(429)`. Check `res.headers['retry-after']`. For the time reset: use `jest.useFakeTimers()` if the rate limiter uses `Date.now()`, or restart the rate limiter between tests.
Q42. Write Supertest tests for a POST /api/orders endpoint that creates an order. The endpoint: requires authentication, validates that all itemId references exist in the database, calculates the total from item prices, and returns the complete order with populated item details. Test happy path AND: missing auth, non-existent item IDs, and empty items array.
Hint: Seed products in `beforeEach`. Get an auth token in `beforeAll`. For non-existent items: send `itemId` values that are valid ObjectIds but do not exist in the products collection → expect 400 or 404 with "Item not found".
Q43. How do you test an endpoint that sends a webhook to an external service after processing? The endpoint POST /api/payments charges a card and then sends a webhook to https://partner.example.com/webhook. You cannot call the real webhook URL in tests.
Hint: Mock the HTTP client (axios or node-fetch) using `jest.mock('axios')`. Verify the mock was called with the correct URL and payload: `expect(axios.post).toHaveBeenCalledWith('https://partner.example.com/webhook', expect.objectContaining({ paymentId: '...' }))`.
Q44. Write tests for an endpoint that implements cursor-based pagination (GET /api/posts?cursor=abc123&limit=10). Verify: (a) first page returns items and a nextCursor, (b) using nextCursor returns the next page, (c) the last page has nextCursor: null, (d) items are in consistent order across pages, (e) no item appears on two pages.
Hint: Seed 25 posts. Get page 1 (limit 10), extract `nextCursor`. Get page 2 with that cursor. Collect all IDs across all pages. Verify: `allIds.length === new Set(allIds).size` (no duplicates) and `allIds.length === 25` (all items retrieved).
Q45. Design and write tests for a POST /api/auth/forgot-password endpoint. The endpoint accepts an email, generates a reset token, stores it with an expiry, and "sends" a reset email. Test: (a) returns 200 for existing email, (b) returns 200 for non-existing email (to prevent email enumeration), (c) the reset token is saved in the database, (d) the email service was called, (e) the token expires after 1 hour.
Hint: Mock the email service. For (b): the same 200 response whether the email exists or not -- this is a security best practice. For (c): query the User model for `resetToken` and `resetTokenExpiry`. For (e): use fake timers, advance by 61 minutes, attempt to use the token → expect a "Token expired" error.
3.18.d — Cross-Browser & End-to-End Testing (Q46-Q56)
Q46. Explain the difference between Cypress and Playwright in terms of architecture. Cypress runs inside the browser, while Playwright controls the browser from outside. What are the practical implications of this difference for: (a) accessing browser APIs, (b) multi-tab testing, (c) network interception?
Hint: (a) Cypress has direct access to `window`, `document`, and `localStorage` from test code; Playwright must use `page.evaluate()` to run code in the browser context. (b) Cypress cannot natively open multiple tabs; Playwright supports `browser.newContext()` and `context.newPage()` for multi-tab scenarios. (c) Both support network interception but via different mechanisms: `cy.intercept()` vs `page.route()`.
Q47. Write a Cypress test for a complete e-commerce checkout flow: (a) visit the products page, (b) add an item to the cart, (c) navigate to the cart, (d) proceed to checkout, (e) fill in shipping info, (f) confirm the order, (g) verify the order confirmation page. Use cy.intercept() to stub the payment API.
Hint: Stub the payment: `cy.intercept('POST', '/api/payments', { statusCode: 200, body: { success: true, transactionId: 'mock-txn-123' } }).as('payment')`. Use `cy.wait('@payment')` before asserting on the confirmation page. Use `data-testid` selectors throughout.
Q48. Write a Playwright test that creates two browser contexts (two users) and tests a real-time chat feature. User A sends a message, User B sees it appear. How does Playwright's browser context model make this possible without running two separate test processes?
Hint: `const contextA = await browser.newContext(); const contextB = await browser.newContext();`. Each context has its own cookies and session. Log in User A on `contextA.newPage()` and User B on `contextB.newPage()`. After User A sends a message, use `await expect(bobPage.locator('[data-testid="messages"]')).toContainText('Hello')`.
Q49. Explain the Page Object Model (POM) pattern. Write a LoginPage and DashboardPage class for Playwright. Then write a test that uses both page objects to test the login flow. Why is POM better than putting selectors directly in tests?
Hint: POM encapsulates selectors and actions in classes. Benefit: when the login form changes, you update ONE class instead of 50 test files. `class LoginPage { constructor(page) { this.emailInput = page.locator('[data-testid="email"]'); } async login(email, pass) { ... } }`.
Q50. Write a Cypress custom command cy.login(email, password) that authenticates via the API (not the UI) and stores the token in localStorage. Explain why logging in via the API is faster than logging in via the UI for test setup.
Hint: `Cypress.Commands.add('login', (email, password) => { cy.request('POST', '/api/auth/login', { email, password }).then(res => { window.localStorage.setItem('token', res.body.token); }); })`. API login: ~100ms (one HTTP call). UI login: ~3-5 seconds (page load + typing + click + redirect).
Q51. How do you handle flaky E2E tests? List 6 common causes of flakiness and write the fix for each. Include both Cypress and Playwright solutions where they differ.
Hint: (1) Element not ready → auto-wait (Playwright) / `.should('be.visible')` (Cypress). (2) Animation → disable CSS transitions in test mode. (3) API race condition → `cy.wait('@alias')` / `page.waitForResponse()`. (4) Shared state → reset the DB before each test. (5) Timing → never use `sleep()`; use condition-based waits. (6) Stale selectors → use `data-testid` instead of CSS classes.
Q52. Write a GitHub Actions workflow that runs Playwright tests against three browsers (Chromium, Firefox, WebKit) using a matrix strategy. The workflow should: install dependencies, install browsers, run tests, and upload the HTML report as an artifact. How would you modify this to only run Chromium on PRs and all browsers on merge to main?
Hint: `strategy: matrix: browser: [chromium, firefox, webkit]`. Run: `npx playwright test --project=${{ matrix.browser }}`. For PR-only Chromium: use `if: github.event_name == 'pull_request'` to conditionally set the matrix, or use separate jobs with different `on:` triggers.
Q53. Explain visual regression testing. Write a Playwright test that: (a) captures a full-page screenshot, (b) captures a component screenshot, (c) compares against baselines with a 1% pixel tolerance. How do you update baselines when changes are intentional?
Hint: `await expect(page).toHaveScreenshot('homepage.png', { maxDiffPixelRatio: 0.01 })`. Component: `await expect(page.locator('[data-testid="header"]')).toHaveScreenshot('header.png')`. Update: `npx playwright test --update-snapshots`. The first run creates baselines in `*.spec.js-snapshots/`.
Q54. You are setting up E2E tests for a project that currently has zero. Write a step-by-step plan: (a) which framework to choose and why, (b) how to structure the test files, (c) which 5 user flows to test first, (d) how to handle test data, (e) how to integrate into CI/CD.
Hint: (a) Playwright for cross-browser support plus free parallelism. (b) `tests/e2e/` with page objects in `tests/pages/`. (c) Login, registration, create-a-[main-resource], search/browse, and logout. (d) Seed via the API in `beforeEach`, reset in `afterEach`. (e) A separate CI job that runs after unit and integration tests and uploads artifacts.
Q55. Write a Playwright test that intercepts an API call and returns mock data. Then write the equivalent Cypress test for the same scenario. Compare the syntax of page.route() (Playwright) vs cy.intercept() (Cypress).
Hint: Playwright: `await page.route('**/api/users', route => route.fulfill({ status: 200, body: JSON.stringify({ data: [...] }) }))`. Cypress: `cy.intercept('GET', '/api/users', { statusCode: 200, body: { data: [...] } })`. Both achieve the same result with different syntax.
Q56. Design a complete CI/CD testing pipeline for a production application. The pipeline should: (a) run linting and unit tests on every push, (b) run integration tests on every PR, (c) run E2E tests (Chromium only) on every PR, (d) run E2E tests on all browsers when merging to main, (e) run visual regression tests weekly. Draw the pipeline as a flowchart and write the GitHub Actions configuration.
Hint: Use multiple workflow files or jobs with different `on:` triggers. PRs: `on: pull_request`. Merge to main: `on: push: branches: [main]`. Weekly: `on: schedule: cron: '0 0 * * 0'`. Use `needs:` to create dependencies: unit → integration → e2e.
Answer Hints Table
| Q# | One-Line Answer Hint |
|---|---|
| Q1 | Pyramid: many unit (fast, cheap) → some integration → few E2E (slow, expensive) |
| Q2 | TDD: test-first implementation; BDD: behavior-described in natural language syntax |
| Q3 | Dummy=unused, Fake=shortcut impl, Stub=canned return, Spy=tracks calls, Mock=expectations, Fixture=data |
| Q4 | Arrange: set up data; Act: call function; Assert: verify result |
| Q5 | Regression test = test that verifies a previously-fixed bug stays fixed |
| Q6 | TDD for critical code, alongside for features, after for prototypes, before-fix for bugs |
| Q7 | Bug cost grows exponentially: dev time, hotfix, support, revenue loss, reputation |
| Q8 | Start with CI + bug-fix tests, then critical paths, then coverage thresholds |
| Q9 | Unit: mocked deps, one function. Integration: real DB, real HTTP, multiple layers |
| Q10 | Coverage measures execution, not correctness; tests can cover lines without meaningful assertions |
| Q11 | Jest=batteries-included, Mocha=flexible+legacy, Vitest=fast+ESM+Vite |
| Q12 | Jest provides all six: runner, assertions, mocking, coverage, watch, snapshots |
| Q13 | Test normal, edge (0, negative, empty), and error cases for each function |
| Q14 | describe groups, it/test are aliases, expect asserts; nesting is supported |
| Q15 | toBe=Object.is (reference for objects), toEqual=deep ignoring undefined, toStrictEqual=deep including undefined |
| Q16 | Equality, truthiness, numbers, strings, arrays, objects, exceptions -- one matcher each |
| Q17 | Mock the HTTP client, test resolve/reject/timeout with async/await and .rejects |
| Q18 | mockReturnValueOnce chains, .mock.calls tracks args, .toHaveBeenCalledTimes counts |
| Q19 | jest.mock both deps, test happy path, duplicate error, email failure, and delete |
| Q20 | fn()=standalone, mock()=replace module, spyOn()=wrap existing method |
| Q21 | Factory function in jest.mock('./module', () => ({...})) |
| Q22 | Clear=reset history, Reset=clear+remove impl, Restore=undo spy (spyOn only) |
| Q23 | Outer beforeAll → (outer beforeEach → inner beforeEach → test → inner afterEach → outer afterEach) × N |
| Q24 | jest.useFakeTimers(), jest.setSystemTime(), jest.advanceTimersByTime() |
| Q25 | Snapshot captures output; first run=baseline; change=fail; --updateSnapshot to accept |
| Q26 | Low branch coverage = untested if/else paths; 100% coverage does not mean zero bugs |
| Q27 | it.each([[input, expected], ...]) for parameterized/table-driven tests |
| Q28 | .only silently skips all other tests; use ESLint rule no-focused-tests |
| Q29 | Nested describes for each method; test normal, edge, error, and interaction cases |
| Q30 | Reproduce with rapid successive calls; fix with UUID or atomic counter |
| Q31 | app.js exports app (no listen); server.js imports and listens; avoids port conflicts |
| Q32 | Test each query param independently and combined; verify response shape and pagination |
| Q33 | Test valid creation, each validation rule, duplicate, no password in response |
| Q34 | Register → login → use token → test without token → expired → malformed → wrong role |
| Q35 | In-memory DB: fast, isolated, no external dependency; clear afterEach for isolation |
| Q36 | PUT=full replace, PATCH=partial update; verify PATCH does not clear unmentioned fields |
| Q37 | Verify deletion in response AND in database; check count change |
| Q38 | tests/helpers/auth.js — register, login, return token; import in any test file |
| Q39 | Set Accept header; test JSON, XML, and 406 for unsupported types |
| Q40 | .attach('field', path) for upload; create small fixture files in tests/fixtures/ |
| Q41 | Loop 10 requests (200), 11th (429), check Retry-After header |
| Q42 | Seed products, get auth token, test valid order + missing auth + invalid items + empty |
| Q43 | jest.mock('axios') to mock the webhook HTTP call; verify payload |
| Q44 | Follow cursor chain through all pages; collect all IDs; verify no duplicates, complete set |
| Q45 | Same 200 for existing/non-existing email; mock email service; verify token in DB |
| Q46 | Cypress=in-browser (direct DOM), Playwright=outside (CDP); impacts tabs, APIs, mocking |
| Q47 | Stub payment with cy.intercept; use cy.wait before assertions |
| Q48 | Two browser.newContext() instances = two independent sessions in one test |
| Q49 | POM: class per page with selectors+actions; update one class instead of many tests |
| Q50 | cy.request('POST', '/api/auth/login') → store token; 100ms vs 3-5s for UI login |
| Q51 | Auto-wait, disable animations, wait for network, reset DB, condition-based waits, data-testid |
| Q52 | Matrix: [chromium, firefox, webkit]; conditional matrix for PR vs merge |
| Q53 | toHaveScreenshot with maxDiffPixelRatio; update with --update-snapshots |
| Q54 | Playwright, page objects, 5 critical flows, API-seeded data, separate CI job |
| Q55 | Playwright: page.route() + route.fulfill(); Cypress: cy.intercept() with response object |
| Q56 | Multi-job pipeline: lint/unit → integration → E2E (Chromium on PR, all on main, visual weekly) |