Episode 4 — Generative AI Engineering / 4.17 — LangChain Practical
4.17 — Exercise Questions: LangChain Practical
Practice questions for all five subtopics in Section 4.17. Mix of conceptual, code-writing, and hands-on tasks.
How to use this material
- Read the lessons in order: README.md, then 4.17.a through 4.17.e.
- Answer closed-book first, then compare against the matching lesson.
- Build the code examples: run them locally with your API key.
- Interview prep: 4.17-Interview-Questions.md.
- Quick review: 4.17-Quick-Revision.md.
4.17.a — Introduction to LangChain (Q1-Q8)
Q1. Define LangChain in one sentence. What problem does it solve that raw API calls do not?
Q2. Name the seven core building blocks of LangChain (prompt template, model, output parser, retriever, tool, memory, agent). Describe what each one does in one sentence.
Q3. What does the Runnable interface provide? Name the four key methods every Runnable supports.
Q4. What three packages do you need to install for a basic LangChain project using OpenAI? Write the npm install command.
Q5. Your team is building a simple API that takes a prompt and returns a completion from GPT-4o. There is no memory, no tools, no retrieval. Should you use LangChain or the raw OpenAI SDK? Justify your answer.
Q6. Explain the difference between langchain, @langchain/openai, @langchain/core, and @langchain/community. When would you import from each?
Q7. What is LangSmith? Is it a library or a platform? What environment variables do you set to enable it?
Q8. Compare LangChain agents and LangGraph. When would you use each?
4.17.b — Chains and Prompt Templates (Q9-Q18)
Q9. What is the difference between PromptTemplate and ChatPromptTemplate? Which should you use for modern chat-based LLMs and why?
Q10. Write a ChatPromptTemplate.fromMessages() call that creates a prompt with a system message (role: "expert chef"), a conversation history placeholder, and a user message with a {question} variable.
Q11. What happens if you call template.format() with a missing variable? Is this a runtime error or a silent failure?
Q12. Explain the three-stage pipeline pattern: prompt -> model -> output parser. What does each stage input and output?
Q13. Code task: Write a chain that takes a {language} and {topic} variable, generates a code example in that language about that topic, and returns the result as a string. Use LCEL pipe syntax.
Q14. What is StringOutputParser vs JsonOutputParser? When would you use each?
Q15. Explain what .withStructuredOutput(zodSchema) does. How does it differ from using JsonOutputParser?
Q16. Code task: Build a sequential chain that: (1) takes an article, (2) extracts 5 keywords, (3) generates a tweet using those keywords. Show the full LCEL code.
Q17. How does chain.batch() work? What is the maxConcurrency option?
Q18. Your summarization chain sometimes returns "I cannot summarize this" instead of a summary. How would you add post-processing to detect this and retry with a different prompt?
4.17.c — Tools and Memory (Q19-Q28)
Q19. What three properties must every LangChain tool have? Why is the description property the most important?
Q20. Write a custom tool using DynamicStructuredTool that accepts a city (string) and unit ("celsius" or "fahrenheit") parameter and returns mock weather data. Include a proper Zod schema.
Q21. Your agent keeps calling the wrong tool. The search tool has description: "Search for things" and the calculator has description: "Do calculations". Rewrite both descriptions to be more specific and guide the agent correctly.
Q22. Explain why tools should return error messages as strings rather than throwing exceptions. What happens if a tool throws inside an AgentExecutor?
Q23. Compare BufferMemory, ConversationBufferWindowMemory, ConversationSummaryMemory, and VectorStoreMemory. For each, state: (a) how it stores history, (b) token usage pattern, (c) best use case.
Q24. A chatbot using BufferMemory crashes after 80 turns with a "context length exceeded" error. Diagnose the problem and propose two different solutions.
Q25. Code task: Write a chatbot function using RunnableWithMessageHistory that supports multiple user sessions (each identified by a sessionId). Show how two different users have independent conversation histories.
Q26. What is MessagesPlaceholder and where does it go in a ChatPromptTemplate? What happens if you forget to include it in a chain with memory?
Q27. Your production chatbot needs to survive server restarts. The current implementation uses in-memory ChatMessageHistory. Describe two options for persistent storage and the tradeoffs.
Q28. Design a memory strategy for a customer support chatbot that handles sessions lasting 100+ turns. The bot needs to remember the customer's name and order details throughout but does not need verbatim recall of every message. Which memory type(s) would you combine?
4.17.d — Working with Agents (Q29-Q36)
Q29. Explain the think-act-observe loop in your own words. How does an agent decide when to stop looping?
Q30. What is the difference between a chain and an agent? Give one scenario where each is the better choice.
Q31. What is agent_scratchpad? Why is it included in the prompt template as a MessagesPlaceholder?
Q32. Compare OpenAI Tools Agent and ReAct Agent. How does each agent type communicate tool calls to LangChain? Which is more reliable and why?
Q33. Your agent enters an infinite loop — it keeps calling the search tool with the same query. List three possible causes and a fix for each.
Q34. Code task: Create an agent with two tools: (1) a get_stock_price tool that returns a mock price for a ticker symbol, and (2) a calculator tool. Set up the AgentExecutor with maxIterations: 5, verbose: true, and handleParsingErrors: true. Show the full code.
Q35. Explain the returnIntermediateSteps option. When would you use it in production?
Q36. Your agent has 5 tools available but the user's question only requires 1 tool. Does the agent always call all 5 tools? How does it decide which to call?
4.17.e — LCEL Overview (Q37-Q46)
Q37. What does LCEL stand for? In one sentence, what is its purpose?
Q38. Rewrite this legacy code using LCEL pipe syntax:

```ts
const chain = new LLMChain({ llm: model, prompt: prompt });
const result = await chain.call({ input: 'Hello' });
```
Q39. Explain the difference between RunnableSequence, RunnableParallel, and RunnableLambda. Give a one-line use case for each.
Q40. Code task: Build an LCEL chain that analyzes a text by running three analyses in parallel (sentiment, topic extraction, language detection) and then combines the results into a single report object.
Q41. How does streaming work in LCEL? If you call chain.stream() on a prompt.pipe(model).pipe(parser) chain, which component actually produces the stream?
Q42. What is RunnableBranch? Write pseudocode for a routing chain that sends coding questions to a code expert chain, math questions to a math expert chain, and everything else to a general assistant chain.
Q43. Explain .withFallbacks(). Design a fallback strategy for a production API that tries GPT-4o first, then GPT-4o-mini, then Claude. What happens if all three fail?
Q44. What is RunnablePassthrough and when would you use it? Give an example where you need to pass the original input through alongside processed output.
Q45. How does RunnablePassthrough.assign() work? Write a chain that takes { text: "..." } as input, adds a wordCount field, and adds a summary field from an LLM, then passes all three to the next step.
Q46. Design task: Design an LCEL pipeline for a document processing system that: (1) classifies the document type (invoice, resume, contract), (2) routes to a specialized extraction chain based on type, (3) validates the extraction with a Zod schema, (4) falls back to a general extraction chain if validation fails. Show the pipeline structure (pseudocode is fine).
Answer Hints
| Q | Hint |
|---|---|
| Q1 | Composable components, provider abstraction, built-in patterns |
| Q3 | .invoke(), .stream(), .batch(), .pipe() |
| Q4 | npm install langchain @langchain/openai @langchain/core |
| Q5 | Raw SDK — overhead of LangChain not justified for a single API call |
| Q9 | PromptTemplate produces a string; ChatPromptTemplate produces messages array |
| Q11 | Runtime error — template validates that all variables are provided |
| Q12 | {variables} -> formatted messages -> AIMessage -> parsed output (string/JSON) |
| Q17 | Runs multiple inputs in parallel; maxConcurrency limits simultaneous calls |
| Q22 | A thrown exception can crash the agent loop; a string error lets the agent retry or adjust |
| Q24 | BufferMemory stores everything — grows without bound. Solutions: WindowMemory or SummaryMemory |
| Q29 | Think (LLM decides), Act (tool executes), Observe (result fed back), Repeat or Finish |
| Q32 | OpenAI Tools uses native API tool_calls (structured JSON); ReAct uses text parsing (less reliable) |
| Q36 | The agent reads tool descriptions and uses the LLM to decide which tool(s) are relevant |
| Q37 | LangChain Expression Language — pipe-based composition of Runnables |
| Q41 | The model produces the stream; prompt is instant, parser passes chunks through |
| Q43 | If all three fail, the error from the last attempt is thrown to the caller |
<- Back to 4.17 — LangChain Practical (README)