Episode 4 — Generative AI Engineering / 4.17 — LangChain Practical

4.17.b — Chains and Prompt Templates

In one sentence: Prompt templates let you build reusable, parameterized prompts with dynamic variables, and chains let you pipe those prompts through models and output parsers in a composable sequence — replacing hardcoded string concatenation with a structured, testable pipeline.

Navigation: <- 4.17.a Introduction | 4.17.c — Tools and Memory ->


1. PromptTemplate: Dynamic Prompt Construction

A PromptTemplate takes a template string with {variables} and produces a formatted string at runtime. This is the simplest prompt type — it produces a single string, not a message array.

import { PromptTemplate } from '@langchain/core/prompts';

// Create a template with two variables
const template = PromptTemplate.fromTemplate(
  'Translate the following {language} code to {targetLanguage}:\n\n{code}'
);

// Format it with values
const formatted = await template.format({
  language: 'Python',
  targetLanguage: 'JavaScript',
  code: 'def greet(name):\n    return f"Hello, {name}!"'
});

console.log(formatted);
// "Translate the following Python code to JavaScript:
//
//  def greet(name):
//      return f"Hello, {name}!""

Why not just use template literals?

// Template literal approach — works, but:
const prompt = `Translate the following ${language} code to ${targetLanguage}:\n\n${code}`;

// Problems:
// 1. Not reusable — you can't serialize it, store it, or pass it around as a component
// 2. Not validatable — no way to check that all required variables are provided
// 3. Not composable — can't pipe it into a chain
// 4. Not traceable — LangSmith can't show you the template vs the filled values

PromptTemplate gives you validation (throws if a variable is missing), serialization (can be saved/loaded as JSON), and composability (plugs into chains via .pipe()).

Input validation

const template = PromptTemplate.fromTemplate('Summarize this {text} in {style} style.');

// This throws an error — 'style' is missing
try {
  await template.format({ text: 'Some article content' });
} catch (error) {
  console.error(error.message);
  // "Missing value for input variable 'style'"
}
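Under the hood this is not magic. Here is a minimal sketch in plain JavaScript (illustrative names, not the real LangChain implementation) of the two things PromptTemplate adds over a template literal: extracting the variable names up front, and verifying at format time that every one was supplied.

```javascript
// Minimal sketch (plain JavaScript, no LangChain) of what fromTemplate/format
// do: extract the {variable} names, then check at format time that every one
// was supplied. makeTemplate is an illustrative name, not the real API.
function makeTemplate(template) {
  const inputVariables = [...template.matchAll(/\{([^{}]+)\}/g)].map((m) => m[1]);
  return {
    inputVariables,
    format(values) {
      for (const name of inputVariables) {
        if (!(name in values)) {
          throw new Error(`Missing value for input variable '${name}'`);
        }
      }
      return template.replace(/\{([^{}]+)\}/g, (_, name) => values[name]);
    },
  };
}

const t = makeTemplate('Summarize this {text} in {style} style.');
console.log(t.inputVariables);                          // ['text', 'style']
console.log(t.format({ text: 'an article', style: 'formal' }));
// "Summarize this an article in formal style."
```

Because the variable list is data rather than syntax, it can also be serialized alongside the template string — which is what makes saving and loading templates possible.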

2. ChatPromptTemplate for Message-Based Prompts

Modern LLMs use message-based APIs (system, user, assistant messages). ChatPromptTemplate creates an array of messages with template variables — this is what you will use 90% of the time.

import { ChatPromptTemplate } from '@langchain/core/prompts';

// Method 1: fromMessages (most common)
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a {role} who explains things in a {tone} tone.'],
  ['user', '{question}']
]);

const messages = await prompt.formatMessages({
  role: 'senior JavaScript developer',
  tone: 'friendly but precise',
  question: 'What is the event loop?'
});

console.log(messages);
// [
//   SystemMessage { content: "You are a senior JavaScript developer who explains things in a friendly but precise tone." },
//   HumanMessage { content: "What is the event loop?" }
// ]

Multi-turn conversation templates

import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { HumanMessage, AIMessage } from '@langchain/core/messages';

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful coding assistant.'],
  new MessagesPlaceholder('history'),  // Slot for conversation history
  ['user', '{input}']
]);

const messages = await prompt.formatMessages({
  history: [
    new HumanMessage('What is a closure?'),
    new AIMessage('A closure is a function that captures variables from its enclosing scope...'),
    new HumanMessage('Can you give an example?'),
    new AIMessage('Sure! Here is a counter example using closures...')
  ],
  input: 'How does that relate to callbacks?'
});

// Result: system message + 4 history messages + 1 new user message = 6 messages

The MessagesPlaceholder is essential for any chatbot — it injects previous conversation turns into the prompt at the position you specify.
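Conceptually, the placeholder is just a slot that gets spliced with the entire history array at format time, while each ['role', template] pair becomes a single formatted message. A plain-JavaScript sketch of that mechanism (illustrative only, not the LangChain implementation):

```javascript
// Sketch of how a placeholder entry expands: plain ['role', template] entries
// become one message each; a placeholder entry is replaced by the whole
// history array at that position. Not the real LangChain internals.
const PLACEHOLDER = Symbol('placeholder');

function formatMessages(entries, values) {
  const out = [];
  for (const entry of entries) {
    if (entry.kind === PLACEHOLDER) {
      out.push(...values[entry.name]); // splice the history in at this slot
    } else {
      const [role, template] = entry;
      out.push({ role, content: template.replace(/\{(\w+)\}/g, (_, v) => values[v]) });
    }
  }
  return out;
}

const messages = formatMessages(
  [
    ['system', 'You are a helpful coding assistant.'],
    { kind: PLACEHOLDER, name: 'history' },
    ['user', '{input}'],
  ],
  {
    history: [
      { role: 'user', content: 'What is a closure?' },
      { role: 'assistant', content: 'A function that captures its enclosing scope.' },
    ],
    input: 'Can you give an example?',
  }
);

console.log(messages.length); // 4: system + 2 history messages + 1 new user message
```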

Template composition

You can compose smaller templates into larger ones:

import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate } from '@langchain/core/prompts';

const systemTemplate = SystemMessagePromptTemplate.fromTemplate(
  'You are an expert in {domain}. Always cite your sources.'
);

const userTemplate = HumanMessagePromptTemplate.fromTemplate(
  'Question: {question}\n\nContext:\n{context}'
);

const fullPrompt = ChatPromptTemplate.fromMessages([
  systemTemplate,
  userTemplate
]);

// Variables from both templates: {domain}, {question}, {context}

3. LLMChain (Legacy) vs Modern LCEL Pipe Syntax

The legacy way (deprecated, but you will still see it in older tutorials)

// LEGACY — do not use in new code
import { LLMChain } from 'langchain/chains';
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant.'],
  ['user', '{input}']
]);

// Old way: create an LLMChain object
const chain = new LLMChain({ llm: model, prompt: prompt });
const result = await chain.call({ input: 'What is TypeScript?' });
console.log(result.text);

The modern way (LCEL pipe syntax)

// MODERN — use this
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

const model = new ChatOpenAI({ modelName: 'gpt-4o' });
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant.'],
  ['user', '{input}']
]);
const parser = new StringOutputParser();

// New way: pipe components together
const chain = prompt.pipe(model).pipe(parser);
const result = await chain.invoke({ input: 'What is TypeScript?' });
console.log(result); // Direct string output

Why the change?

Aspect        | LLMChain (legacy)                     | LCEL pipe (modern)
--------------|---------------------------------------|--------------------------------------------
Composition   | Class-based, rigid hierarchy          | Pipe-based, any Runnable composes
Streaming     | Requires special handling             | Built-in: chain.stream()
Batching      | Manual loops                          | Built-in: chain.batch([...])
Type safety   | Loose (dictionary in, dictionary out) | Stricter (input/output types flow through)
Debuggability | Opaque chain execution                | Each step visible in LangSmith
Flexibility   | Fixed chain types                     | Arbitrary combinations
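The whole LCEL model rests on one small idea: anything with an invoke() method is a Runnable, and pipe() returns a new Runnable that feeds one step's output into the next step's input. A stripped-down sketch (plain JavaScript, synchronous for brevity — the real invoke() is async, and the real classes carry streaming and batching too):

```javascript
// Toy version of the Runnable/.pipe idea behind LCEL — not the real
// @langchain/core implementation, just the composition mechanism.
class Runnable {
  constructor(fn) { this.fn = fn; }
  invoke(input) { return this.fn(input); }
  pipe(next) {
    // The composed runnable runs this step, then feeds the result to `next`
    return new Runnable((input) => next.invoke(this.invoke(input)));
  }
}

// Stand-ins for prompt, model, and parser
const prompt = new Runnable(({ term }) => `Define ${term}.`);
const model  = new Runnable((text) => ({ content: `${text} -> [model answer]` }));
const parser = new Runnable((msg) => msg.content);

const chain = prompt.pipe(model).pipe(parser);
console.log(chain.invoke({ term: 'middleware' }));
// "Define middleware. -> [model answer]"
```

Because composition is just function chaining, any two Runnables with compatible input/output shapes can be combined — which is exactly the flexibility the table above describes.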

4. Chaining Operations: Prompt -> Model -> Output Parser

The fundamental pattern in LangChain is a three-stage pipeline: prompt formats the input, model generates a response, and output parser extracts the useful content.

String output

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a concise technical writer.'],
  ['user', 'Define {term} in exactly one sentence.']
]);

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });
const parser = new StringOutputParser();

const chain = prompt.pipe(model).pipe(parser);

const definition = await chain.invoke({ term: 'middleware' });
console.log(definition);
// "Middleware is a function that intercepts and processes requests between
//  the client and the server in a web application's request-response cycle."

JSON output

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { JsonOutputParser } from '@langchain/core/output_parsers';

const prompt = ChatPromptTemplate.fromMessages([
  ['system', `You are a data extraction assistant. Always respond with valid JSON.
Extract the following fields: name, age, occupation.`],
  ['user', '{text}']
]);

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });
const parser = new JsonOutputParser();

const chain = prompt.pipe(model).pipe(parser);

const data = await chain.invoke({
  text: 'My name is Sarah Chen, I am 34 years old and work as a software architect.'
});

console.log(data);
// { name: "Sarah Chen", age: 34, occupation: "software architect" }

Structured output with Zod

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { z } from 'zod';

const SentimentSchema = z.object({
  sentiment: z.enum(['positive', 'negative', 'neutral']),
  confidence: z.number().min(0).max(1),
  reasoning: z.string()
});

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });
const structuredModel = model.withStructuredOutput(SentimentSchema);

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'Analyze the sentiment of the user message.'],
  ['user', '{text}']
]);

const chain = prompt.pipe(structuredModel);

const result = await chain.invoke({
  text: 'The new feature is great but the documentation is confusing.'
});

console.log(result);
// {
//   sentiment: "neutral",
//   confidence: 0.75,
//   reasoning: "The message contains both positive sentiment (feature is great)
//               and negative sentiment (documentation is confusing)."
// }

5. Sequential Chains

Sequential chains pass the output of one chain as input to the next. In LCEL, you build these with RunnableSequence.from([...]); plain functions placed in the sequence are automatically wrapped as RunnableLambda steps.

Basic sequential pattern

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence } from '@langchain/core/runnables';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0.7 });
const parser = new StringOutputParser();

// Step 1: Generate a summary
const summarizePrompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a summarization expert.'],
  ['user', 'Summarize this article in 3 bullet points:\n\n{article}']
]);

// Step 2: Translate the summary
const translatePrompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a professional translator.'],
  ['user', 'Translate the following English text to {language}:\n\n{text}']
]);

// Build the sequential chain
const chain = RunnableSequence.from([
  // Step 1: Summarize
  {
    text: summarizePrompt.pipe(model).pipe(parser),
    language: (input) => input.language  // Pass through the target language
  },
  // Step 2: Translate
  translatePrompt.pipe(model).pipe(parser)
]);

const result = await chain.invoke({
  article: 'LangChain is a framework for building LLM applications...(long article)',
  language: 'Spanish'
});

console.log(result);
// Spanish translation of the 3-bullet summary

Multi-step processing chain

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence } from '@langchain/core/runnables';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });
const parser = new StringOutputParser();

// Chain: Extract topics -> Generate quiz -> Format as markdown

const extractTopics = ChatPromptTemplate.fromMessages([
  ['system', 'Extract the 3 main topics from this text as a comma-separated list.'],
  ['user', '{text}']
]).pipe(model).pipe(parser);

const generateQuiz = ChatPromptTemplate.fromMessages([
  ['system', 'Create a 5-question multiple-choice quiz about these topics.'],
  ['user', 'Topics: {topics}']
]).pipe(model).pipe(parser);

const formatMarkdown = ChatPromptTemplate.fromMessages([
  ['system', 'Format the following quiz as clean markdown with numbered questions and lettered options.'],
  ['user', '{quiz}']
]).pipe(model).pipe(parser);

// Wire them together
const fullChain = RunnableSequence.from([
  { topics: extractTopics },
  { quiz: (input) => generateQuiz.invoke({ topics: input.topics }) },
  (input) => formatMarkdown.invoke({ quiz: input.quiz })
]);

const quiz = await fullChain.invoke({
  text: 'JavaScript closures allow functions to access variables from their outer scope...'
});

console.log(quiz);

6. Practical Examples

Example 1: Summarization chain

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0.3 });
const parser = new StringOutputParser();

// Configurable summarization chain
const summarizeChain = ChatPromptTemplate.fromMessages([
  ['system', `You are an expert summarizer.
Rules:
- Use exactly {bulletCount} bullet points
- Each bullet must be one sentence
- Target audience: {audience}
- Focus on actionable insights`],
  ['user', 'Summarize this content:\n\n{content}']
]).pipe(model).pipe(parser);

// Use it with different configurations
const executiveSummary = await summarizeChain.invoke({
  content: longArticle,
  bulletCount: '3',
  audience: 'C-level executives',
});

const technicalSummary = await summarizeChain.invoke({
  content: longArticle,
  bulletCount: '5',
  audience: 'senior engineers',
});

// Batch processing: summarize multiple articles at once
const summaries = await summarizeChain.batch([
  { content: article1, bulletCount: '3', audience: 'general' },
  { content: article2, bulletCount: '3', audience: 'general' },
  { content: article3, bulletCount: '3', audience: 'general' }
], { maxConcurrency: 3 });

Example 2: Translation chain with quality check

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser, JsonOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence } from '@langchain/core/runnables';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0.3 });

// Step 1: Translate
const translateChain = ChatPromptTemplate.fromMessages([
  ['system', `You are a professional translator. 
Translate naturally — not word-for-word. Preserve tone and intent.`],
  ['user', 'Translate from {sourceLang} to {targetLang}:\n\n{text}']
]).pipe(model).pipe(new StringOutputParser());

// Step 2: Quality check
const qualityCheckChain = ChatPromptTemplate.fromMessages([
  ['system', `You are a translation quality reviewer. 
Compare the original and translation. Respond with JSON:
{{"score": 1-10, "issues": ["list of issues"], "suggestion": "improved translation or null"}}`],
  ['user', 'Original ({sourceLang}): {original}\n\nTranslation ({targetLang}): {translation}']
]).pipe(model).pipe(new JsonOutputParser());

// Combined chain
const translationPipeline = RunnableSequence.from([
  // Run translation
  async (input) => ({
    ...input,
    translation: await translateChain.invoke(input)
  }),
  // Run quality check
  async (input) => ({
    original: input.text,
    translation: input.translation,
    qualityCheck: await qualityCheckChain.invoke({
      sourceLang: input.sourceLang,
      targetLang: input.targetLang,
      original: input.text,
      translation: input.translation
    })
  })
]);

const result = await translationPipeline.invoke({
  text: 'The early bird catches the worm, but the second mouse gets the cheese.',
  sourceLang: 'English',
  targetLang: 'French'
});

console.log(result);
// {
//   original: "The early bird catches the worm...",
//   translation: "L'avenir appartient à ceux qui se lèvent tôt...",
//   qualityCheck: {
//     score: 8,
//     issues: ["Idiom was adapted rather than literally translated — appropriate choice"],
//     suggestion: null
//   }
// }

Example 3: Code review chain

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { z } from 'zod';

const ReviewSchema = z.object({
  overallScore: z.number().min(1).max(10),
  issues: z.array(z.object({
    severity: z.enum(['critical', 'warning', 'suggestion']),
    line: z.string(),
    description: z.string(),
    fix: z.string()
  })),
  summary: z.string()
});

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });
const structuredModel = model.withStructuredOutput(ReviewSchema);

const codeReviewChain = ChatPromptTemplate.fromMessages([
  ['system', `You are a senior code reviewer. Review the code for:
- Security vulnerabilities
- Performance issues
- Best practice violations
- Error handling gaps
Be specific about line references and provide concrete fixes.`],
  ['user', 'Language: {language}\n\nCode:\n```\n{code}\n```']
]).pipe(structuredModel);

const review = await codeReviewChain.invoke({
  language: 'JavaScript',
  code: `
async function getUser(id) {
  const res = await fetch('/api/users/' + id);
  const data = await res.json();
  return data;
}
  `
});

console.log(JSON.stringify(review, null, 2));
// {
//   "overallScore": 4,
//   "issues": [
//     {
//       "severity": "critical",
//       "line": "const res = await fetch('/api/users/' + id);",
//       "description": "No input validation on 'id' parameter — potential injection",
//       "fix": "Validate that 'id' is a number/UUID before concatenating into URL"
//     },
//     {
//       "severity": "critical",
//       "line": "const data = await res.json();",
//       "description": "No check for response.ok — will parse error responses as data",
//       "fix": "Add: if (!res.ok) throw new Error(`HTTP ${res.status}`);"
//     },
//     ...
//   ],
//   "summary": "The function lacks error handling and input validation..."
// }

7. Key Takeaways

  1. PromptTemplate builds reusable prompts with {variables} — use ChatPromptTemplate.fromMessages() for message-based APIs, which is almost always what you want.
  2. MessagesPlaceholder lets you inject conversation history into templates — essential for chatbots.
  3. The legacy LLMChain is deprecated — always use the LCEL pipe syntax: prompt.pipe(model).pipe(parser).
  4. The three-stage pipeline (prompt -> model -> parser) is the fundamental building block. Choose your parser: StringOutputParser for text, JsonOutputParser for JSON, .withStructuredOutput(zodSchema) for validated structured data.
  5. Sequential chains pass output from one step as input to the next — use RunnableSequence.from([...]) or chain multiple .pipe() calls with transformation functions.
  6. Batch processing is built-in — chain.batch([...], { maxConcurrency: N }) processes multiple inputs in parallel.
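Takeaway 6 is worth demystifying: maxConcurrency is conceptually just a worker-pool pattern. A plain-JavaScript sketch of what batch() does (illustrative, not the real implementation — the real method also handles per-call options and errors):

```javascript
// Sketch of batch([...], { maxConcurrency: N }): run at most N invocations at
// a time while preserving input order in the results. Not the real internals.
async function batch(invoke, inputs, { maxConcurrency = Infinity } = {}) {
  const results = new Array(inputs.length);
  let next = 0;
  async function worker() {
    // Each worker claims the next unclaimed input until none remain.
    while (next < inputs.length) {
      const i = next++;
      results[i] = await invoke(inputs[i]);
    }
  }
  const n = Math.min(maxConcurrency, inputs.length);
  await Promise.all(Array.from({ length: n }, () => worker()));
  return results;
}

// Stand-in for chain.invoke
const double = async (x) => x * 2;
batch(double, [1, 2, 3, 4], { maxConcurrency: 2 }).then((out) => {
  console.log(out); // [2, 4, 6, 8]
});
```

Because results are written by input index, the output order matches the input order even when calls finish out of order.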

Explain-It Challenge

  1. Explain why ChatPromptTemplate.fromMessages() is preferred over PromptTemplate.fromTemplate() for modern LLM applications.
  2. Show how you would refactor a hardcoded prompt string into a ChatPromptTemplate and explain what you gain.
  3. A colleague builds a pipeline that summarizes text, then translates it, then formats it. Each step is a separate await call with no chain. Explain how to refactor this using RunnableSequence and why it is better.

Navigation: <- 4.17.a Introduction | 4.17.c — Tools and Memory ->