Episode 4 — Generative AI Engineering / 4.17 — LangChain Practical
4.17.a — Introduction to LangChain
In one sentence: LangChain is an open-source framework that provides composable building blocks — prompt templates, model wrappers, output parsers, tools, memory, and agents — so you can build complex LLM applications by snapping components together instead of writing glue code from scratch.
Navigation: <- 4.17 Overview | 4.17.b — Chains and Prompt Templates ->
1. What Is LangChain?
LangChain is a framework for building applications powered by large language models. It was created by Harrison Chase in late 2022 and has become one of the most widely adopted LLM frameworks in the ecosystem.
At its core, LangChain provides:
- Standardized interfaces for interacting with different LLM providers (OpenAI, Anthropic, Google, local models)
- Composable components that can be connected together like pipes
- Built-in abstractions for common patterns: retrieval, memory, tool use, agents
- A consistent API so you can swap providers without rewriting your application
```javascript
// Without LangChain: manually wiring everything
import OpenAI from 'openai';

const client = new OpenAI();
const systemPrompt = `You are a helpful assistant. Answer based on the context provided.`;

async function askQuestion(question, context) {
  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: `Context: ${context}\n\nQuestion: ${question}` }
    ],
    temperature: 0
  });
  return response.choices[0].message.content;
}
```
```javascript
// With LangChain: composable components
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });

const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant. Answer based on the context provided.'],
  ['user', 'Context: {context}\n\nQuestion: {question}']
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const answer = await chain.invoke({
  context: 'The refund policy allows returns within 30 days.',
  question: 'What is the refund policy?'
});
```
The LangChain version looks slightly longer for this simple case — and that is the honest tradeoff. The real benefit appears when you need to add memory, tools, streaming, fallbacks, or swap models. The composable structure means you modify one component without touching the rest.
2. Core Philosophy: Composable Components
LangChain is built around the idea that every LLM application is a pipeline of components. Each component has a standardized interface: it takes input, transforms it, and produces output. Components can be connected in sequence, in parallel, or in complex graphs.
The building blocks
| Component | What It Does | Example |
|---|---|---|
| Prompt Template | Formats user input into a prompt | ChatPromptTemplate.fromMessages(...) |
| Model | Calls an LLM API | ChatOpenAI, ChatAnthropic |
| Output Parser | Extracts structured data from model output | StringOutputParser, JsonOutputParser |
| Retriever | Fetches relevant documents from a vector store | vectorStore.asRetriever() |
| Tool | Gives the model access to external capabilities | DynamicTool, TavilySearchResults |
| Memory | Persists conversation history across turns | BufferMemory, ConversationSummaryMemory |
| Agent | Uses a model to decide which tools to call | createOpenAIToolsAgent |
The pipe pattern
Every component in LangChain implements the Runnable interface, which means it has .invoke(), .stream(), .batch(), and .pipe() methods. This is what makes everything composable:
```javascript
// Each component is a Runnable.
// Runnables can be piped together to form a chain.
const chain = promptTemplate  // Runnable: takes {variables} → produces messages
  .pipe(model)                // Runnable: takes messages → produces AIMessage
  .pipe(outputParser);        // Runnable: takes AIMessage → produces string

// The chain itself is also a Runnable
const result = await chain.invoke({ topic: 'JavaScript closures' });
const stream = await chain.stream({ topic: 'JavaScript closures' });
const results = await chain.batch([
  { topic: 'closures' },
  { topic: 'promises' },
  { topic: 'generators' }
]);
```
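To demystify what `.pipe()` is doing, here is a minimal from-scratch sketch of the idea in plain JavaScript. This is an illustration of the pattern only, not LangChain's actual implementation: real Runnables are async and also support `.stream()` and `.batch()`; this toy version is kept synchronous for brevity, and `MiniRunnable` and the stand-in components are hypothetical names.

```javascript
// Illustrative sketch only — NOT LangChain's real implementation.
// A runnable is essentially a wrapped function with .invoke() and .pipe();
// piping two runnables yields another runnable, so chains compose freely.
class MiniRunnable {
  constructor(fn) {
    this.fn = fn;
  }
  invoke(input) {
    return this.fn(input);
  }
  pipe(next) {
    // The composed runnable feeds this runnable's output into the next one
    return new MiniRunnable((input) => next.invoke(this.invoke(input)));
  }
}

// Stand-in components: "prompt template" -> "model" -> "parser"
const promptTemplate = new MiniRunnable(({ topic }) => `Explain ${topic}.`);
const fakeModel = new MiniRunnable((prompt) => ({ content: prompt.toUpperCase() }));
const outputParser = new MiniRunnable((msg) => msg.content);

const chain = promptTemplate.pipe(fakeModel).pipe(outputParser);
console.log(chain.invoke({ topic: 'closures' })); // "EXPLAIN CLOSURES."
```

The key design point carries over directly: because the composed chain exposes the same interface as its parts, you can swap any single stage without touching the others.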
3. Installation
LangChain is modular — you install the core package and then the provider-specific packages you need.
Basic installation
```bash
# Core LangChain package
npm install langchain

# Provider packages (install the ones you need)
npm install @langchain/openai        # For OpenAI models (GPT-4o, etc.)
npm install @langchain/anthropic     # For Anthropic models (Claude, etc.)
npm install @langchain/google-genai  # For Google models (Gemini, etc.)
npm install @langchain/community     # Community integrations (tools, vector stores, etc.)

# Common companion packages
npm install @langchain/core          # Core interfaces (usually installed as a dependency)
```
Typical starter setup
```bash
npm install langchain @langchain/openai @langchain/anthropic
```
Environment variables
```
# .env file
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Optional: LangSmith for tracing and observability
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=ls__...
```
Verify installation
```javascript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  temperature: 0
});

const response = await model.invoke('Say hello in one word.');
console.log(response.content); // "Hello"
```
4. LangChain vs Building from Scratch (Tradeoffs)
This is a critical decision every team faces. Here is an honest comparison:
When LangChain helps
| Benefit | Explanation |
|---|---|
| Rapid prototyping | Build a working RAG chatbot in 50 lines instead of 300 |
| Provider abstraction | Swap from OpenAI to Anthropic by changing one line |
| Built-in patterns | Memory, retrieval, agents, tool use — battle-tested implementations |
| Streaming support | LCEL makes streaming trivial across the entire chain |
| Ecosystem | Hundreds of integrations (vector stores, tools, document loaders) |
| Observability | LangSmith integration for tracing, debugging, and evaluation |
When building from scratch is better
| Concern | Explanation |
|---|---|
| Abstraction overhead | LangChain adds layers between you and the API — harder to debug when things go wrong |
| Rapid changes | LangChain's API has changed significantly over time (legacy chains -> LCEL -> v0.2 -> v0.3). Keeping up is a cost |
| Bundle size | LangChain pulls in many dependencies; for a simple API wrapper, this is overkill |
| Learning curve | Understanding Runnables, LCEL, and the component model takes time |
| Vendor lock-in | Your code becomes dependent on LangChain's abstractions and lifecycle |
| Simple use cases | If you just call one API with one prompt, LangChain adds complexity without benefit |
The decision framework
```
Do you need ONE simple API call?
  YES -> Use the provider SDK directly (openai, @anthropic-ai/sdk)
  NO  -> Continue...

Do you need multiple components (retrieval + memory + tools)?
  YES -> LangChain saves significant development time
  NO  -> Continue...

Do you need to swap providers or test multiple models?
  YES -> LangChain's abstraction layer pays for itself
  NO  -> Continue...

Is your team already using LangChain?
  YES -> Stay consistent
  NO  -> Evaluate: will the learning curve pay off for your use case?
```
5. The LangChain Ecosystem
LangChain is not a single library — it is an ecosystem of tools that work together.
langchain (core framework)
The main framework. Contains prompt templates, output parsers, chains, agents, memory modules, and the LCEL runtime. Note that in recent versions the base interfaces live in `@langchain/core` (which `langchain` depends on), so imports for prompts, parsers, and runnables typically come from there:
```javascript
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence } from '@langchain/core/runnables';
```
@langchain/openai, @langchain/anthropic, etc. (providers)
Provider-specific packages that implement LangChain's model interfaces for each LLM provider. Each package wraps the provider's API behind LangChain's standardized BaseChatModel interface.
```javascript
import { ChatOpenAI } from '@langchain/openai';
import { ChatAnthropic } from '@langchain/anthropic';

// Same interface, different providers
const openai = new ChatOpenAI({ modelName: 'gpt-4o' });
const claude = new ChatAnthropic({ modelName: 'claude-sonnet-4-20250514' });

// Both work identically in a chain
const chain = prompt.pipe(openai).pipe(parser);
// Swap to: prompt.pipe(claude).pipe(parser);
```
LangSmith (observability and evaluation)
A platform (not a library) for monitoring, debugging, and evaluating LLM applications. It records every step of your chain execution, letting you see exactly what was sent to the model, what came back, how long each step took, and how much it cost.
What LangSmith shows you:
- Full trace of every chain execution
- Input/output at each step (prompt template -> model -> parser)
- Latency breakdown per step
- Token counts and cost estimates
- Error details when a step fails
- Dataset management for evaluation
- A/B testing different prompts or models
Enable it by setting environment variables:
```
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=ls__...
LANGCHAIN_PROJECT=my-project
```
Once enabled, every LangChain call is automatically traced — no code changes needed.
LangGraph (complex agent workflows)
LangGraph is a framework for building stateful, multi-step agent workflows as directed graphs. While LangChain's agents handle simple tool-calling loops, LangGraph handles complex workflows with branching, cycles, human-in-the-loop, and persistent state.
```
LangChain Agent: User -> [Think -> Act -> Observe] (loop) -> Answer
LangGraph Agent: User -> [Node A -> Branch -> Node B or C -> Merge -> Node D] (graph)
```
Use LangChain agents when:
- Simple tool-calling loop is sufficient
- Single agent with a few tools
Use LangGraph when:
- Multi-agent collaboration
- Complex branching logic
- Human-in-the-loop approvals
- Persistent state across sessions
- Workflows that look like flowcharts, not loops
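The loop-vs-graph distinction can be sketched in a few lines of plain JavaScript. This is a conceptual toy, not the LangGraph API: `nodes` and `runGraph` are hypothetical names invented here to show how named nodes over shared state enable branching, which a fixed linear pipeline cannot express.

```javascript
// Conceptual toy — NOT the LangGraph API. Each node transforms shared state
// and names the next node to run (or null to stop). Because "next" is
// computed at runtime, the workflow can branch and cycle like a flowchart.
const nodes = {
  classify: (state) => ({
    state,
    next: state.input.length > 10 ? 'summarize' : 'echo', // branch point
  }),
  summarize: (state) => ({
    state: { ...state, output: state.input.slice(0, 10) + '...' },
    next: null,
  }),
  echo: (state) => ({
    state: { ...state, output: state.input },
    next: null,
  }),
};

function runGraph(start, initialState) {
  let current = start;
  let state = initialState;
  while (current !== null) {
    // Run the current node, then follow the edge it chose
    ({ state, next: current } = nodes[current](state));
  }
  return state;
}

console.log(runGraph('classify', { input: 'hi' }).output);                  // "hi"
console.log(runGraph('classify', { input: 'a much longer input' }).output); // "a much lon..."
```

In a real LangGraph application the nodes would be model calls or tool invocations and the state would be checkpointed, but the control-flow idea is the same.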
Ecosystem summary
| Package | Purpose | When to Use |
|---|---|---|
| langchain | Core framework | Always (it's the base) |
| @langchain/openai | OpenAI models | When using GPT-4o, GPT-4, etc. |
| @langchain/anthropic | Anthropic models | When using Claude |
| @langchain/community | Community integrations | Tools, vector stores, document loaders |
| @langchain/core | Core interfaces | Installed as dependency; import Runnables, prompts, parsers |
| LangSmith | Observability platform | Production monitoring, debugging, evaluation |
| LangGraph | Complex agent workflows | Multi-agent, branching, stateful workflows |
6. When to Use LangChain vs Roll Your Own
Here is a practical guide based on real-world project patterns:
Use LangChain
- Prototyping: need a working demo in hours, not days
- RAG applications: document loaders + splitters + vector stores + retrieval chains
- Agent systems: tool-calling, reasoning loops, multi-step tasks
- Multi-provider support: testing OpenAI vs Anthropic vs Gemini
- Team projects: shared abstractions reduce miscommunication
- LangSmith users: tracing and evaluation are deeply integrated
Roll your own
- Single-purpose API wrapper: one model, one prompt, one use case
- Performance-critical paths: every millisecond matters, no abstraction overhead
- Full control needed: custom retry logic, custom streaming, custom error handling
- Minimal dependencies: serverless functions with strict bundle size limits
- Learning purposes: understanding what LangChain abstracts by building it yourself
- Stable APIs: your prompts and models rarely change
Hybrid approach (common in production)
Many production systems use LangChain for the parts where it shines (RAG, agents, observability) and raw SDK calls for the parts where they need full control (latency-critical paths, simple completions).
```javascript
// Use LangChain for the complex RAG pipeline
import { ChatOpenAI } from '@langchain/openai';
import { createRetrievalChain } from 'langchain/chains/retrieval';
// ... full LangChain RAG setup

// Use the raw SDK for simple, latency-critical completions
import OpenAI from 'openai';

const client = new OpenAI();

async function quickClassify(text) {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: `Classify: ${text}` }],
    temperature: 0,
    max_tokens: 10
  });
  return response.choices[0].message.content;
}
```
7. LangChain Version History and Stability
Understanding the version history helps you navigate tutorials and documentation, since many online resources reference outdated APIs.
| Era | API Style | Status |
|---|---|---|
| v0.0.x (2023) | LLMChain, SequentialChain, ConversationChain | Deprecated — legacy chain classes |
| v0.1.x (2023-2024) | LCEL introduced, legacy chains still available | Transitional — LCEL recommended |
| v0.2.x (2024) | LCEL is primary, legacy chains deprecated | Current stable |
| v0.3.x (2024-2025) | Modular packages, cleaner imports | Latest |
Key migration: The biggest change was from legacy chains (class-based: new LLMChain(...)) to LCEL (pipe-based: prompt.pipe(model).pipe(parser)). This guide teaches the modern LCEL approach throughout. When you encounter LLMChain or SequentialChain in tutorials, know that these are legacy patterns.
8. Your First LangChain Application
Let us build a simple but complete application that demonstrates the core concepts:
```javascript
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

// Step 1: Create a model wrapper
const model = new ChatOpenAI({
  modelName: 'gpt-4o',
  temperature: 0.7
});

// Step 2: Create a prompt template
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are an expert {role}. Explain topics clearly with examples.'],
  ['user', 'Explain {topic} in {style} style. Keep it under 3 paragraphs.']
]);

// Step 3: Create an output parser
const parser = new StringOutputParser();

// Step 4: Pipe them together into a chain
const chain = prompt.pipe(model).pipe(parser);

// Step 5: Invoke the chain
const result = await chain.invoke({
  role: 'JavaScript instructor',
  topic: 'closures',
  style: 'beginner-friendly'
});

console.log(result);
// "A closure is a function that remembers the variables from the place
// where it was created, even after that place has finished executing..."

// Step 6: Stream the same chain
const stream = await chain.stream({
  role: 'Python instructor',
  topic: 'decorators',
  style: 'concise'
});

for await (const chunk of stream) {
  process.stdout.write(chunk); // Prints token-by-token
}
```
What just happened
- `ChatOpenAI` wraps the OpenAI API behind LangChain's model interface
- `ChatPromptTemplate` creates a reusable prompt with `{variables}` that get filled at runtime
- `StringOutputParser` extracts the text content from the model's response object
- `.pipe()` connects them into a chain that flows: variables -> formatted prompt -> model -> string
- `.invoke()` runs the chain once and returns the final output; `.stream()` yields it incrementally as it is generated
Every component is independently testable, independently replaceable, and independently reusable. That is the LangChain philosophy.
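As an illustration of that testability claim, the prompt-formatting step can be exercised with no model call and no API key at all. The sketch below uses `formatPrompt`, a hypothetical stand-in for what a prompt template does internally, written here only to show that formatting is a pure, independently testable function.

```javascript
// Illustrative sketch — formatPrompt is a hypothetical stand-in for a
// prompt template's formatting step, NOT a LangChain API. Because it is a
// pure function from variables to a string, it can be unit-tested in
// isolation, without a model, network access, or an API key.
function formatPrompt(template, variables) {
  return template.replace(/\{(\w+)\}/g, (_, name) => {
    if (!(name in variables)) throw new Error(`Missing variable: ${name}`);
    return variables[name];
  });
}

console.log(formatPrompt('Explain {topic} in {style} style.', {
  topic: 'closures',
  style: 'beginner-friendly'
}));
// "Explain closures in beginner-friendly style."
```

The same isolation applies to the other stages: a parser can be tested against canned model responses, and a model wrapper can be swapped for a fake during tests.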
9. Key Takeaways
- LangChain is a framework for composable LLM applications — it provides standardized building blocks (prompts, models, parsers, tools, memory, agents) that snap together via the pipe operator.
- Installation is modular — install `langchain` plus provider-specific packages (`@langchain/openai`, `@langchain/anthropic`) for the models you use.
- The ecosystem includes three pillars — `langchain` (framework), LangSmith (observability), LangGraph (complex workflows).
- LangChain is not always the right choice — for simple single-API calls, the overhead is not justified. For complex multi-component systems, it saves significant development time.
- LCEL is the modern API — legacy chain classes (`LLMChain`, `SequentialChain`) are deprecated. All new code should use the pipe-based LCEL syntax.
- Every component is a Runnable — meaning it supports `.invoke()`, `.stream()`, `.batch()`, and `.pipe()` out of the box.
Explain-It Challenge
- A teammate asks "why should we add LangChain when we already have working OpenAI API calls?" — give three specific scenarios where LangChain adds value and one where it does not.
- Explain the difference between `langchain`, `@langchain/openai`, and `@langchain/core` to a developer who has never used the framework.
- Your startup needs to build a chatbot that retrieves documents, remembers conversation history, and can call external APIs. Argue for or against using LangChain, citing specific tradeoffs.
Navigation: <- 4.17 Overview | 4.17.b — Chains and Prompt Templates ->