Introduction
LangChain is a powerful toolkit for building applications with Large Language Models (LLMs). Whether you’re creating chatbots, retrieval-augmented generation (RAG) systems, or AI agents, LangChain simplifies the process by providing modular components that work together seamlessly.
In this guide, we’ll break down:
- What LangChain is (Library vs. Framework)
- Key Components (Models, Prompts, Chains, Retrieval, Agents)
- How to Build a Simple LLM Pipeline
- Advanced Concepts (RAG, LCEL, LangSmith, LangServe)
1. Is LangChain a Library or Framework?
LangChain is both: it provides pre-built tools you can use piecemeal (like a library), but also encourages structured workflows for composing them (like a framework).
- Library: You can use individual components (e.g., ChatOpenAI, PromptTemplate) independently.
- Framework: Encourages best practices with LCEL (LangChain Expression Language) for chaining operations.
Example:
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
# Step 1: Define a prompt
prompt = ChatPromptTemplate.from_template("Explain {topic} like I'm 5.")
# Step 2: Initialize a model
llm = ChatOpenAI(model="gpt-3.5-turbo")
# Step 3: Chain them together
chain = prompt | llm # LCEL syntax
response = chain.invoke({"topic": "black holes"})
print(response.content)
Output:
“Black holes are like cosmic vacuum cleaners in space. They suck in everything, even light!”
2. Core Building Blocks of LangChain
A. Models
LangChain supports:
- LLMs (Text → Text): OpenAI, Anthropic
- Chat Models (Messages → Messages): ChatOpenAI
- Multimodal Models (Text + Images): GPT-4V
Example:
from langchain.llms import OpenAI
# Note: text-davinci-003 has been retired by OpenAI; gpt-3.5-turbo-instruct is its replacement
llm = OpenAI(model="gpt-3.5-turbo-instruct")
print(llm("What is LangChain?"))
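For comparison, here is a minimal chat-model sketch. Chat models take a list of messages rather than a raw string and return a message back:
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
chat = ChatOpenAI(model="gpt-3.5-turbo")
messages = [
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="What is LangChain?")
]
print(chat.invoke(messages).content)  # the result is an AIMessage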
B. Prompts & Templates
- String Prompts: Basic text placeholders.
- Chat Prompts: Structured messages (System, Human, AI).
Example:
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
system_msg = "You are a helpful assistant."
human_msg = "Explain {topic} in 10 words."
prompt = ChatPromptTemplate.from_messages([
    ("system", system_msg),
    ("human", human_msg)
])
chain = prompt | ChatOpenAI()
print(chain.invoke({"topic": "AI"}).content)
Output:
“AI simulates human intelligence using computers.”
C. Chains (Sequential Workflows)
Chains combine models, prompts, and tools.
Types:
- Simple Chains (one step, like LLMChain).
- Sequential Chains (multi-step, where the output of one feeds into the next).
Example (Simple Chain):
from langchain.chains import LLMChain
# Reuses the llm and prompt objects defined in the sections above
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="blockchain"))
Example (Sequential Chain):
from langchain.chains import SimpleSequentialChain
# Chain 1: Summarize text
chain1 = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template("Summarize: {text}"))
# Chain 2: Translate summary
chain2 = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template("Translate to French: {text}"))
combined_chain = SimpleSequentialChain(chains=[chain1, chain2])
print(combined_chain.run("LangChain helps build LLM apps."))
Output:
“LangChain aide à construire des applications LLM.”
D. Retrieval (RAG – Retrieval-Augmented Generation)
RAG fetches external data (e.g., your own documents) at query time and adds it to the prompt, letting the LLM answer with information beyond its training data.
Key Components:
- Embedding Model (e.g., OpenAIEmbeddings).
- Vector Store (e.g., FAISS, Pinecone).
- Retriever (fetches relevant documents).
Example:
from langchain.document_loaders import WebBaseLoader
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
# Step 1: Load documents
loader = WebBaseLoader("https://en.wikipedia.org/wiki/LangChain")
docs = loader.load()
# Step 2: Store in vector DB
db = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = db.as_retriever()
# Step 3: Use in a chain
from langchain.chains import RetrievalQA
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    chain_type="stuff",
    retriever=retriever
)
print(qa_chain.run("What is LangChain?"))
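In practice, a full web page is usually too long to embed as a single document. A common refinement (sketched here with RecursiveCharacterTextSplitter; the chunk sizes are illustrative assumptions) is to split documents into chunks before indexing:
from langchain.text_splitter import RecursiveCharacterTextSplitter
# Split the pages loaded above into overlapping chunks before embedding
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)
db = FAISS.from_documents(chunks, OpenAIEmbeddings())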
3. Advanced Concepts
A. LangChain Expression Language (LCEL)
A declarative, Pythonic way to compose chains using the | (pipe) operator.
Example:
from langchain.schema import StrOutputParser
chain = (
    ChatPromptTemplate.from_template("Explain {topic}.")
    | ChatOpenAI()
    | StrOutputParser()
)
print(chain.invoke({"topic": "quantum computing"}))
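Because every LCEL chain is a Runnable, it also supports streaming and batching out of the box:
# Stream tokens as they are generated
for chunk in chain.stream({"topic": "quantum computing"}):
    print(chunk, end="", flush=True)
# Run several inputs in parallel
results = chain.batch([{"topic": "AI"}, {"topic": "LLMs"}])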
B. LangSmith (Debugging & Monitoring)
- Traces LLM calls.
- Logs errors & latency.
How to Use:
- Set LANGCHAIN_TRACING_V2=true (plus your LANGCHAIN_API_KEY).
- View traces in the LangSmith dashboard.
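As a minimal sketch (assuming you already have a LangSmith account and API key), tracing can be enabled entirely through environment variables; any chain invoked afterwards is traced automatically:
import os
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"  # from the LangSmith dashboard
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"  # optional: groups traces by project
chain = ChatPromptTemplate.from_template("Explain {topic}.") | ChatOpenAI()
chain.invoke({"topic": "tracing"})  # this call now appears in LangSmith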
C. LangServe (Deploy Chains as APIs)
from fastapi import FastAPI
from langserve import add_routes
app = FastAPI()
# `chain` is any runnable, e.g., the LCEL chain defined above
add_routes(app, chain, path="/explain")
# Run: `uvicorn app:app --reload`
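LangServe exposes the chain at standard endpoints (/invoke, /batch, /stream). Assuming the server above is running locally on port 8000, you can call it like this:
import requests
# Call the deployed chain over HTTP
response = requests.post(
    "http://localhost:8000/explain/invoke",
    json={"input": {"topic": "gravity"}}
)
print(response.json()["output"])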
4. Key Takeaways
✅ LangChain = Library + Framework for LLM apps.
✅ Core Components: Models, Prompts, Chains, Retrieval, Agents.
✅ RAG = Fetch data → Augment LLM responses.
✅ LCEL = Clean syntax for chaining.
✅ LangSmith = Debugging, LangServe = Deployment.
Next Steps
- Try modifying the examples.
- Experiment with LangGraph for cyclic workflows (e.g., chatbots with memory).
- Deploy a chain using LangServe.
