Day 01 Foundations

LangChain Basics — Chains, Prompts, Output Parsers

Install LangChain, build your first chain with LCEL, write prompt templates, and use output parsers. The foundations of every LangChain application.

~1 hour · Hands-on · Precision AI Academy

Today's Objective

A content generation pipeline — a chain that takes a topic, generates a blog post title, then expands it into an outline. Two chained LLM calls, properly composed with LCEL.


  
shell
# Core LangChain + OpenAI integration
pip install langchain langchain-openai

# Or for Anthropic/Claude
pip install langchain langchain-anthropic

# Set your API key
export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."

Which model to use: This course uses OpenAI's gpt-4o-mini and Anthropic's Claude Haiku interchangeably. They're both fast and cheap. The LangChain code is identical except the import and model name.

01
Core Concept

LCEL — LangChain Expression Language

LCEL is the modern way to compose LangChain components. It uses the pipe operator | to chain components together. Each component's output feeds into the next.

first_chain.py
python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. The model
model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# 2. A prompt template
prompt = ChatPromptTemplate.from_template(
    "Write a compelling blog post title about: {topic}"
)

# 3. An output parser (extracts text from the response)
parser = StrOutputParser()

# 4. Chain them together with LCEL pipe operator
chain = prompt | model | parser

# 5. Invoke the chain
result = chain.invoke({"topic": "building AI apps with LangChain"})
print(result)

The | operator connects components. The chain reads left to right: prompt formats the input → model generates a response → parser extracts the string.
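To build intuition for what | is doing, here's a toy sketch of left-to-right composition in plain Python. This is not LangChain's actual implementation — just the idea: each component wraps a function, and | glues two of them into a new one.

```python
class Step:
    """Toy runnable: wraps a function and supports | composition."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # self | other -> a Step that runs self first, then other
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Mimic prompt | model | parser with plain functions
prompt = Step(lambda d: f"Write a title about: {d['topic']}")
model = Step(lambda text: f"LLM response to [{text}]")
parser = Step(lambda resp: resp.strip())

chain = prompt | model | parser
print(chain.invoke({"topic": "LangChain"}))
# → LLM response to [Write a title about: LangChain]
```

Real LangChain runnables add batching, streaming, and async on top, but the composition model is the same: | produces a new runnable whose invoke runs the left side, then feeds the result to the right side.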

02
Prompt Templates

Prompt Templates — Dynamic, Reusable Prompts

Hard-coding prompts as strings doesn't scale. Prompt templates let you define the structure once and fill in variables at runtime.

python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# System + human message template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {role}. Be {tone}."),
    ("human", "{question}")
])

# The template needs all variables when invoked
chain = prompt | model | StrOutputParser()

result = chain.invoke({
    "role": "senior software engineer",
    "tone": "direct and concise",
    "question": "What are the biggest mistakes in API design?"
})
print(result)
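Why must all variables be supplied? Because formatting fails fast when one is missing, which catches wiring mistakes before any tokens are spent. A plain-Python illustration of the same behavior (str.format_map stands in for the template engine here):

```python
template = "You are a {role}. Be {tone}."

def render(template, variables):
    # format_map raises KeyError if a variable is missing,
    # much like a prompt template rejecting incomplete input
    return template.format_map(variables)

print(render(template, {"role": "reviewer", "tone": "concise"}))
# → You are a reviewer. Be concise.

try:
    render(template, {"role": "reviewer"})  # 'tone' missing
except KeyError as e:
    print(f"missing variable: {e}")  # missing variable: 'tone'
```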
03
Chaining

Chaining Multiple LLM Calls

The power of LCEL is composing complex multi-step workflows. Here's the content pipeline — title generation feeds into outline generation:

content_pipeline.py
python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Chain 1: topic → title
title_prompt = ChatPromptTemplate.from_template(
    "Write ONE compelling blog post title about: {topic}"
)
title_chain = title_prompt | model | parser

# Chain 2: title → outline
outline_prompt = ChatPromptTemplate.from_template(
    "Create a 5-section blog post outline for this title: {title}"
)
outline_chain = outline_prompt | model | parser

# Combine: topic → title → outline
full_pipeline = (
    {"title": title_chain}
    | outline_chain
)

result = full_pipeline.invoke({"topic": "LangChain for production AI"})
print(result)

What's RunnablePassthrough? A runnable from langchain_core.runnables that passes its input through unchanged — useful when a later step needs the original values (like the topic) forwarded alongside transformed ones. This pipeline doesn't need it, because the outline prompt only uses the title.
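A toy illustration of the passthrough idea in plain Python (not LangChain's API — the function names here are made up): run several steps on the same input in parallel, keeping the original input alongside a transformed value.

```python
def passthrough(x):
    # identity: forwards the input unchanged, like RunnablePassthrough
    return x

def make_title(inputs):
    # stand-in for title_chain: pretend we generated a title
    return f"Why {inputs['topic']} Matters"

def parallel(steps):
    # mimic LCEL's dict-of-runnables: run every step on the same input
    return lambda x: {name: step(x) for name, step in steps.items()}

combine = parallel({"title": make_title, "original": passthrough})
result = combine({"topic": "LangChain"})
print(result["title"])     # Why LangChain Matters
print(result["original"])  # {'topic': 'LangChain'}
```

In real LCEL, a dict of runnables in a chain behaves just like `parallel` above: every value runs against the same input, and a RunnablePassthrough entry is how you keep the original input in the result.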

20%
Day 1 Done

Tomorrow: memory and conversation

Day 2 shows how to build chatbots that remember context across multiple turns.

Day 2: Memory and Conversation

Supporting References & Reading

Go deeper with these external resources.

Docs
LangChain official documentation — chains, prompt templates, and output parsers.
GitHub
Open-source LangChain examples and starter projects on GitHub.

Day 1 Checkpoint

Before moving on, confirm understanding of these key concepts:

LCEL composes components left to right with the | operator
Prompt templates define the prompt structure once and fill in variables at runtime
Output parsers turn raw model responses into usable values like strings
Multi-step chains feed one LLM call's output into the next call's input

Continue To Day 2
Day 2 of the LangChain in 5 Days course