🧠 LangChain LCEL Configuration: Your AI's Remote Control
Imagine you have a super-smart robot helper. But what if you could give it different instructions on the fly: change its speed, swap its brain, or add backup plans? That's exactly what LCEL Configuration does for your AI!
🎬 The Story: Meet Your AI Control Room
Picture this: you're the captain of a spaceship (your AI application). Your ship has many systems: engines, shields, sensors. Sometimes you need to:
- Adjust settings without rebuilding the whole ship
- Switch between engines depending on the mission
- Have backup systems ready if the main one fails
- Add checkpoints to monitor everything
LCEL Configuration is your control room. Let's explore each button and lever!
🎛️ RunnableConfig Basics
What Is It?
Think of RunnableConfig as a settings card you hand to your robot. It tells the robot:
- "Here's your name tag" (metadata)
- "Only juggle a few tasks at once" (max_concurrency)
- "Tell me what you're doing" (callbacks)
The Simple Picture
```
┌───────────────────────────────┐
│        RunnableConfig         │
├───────────────────────────────┤
│ 🏷️ tags: ["my-chain"]         │
│ 📝 metadata: {user: "Ali"}    │
│ ⏱️ max_concurrency: 5         │
│ 🔔 callbacks: [my_logger]     │
└───────────────────────────────┘
```
Code Example
```python
from langchain_core.runnables import RunnableConfig

# Create a settings card
config = RunnableConfig(
    tags=["math-helper"],
    metadata={"student": "Emma"},
    max_concurrency=3,
)

# Use it with any runnable
result = my_chain.invoke("What is 2+2?", config=config)
```
Why It Matters
Without RunnableConfig, you'd have to rebuild your entire chain just to change one setting. With it, you pass different settings each time you call your chain!
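To see why, here's a minimal sketch (the chain itself is a made-up example) of one chain answering two calls with two different settings cards. A plain dict works too, since RunnableConfig is just a typed dictionary:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# A small example chain (hypothetical, for illustration)
my_chain = (
    ChatPromptTemplate.from_template("Answer briefly: {question}")
    | ChatOpenAI(model="gpt-3.5-turbo")
    | StrOutputParser()
)

# Same chain, two different settings cards
result_a = my_chain.invoke(
    {"question": "What is 2+2?"},
    config={"tags": ["math"], "metadata": {"student": "Emma"}},
)
result_b = my_chain.invoke(
    {"question": "Name a planet"},
    config={"tags": ["astronomy"], "metadata": {"student": "Ali"}},
)
```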
✏️ Configurable Runnables
The Big Idea
What if you could leave some parts of your robot blank and fill them in later? That's what configurable runnables do!
It's like a pizza order form:
- Size: _______ (small/medium/large)
- Topping: _______ (pepperoni/mushroom)
You decide when you order, not when the menu was printed!
How It Works
```mermaid
graph TD
    A[🍕 Pizza Template] --> B{Config Time!}
    B -->|size=large| C[🍕 Large Pizza]
    B -->|size=small| D[🍕 Small Pizza]
```
Code Example
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

# Template variables like {question} are ordinary inputs, filled at
# invoke time. configurable_fields() is for a runnable's *attributes*;
# here, the model's temperature.
llm = ChatOpenAI(temperature=0)

# Leave temperature as a HOLE to fill in later
configurable_llm = llm.configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="Temperature",
        description="How creative should I be?",
    )
)

# NOW fill in the blank!
result = configurable_llm.invoke(
    "Why is the sky blue?",
    config={"configurable": {"llm_temperature": 0.9}},
)
```
Key Point
Same chain, different behavior, just by changing the config!
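And if one blank gets filled the same way every time, `.with_config()` bakes the value in. A short sketch continuing the example above:

```python
# Freeze the blank into reusable presets
creative_llm = configurable_llm.with_config(
    configurable={"llm_temperature": 0.9}
)
precise_llm = configurable_llm.with_config(
    configurable={"llm_temperature": 0.0}
)

creative_llm.invoke("Pick a random number")  # runs hot: varied answers
precise_llm.invoke("Pick a random number")   # runs cold: stable answers
```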
🏎️ Configurable Alternatives
The Concept
Sometimes you don't just want to change a value; you want to swap the entire piece!
Imagine a toy car with swappable wheels:
- Racing wheels for speed
- Off-road wheels for dirt
- Snow wheels for ice
You pick which wheels at race time!
Visual Flow
```mermaid
graph TD
    A[🏎 Car Chain] --> B{Which Engine?}
    B -->|config: gpt-4| C[🧠 GPT-4 Engine]
    B -->|config: claude| D[🧠 Claude Engine]
    B -->|config: local| E[🧠 Local LLM]
```
Code Example
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Default engine
main_llm = ChatOpenAI(model="gpt-4")

# Make it swappable!
configurable_llm = main_llm.configurable_alternatives(
    ConfigurableField(id="llm_choice"),
    default_key="gpt4",
    claude=ChatAnthropic(model="claude-3-haiku-20240307"),
    fast=ChatOpenAI(model="gpt-3.5-turbo"),
)

# Use GPT-4 (the default)
result1 = configurable_llm.invoke("Hello")

# Switch to Claude - same code!
result2 = configurable_llm.invoke(
    "Hello",
    config={"configurable": {"llm_choice": "claude"}},
)
```
Power Move
Build one chain, deploy it, then let users or code pick which model runs, without redeploying!
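This works in bulk, too: `batch()` accepts one config per input, so a single deployed runnable can serve different engines in the same call. A sketch continuing the example above:

```python
# One deployed runnable, a different engine per request
answers = configurable_llm.batch(
    ["Hello", "Hello"],
    config=[
        {"configurable": {"llm_choice": "gpt4"}},
        {"configurable": {"llm_choice": "claude"}},
    ],
)
```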
🛡️ RunnableWithFallbacks
The Safety Net
What happens when your robot's main brain stops working? You need a backup plan!
RunnableWithFallbacks is like having:
- Main pilot
- Co-pilot (if main fails)
- Autopilot (if both fail)
How Fallbacks Work
```mermaid
graph TD
    A[📨 Request] --> B[🥇 Primary LLM]
    B -->|✅ Success| C[🤖 Response]
    B -->|❌ Error| D[🥈 Fallback 1]
    D -->|✅ Success| C
    D -->|❌ Error| E[🥉 Fallback 2]
    E -->|✅ Success| C
    E -->|❌ Error| F[💥 Final Error]
```
Code Example
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Primary model (expensive but smart)
primary = ChatOpenAI(model="gpt-4")

# Backup model (cheaper, still good)
backup = ChatOpenAI(model="gpt-3.5-turbo")

# Emergency model (different provider)
emergency = ChatAnthropic(model="claude-3-haiku-20240307")

# Create the safety chain
safe_llm = primary.with_fallbacks([backup, emergency])

# If GPT-4 fails (rate limit, error),
# it tries GPT-3.5, then Claude!
result = safe_llm.invoke("Explain quantum physics")
```
Real-World Win
Your app never crashes because one API had a hiccup. Users don't even notice the switch!
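One refinement: by default, any exception triggers the next fallback. If you only want to fail over on specific errors, pass an `exceptions_to_handle` tuple. A sketch continuing the example above, assuming the `openai` SDK is installed:

```python
import openai

# Only fall back when OpenAI rate-limits us; other errors
# (e.g. a malformed request) still surface immediately
safe_llm = primary.with_fallbacks(
    [backup, emergency],
    exceptions_to_handle=(openai.RateLimitError,),
)
```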
🚦 Middleware
What Is Middleware?
Middleware is like a checkpoint your data passes through. Every request and response gets checked!
Think of airport security:
- ✈️ Before flight: check passport, scan bags
- 🛬 After flight: customs inspection
Middleware does the same for your AI chain!
The Flow
```mermaid
graph LR
    A[📥 Input] --> B[🔍 Pre-Middleware]
    B --> C[🤖 Chain]
    C --> D[🔍 Post-Middleware]
    D --> E[📤 Output]
```
What Can Middleware Do?
| Middleware Type | Example Use |
|---|---|
| Logging | Record every request |
| Validation | Check input format |
| Caching | Return saved answers |
| Rate Limiting | Slow down requests |
| Retry Logic | Try again on failure |
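For the Retry Logic row, you may not need custom middleware at all: every runnable has a built-in `.with_retry()`. A minimal sketch:

```python
from langchain_openai import ChatOpenAI

# Retry up to 3 times, with exponential backoff, before giving up
retrying_llm = ChatOpenAI(model="gpt-4").with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=True,
)
```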
Code Example (Custom Middleware)
```python
from langchain_core.runnables import RunnableLambda

def log_middleware(func):
    """Wrap any function with logging."""
    def wrapper(input_data, config=None):
        print(f"📥 Input: {input_data}")
        result = func(input_data, config)
        print(f"📤 Output: {result}")
        return result
    return RunnableLambda(wrapper)

# Your actual processing
def process(x, config=None):
    return x.upper()

# Wrap it!
logged_chain = log_middleware(process)

# Every call now logs!
logged_chain.invoke("hello world")
# 📥 Input: hello world
# 📤 Output: HELLO WORLD
```
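The decorator above wraps a single function. You can get the same pre/post checkpoints with plain LCEL composition, where each checkpoint is its own pipeline stage; a sketch reusing `process` from above:

```python
from langchain_core.runnables import RunnableLambda

def log_input(x):
    print(f"📥 Input: {x}")
    return x  # pass through unchanged

def log_output(x):
    print(f"📤 Output: {x}")
    return x

# Pre-middleware -> chain -> post-middleware, as in the flow diagram
pipeline = (
    RunnableLambda(log_input)
    | RunnableLambda(process)
    | RunnableLambda(log_output)
)
pipeline.invoke("hello world")
```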
Built-in Middleware Power
LangChain provides middleware through callbacks:
```python
from langchain_core.callbacks import StdOutCallbackHandler
from langchain_core.runnables import RunnableConfig

config = RunnableConfig(
    callbacks=[StdOutCallbackHandler()]
)

# Every step is now logged!
chain.invoke("Question?", config=config)
```
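Callbacks can go deeper than stdout logging: subclass `BaseCallbackHandler` and override only the hooks you care about. A minimal sketch (the handler name is made up for illustration):

```python
from langchain_core.callbacks import BaseCallbackHandler

class CheckpointHandler(BaseCallbackHandler):
    """Hypothetical handler: announce when a chain starts and ends."""

    def on_chain_start(self, serialized, inputs, **kwargs):
        print(f"Chain starting with: {inputs}")

    def on_chain_end(self, outputs, **kwargs):
        print(f"Chain finished with: {outputs}")

chain.invoke("Question?", config={"callbacks": [CheckpointHandler()]})
```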
🎯 Putting It All Together
Here's a production-ready chain using ALL the concepts:
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField

# 1. Prompt with two input variables, both filled at invoke time
prompt = ChatPromptTemplate.from_template(
    "You are a {tone} teacher. Explain: {topic}"
)

# 2. LLM with a configurable field (temperature), a swappable
#    alternative (a faster model), and a fallback
llm = ChatOpenAI(model="gpt-4", temperature=0).configurable_fields(
    temperature=ConfigurableField(id="temperature", name="Creativity")
).configurable_alternatives(
    ConfigurableField(id="model"),
    default_key="gpt4",
    fast=ChatOpenAI(model="gpt-3.5-turbo"),
).with_fallbacks(
    [ChatOpenAI(model="gpt-3.5-turbo")]
)

# 3. Build the chain
chain = prompt | llm

# 4. Use with full config
result = chain.invoke(
    {"topic": "black holes", "tone": "excited"},
    config={
        "configurable": {"model": "fast"},
        "tags": ["astronomy"],
        "metadata": {"user_id": "123"},
    },
)
```
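If one combination becomes your go-to, freeze it into a named preset with `.with_config()` and ship that; a short sketch (the preset name is made up):

```python
# Freeze a favorite configuration into a reusable preset
excited_fast_teacher = chain.with_config(
    configurable={"model": "fast"},
    tags=["preset:excited-fast"],
)
result = excited_fast_teacher.invoke(
    {"topic": "black holes", "tone": "excited"}
)
```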
📋 Quick Reference
| Concept | What It Does | When to Use |
|---|---|---|
| RunnableConfig | Pass settings at runtime | Always: for tags, metadata, callbacks |
| Configurable Fields | Fill in blanks later | When same chain needs different values |
| Configurable Alternatives | Swap entire components | When you need different models/prompts |
| Fallbacks | Backup plans for failures | Production apps that must stay up |
| Middleware | Checkpoints for data | Logging, validation, caching |
🎉 You Did It!
You now control your AI like a master pilot:
- ✅ Adjust settings without rebuilding
- ✅ Swap components on the fly
- ✅ Never crash with fallbacks
- ✅ Monitor everything with middleware
Your AI isn't just smart; it's flexible, reliable, and production-ready!
Now go build something amazing. 🚀