LCEL Configuration

πŸ”§ LangChain LCEL Configuration: Your AI’s Remote Control

Imagine you have a super-smart robot helper. But what if you could give it different instructions on the flyβ€”change its speed, swap its brain, or add backup plans? That’s exactly what LCEL Configuration does for your AI!


🎬 The Story: Meet Your AI Control Room

Picture this: You’re the captain of a spaceship (your AI application). Your ship has many systemsβ€”engines, shields, sensors. Sometimes you need to:

  • Adjust settings without rebuilding the whole ship
  • Switch between engines depending on the mission
  • Have backup systems ready if the main one fails
  • Add checkpoints to monitor everything

LCEL Configuration is your control room. Let’s explore each button and lever!


πŸŽ›οΈ RunnableConfig Basics

What Is It?

Think of RunnableConfig like a settings card you hand to your robot. It tells the robot:

  • β€œHere’s your name tag” (metadata)
  • β€œStop after 30 seconds” (timeouts)
  • β€œTell me what you’re doing” (callbacks)

The Simple Picture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚     RunnableConfig          β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ πŸ“› tags: ["my-chain"]       β”‚
β”‚ πŸ“‹ metadata: {user: "Ali"}  β”‚
β”‚ ⏱️ max_concurrency: 5       β”‚
β”‚ πŸ“ž callbacks: [my_logger]   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Code Example

from langchain_core.runnables import (
    RunnableConfig
)

# Create a settings card
config = RunnableConfig(
    tags=["math-helper"],
    metadata={"student": "Emma"},
    max_concurrency=3
)

# Use it with any runnable
result = my_chain.invoke(
    "What is 2+2?",
    config=config
)

Why It Matters

Without RunnableConfig, you’d have to rebuild your entire chain just to change one setting. With it, you pass different settings each time you call your chain!
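
For example (a minimal sketch, assuming my_chain is any runnable you have already built), the same chain can run with a different settings card on every single call:

# Same chain, two different settings cards
result_a = my_chain.invoke(
    "What is 2+2?",
    config=RunnableConfig(tags=["lesson-1"])
)
result_b = my_chain.invoke(
    "What is 3+3?",
    config=RunnableConfig(tags=["lesson-2"])
)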


βš™οΈ Configurable Runnables

The Big Idea

What if you could leave some parts of your robot blank and fill them in later? That’s what configurable runnables do!

It’s like a pizza order form:

  • Size: _______ (small/medium/large)
  • Topping: _______ (pepperoni/mushroom)

You decide when you order, not when the menu was printed!

How It Works

graph TD A[πŸ• Pizza Template] --> B{Config Time!} B -->|size=large| C[πŸ• Large Pizza] B -->|size=small| D[πŸ• Small Pizza]

Code Example

from langchain_core.runnables import (
    ConfigurableField
)
from langchain_openai import ChatOpenAI

# Create a model with a HOLE: temperature
# is left blank, to be filled in later.
# (configurable_fields works on a runnable's
# own attributes, like an LLM's temperature;
# prompt variables such as {question} are
# simply passed as inputs at call time.)
llm = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="Temperature",
        description="How creative should I be?"
    )
)

# NOW fill in the blank!
result = llm.invoke(
    "Why is the sky blue?",
    config={"configurable": {
        "llm_temperature": 0.9
    }}
)
# Same model, noticeably more creative answer

Key Point

Same chain, different behaviorβ€”just by changing the config!
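
If you'd rather bake the choice in ahead of time, every runnable also has with_config, which returns a copy with settings pre-attached (a sketch reusing the llm defined above):

# Pre-bind the blank: a "creative" variant
creative_llm = llm.with_config(
    configurable={"llm_temperature": 0.9}
)

# No config needed at call time
creative_llm.invoke("Why is the sky blue?")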


πŸ”„ Configurable Alternatives

The Concept

Sometimes you don’t just want to change a valueβ€”you want to swap the entire piece!

Imagine a toy car with swappable wheels:

  • Racing wheels for speed
  • Off-road wheels for dirt
  • Snow wheels for ice

You pick which wheels at race time!

Visual Flow

graph TD
    A[πŸš— Car Chain] --> B{Which Engine?}
    B -->|config: gpt-4| C[🧠 GPT-4 Engine]
    B -->|config: claude| D[🧠 Claude Engine]
    B -->|config: local| E[🧠 Local LLM]

Code Example

from langchain_core.runnables import (
    ConfigurableField
)
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Default engine
main_llm = ChatOpenAI(model="gpt-4")

# Make it swappable!
configurable_llm = main_llm.configurable_alternatives(
    ConfigurableField(id="llm_choice"),
    default_key="gpt4",
    claude=ChatAnthropic(model="claude-3-5-sonnet-20240620"),
    fast=ChatOpenAI(model="gpt-3.5-turbo")
)

# Use GPT-4 (default)
result1 = configurable_llm.invoke("Hello")

# Switch to Claude - same code!
result2 = configurable_llm.invoke(
    "Hello",
    config={"configurable": {
        "llm_choice": "claude"
    }}
)

Power Move

Build one chain, deploy it, then let users or code pick which model runsβ€”without redeploying!
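
A common pattern (a sketch reusing configurable_llm from above) is to pre-bind each choice with with_config and hand out named variants of the same deployed chain:

# One deployed chain, several pre-bound variants
claude_llm = configurable_llm.with_config(
    configurable={"llm_choice": "claude"}
)
fast_llm = configurable_llm.with_config(
    configurable={"llm_choice": "fast"}
)

# Callers just pick a variant -- no redeploy
fast_llm.invoke("Hello")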


πŸ›‘οΈ RunnableWithFallbacks

The Safety Net

What happens when your robot’s main brain stops working? You need a backup plan!

RunnableWithFallbacks is like having:

  1. Main pilot
  2. Co-pilot (if main fails)
  3. Autopilot (if both fail)

How Fallbacks Work

graph TD
    A[πŸ“¨ Request] --> B[πŸ₯‡ Primary LLM]
    B -->|βœ… Success| C[πŸ“€ Response]
    B -->|❌ Error| D[πŸ₯ˆ Fallback 1]
    D -->|βœ… Success| C
    D -->|❌ Error| E[πŸ₯‰ Fallback 2]
    E -->|βœ… Success| C
    E -->|❌ Error| F[πŸ’₯ Final Error]

Code Example

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Primary model (expensive but smart)
primary = ChatOpenAI(model="gpt-4")

# Backup model (cheaper, still good)
backup = ChatOpenAI(model="gpt-3.5-turbo")

# Emergency model (different provider)
emergency = ChatAnthropic(model="claude-3-5-sonnet-20240620")

# Create the safety chain
safe_llm = primary.with_fallbacks(
    [backup, emergency]
)

# If GPT-4 fails (rate limit, error),
# it tries GPT-3.5, then Claude!
result = safe_llm.invoke(
    "Explain quantum physics"
)

Real-World Win

Your app never crashes because one API had a hiccup. Users don’t even notice the switch!
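
By default, with_fallbacks fails over on any Exception. If you only want to switch on specific errors, you can narrow it with exceptions_to_handle. A minimal sketch (the RateLimitError import assumes the openai v1 SDK):

from openai import RateLimitError

# Only fall back on rate limits;
# let other errors surface normally
safe_llm = primary.with_fallbacks(
    [backup, emergency],
    exceptions_to_handle=(RateLimitError,)
)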


🚦 Middleware

What Is Middleware?

Middleware is like a checkpoint your data passes through. Every request and response gets checked!

Think of airport security:

  • ✈️ Before flight: Check passport, scan bags
  • πŸ›¬ After flight: Customs inspection

Middleware does the same for your AI chain!

The Flow

graph LR
    A[πŸ“¨ Input] --> B[πŸ” Pre-Middleware]
    B --> C[πŸ€– Chain]
    C --> D[πŸ” Post-Middleware]
    D --> E[πŸ“€ Output]

What Can Middleware Do?

Middleware Type   Example Use
Logging           Record every request
Validation        Check input format
Caching           Return saved answers
Rate Limiting     Slow down requests
Retry Logic       Try again on failure
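
The Retry Logic row is actually built into LCEL: every runnable has a .with_retry() method. A minimal sketch:

from langchain_openai import ChatOpenAI

# Retry up to 3 times, with jittered
# exponential backoff between attempts
resilient_llm = ChatOpenAI(model="gpt-4").with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=True
)

resilient_llm.invoke("Hello!")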

Code Example (Custom Middleware)

from langchain_core.runnables import (
    RunnableLambda
)

def log_middleware(func):
    """Wrap any function with logging"""
    def wrapper(input_data, config=None):
        print(f"πŸ“₯ Input: {input_data}")
        result = func(input_data, config)
        print(f"πŸ“€ Output: {result}")
        return result
    return RunnableLambda(wrapper)

# Your actual processing
def process(x, config=None):
    return x.upper()

# Wrap it!
logged_chain = log_middleware(process)

# Every call now logs!
logged_chain.invoke("hello world")
# πŸ“₯ Input: hello world
# πŸ“€ Output: HELLO WORLD

Built-in Middleware Power

LangChain provides middleware through callbacks:

from langchain_core.callbacks import (
    StdOutCallbackHandler
)

config = RunnableConfig(
    callbacks=[StdOutCallbackHandler()]
)

# Every step is now logged!
chain.invoke("Question?", config=config)
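
You can also write your own checkpoint by subclassing BaseCallbackHandler and overriding just the hooks you care about. A minimal sketch:

from langchain_core.callbacks import (
    BaseCallbackHandler
)

class CheckpointLogger(BaseCallbackHandler):
    """Print a line when any chain starts or ends."""

    def on_chain_start(self, serialized, inputs, **kwargs):
        print(f"πŸ” Entering with: {inputs}")

    def on_chain_end(self, outputs, **kwargs):
        print(f"πŸ” Leaving with: {outputs}")

# Attach it like any other callback
chain.invoke(
    "Question?",
    config={"callbacks": [CheckpointLogger()]}
)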

🎯 Putting It All Together

Here’s a production-ready chain using ALL concepts:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import (
    ChatPromptTemplate
)
from langchain_core.runnables import (
    ConfigurableField
)

# 1. Create configurable prompt
#    (swap whole templates by tone -- input
#    variables like {topic} stay as inputs)
prompt = ChatPromptTemplate.from_template(
    "You are an excited teacher. "
    "Explain: {topic}"
).configurable_alternatives(
    ConfigurableField(id="tone", name="Teaching Tone"),
    default_key="excited",
    calm=ChatPromptTemplate.from_template(
        "You are a calm teacher. "
        "Explain: {topic}"
    )
)

# 2. Create swappable LLM with fallbacks
llm = ChatOpenAI(
    model="gpt-4"
).configurable_alternatives(
    ConfigurableField(id="model"),
    default_key="gpt4",
    fast=ChatOpenAI(model="gpt-3.5-turbo")
).with_fallbacks(
    [ChatOpenAI(model="gpt-3.5-turbo")]
)

# 3. Build the chain
chain = prompt | llm

# 4. Use with full config
result = chain.invoke(
    {"topic": "black holes"},
    config={
        "configurable": {
            "tone": "excited",
            "model": "fast"
        },
        "tags": ["astronomy"],
        "metadata": {"user_id": "123"}
    }
)

🌟 Quick Reference

Concept                     What It Does                When to Use
RunnableConfig              Pass settings at runtime    Always (tags, metadata, callbacks)
Configurable Fields         Fill in blanks later        When same chain needs different values
Configurable Alternatives   Swap entire components      When you need different models/prompts
Fallbacks                   Backup plans for failures   Production apps that must stay up
Middleware                  Checkpoints for data        Logging, validation, caching

πŸš€ You Did It!

You now control your AI like a master pilot:

  • βœ… Adjust settings without rebuilding
  • βœ… Swap components on the fly
  • βœ… Never crash with fallbacks
  • βœ… Monitor everything with middleware

Your AI isn’t just smartβ€”it’s flexible, reliable, and production-ready!

Now go build something amazing. πŸŽ‰
