Why Use Chains?
By composing modular stages such as retrievers, prompts, LLMs, and output parsers, you can:

- Inject relevant context before calling an LLM
- Validate or transform model outputs
- Orchestrate multi-step processes, from data fetching to API calls
Core Chain Components
| Component | Purpose | LangChain Class Example |
|---|---|---|
| Retriever | Fetches contextual data (e.g., from a vector store) | VectorDBRetriever |
| Prompt | Defines the template for LLM input | PromptTemplate |
| LLM | Executes the prompt and generates the raw completion | OpenAI, AzureOpenAI |
| Output Parser | Parses or validates LLM outputs (e.g., JSON, regex) | PydanticOutputParser |
| Function Call | Invokes external APIs or Python functions as part of flow | StructuredToolChain |
You can easily insert an output parser to enforce structure on your LLM’s response (for example, to ensure valid JSON).
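As a stdlib-only sketch of what such a parser does (in practice a class like PydanticOutputParser plays this role; `parse_json_output` is an illustrative name, not a LangChain API):

```python
import json

def parse_json_output(raw: str, required_keys: set) -> dict:
    """Hand-rolled output parser: reject completions that are not
    JSON objects carrying the fields downstream code expects."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"LLM output is not valid JSON: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("LLM output is not a JSON object")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"LLM output is missing keys: {sorted(missing)}")
    return data

parsed = parse_json_output('{"city": "Paris", "country": "France"}',
                           {"city", "country"})
```

Raising on malformed output lets the chain retry or surface a clear error instead of passing garbage to the next stage.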
Basic Sequential Chain Example
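Here is a minimal, framework-free sketch of a sequential translation chain. Plain Python callables stand in for LangChain's prompt, LLM, and parser components; `translate_chain` and `fake_llm` are illustrative names, and the LLM is stubbed with a canned completion:

```python
import json

def make_prompt(text: str, target_lang: str) -> str:
    # Stage 1: a prompt template fills user input into a fixed instruction.
    return f"Translate the following text to {target_lang}:\n{text}"

def fake_llm(prompt: str) -> str:
    # Stage 2: stand-in for a real LLM call; returns a canned completion.
    return '{"translation": "Bonjour le monde"}'

def parse_output(raw: str) -> dict:
    # Stage 3: the output parser enforces valid JSON with the expected key.
    data = json.loads(raw)
    if "translation" not in data:
        raise ValueError("completion is missing the 'translation' key")
    return data

def translate_chain(text: str, target_lang: str) -> dict:
    # Sequential execution: prompt -> LLM -> parser.
    return parse_output(fake_llm(make_prompt(text, target_lang)))

result = translate_chain("Hello world", "French")
```

Swapping `fake_llm` for a real model client leaves the rest of the chain untouched, which is the point of composing stages.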
In a sequential chain, each stage's output becomes the next stage's input, so prompt formatting, the LLM call, and output parsing stay decoupled.

Chain Execution Modes
Chains in LangChain support two primary execution modes:

| Chain Type | Description | Use Case |
|---|---|---|
| SequentialChain | Executes each component step-by-step in a defined order | Prompt → LLM → Parser |
| Router/Parallel | Dispatches inputs to multiple branches in parallel, then merges output | Calling different APIs or data sources |
Parallel execution can increase throughput, but it can also multiply API costs; monitor usage carefully.
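As a sketch of the parallel mode, branches can be dispatched concurrently and their results merged. The branch functions below are illustrative stubs standing in for real API-calling chains, and the example uses only Python's standard `concurrent.futures`:

```python
from concurrent.futures import ThreadPoolExecutor

def weather_branch(query: str) -> str:
    # Stub for a branch that would call a weather API.
    return f"weather({query})"

def news_branch(query: str) -> str:
    # Stub for a branch that would call a news API.
    return f"news({query})"

def parallel_chain(query: str) -> dict:
    # Dispatch both branches concurrently, then merge their outputs
    # into a single result dictionary.
    with ThreadPoolExecutor(max_workers=2) as pool:
        weather = pool.submit(weather_branch, query)
        news = pool.submit(news_branch, query)
        return {"weather": weather.result(), "news": news.result()}

merged = parallel_chain("Paris")
```

Because the branches run concurrently, total latency approaches that of the slowest branch rather than the sum of all of them.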
Next Steps
Chains are highly extensible. In upcoming sections, we’ll cover:

- Customizing chains with callbacks and middleware
- Building nested or recursive chains
- Integrating chains with external data stores and tools