This article explains how to enable debug logging in LangChain applications for troubleshooting and performance tuning.
Inspecting the internal execution of your LangChain workflows is essential for troubleshooting and tuning performance. By default, LangChain hides most runtime details. Enabling debug logging reveals every step—from chain start to LLM invocation and chain end—so you can understand exactly what’s happening under the hood.
Let’s begin with a simple LLMChain that prompts an OpenAI model to explain a concept in the style of a teacher.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

# 1. Define a prompt template with system and human messages
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {subject} teacher"),
    ("human", "Tell me about {concept}")
])

# 2. Initialize the LLM
llm = ChatOpenAI(model_name="gpt-4")

# 3. Create and run the chain
chain = LLMChain(llm=llm, prompt=prompt)
response = chain.invoke({"subject": "physics", "concept": "galaxy"})
print(response)
Expected response:
{ "subject": "physics", "concept": "galaxy", "text": "A galaxy is a gravitationally bound system of stars, stellar remnants, interstellar gas, dust, dark matter, and other astronomical objects..."}
The response dictionary contains all the template variables plus the generated text. You can process it further or store it for downstream tasks.
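If you only need the model's answer, you can pull it out of that dictionary directly. As a minimal sketch, the snippet below extracts the generated text (using text, LLMChain's default output key) and appends the full record to a local JSONL file; the responses.jsonl path is purely illustrative:

import json

# "text" is LLMChain's default output key
answer = response["text"]
print(answer)

# Append the full record (inputs plus output) for downstream processing
with open("responses.jsonl", "a") as f:
    f.write(json.dumps(response) + "\n")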
As your chains grow—adding memory, callbacks, or nested components—understanding the flow becomes challenging. That’s where debug logging shines.
LangChain offers a simple global switch, set_debug, to turn on detailed debug output. Once enabled, it logs every chain and component invocation, including inputs, outputs, timing, and model parameters.
from langchain.globals import set_debug
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

# Enable debug logging across your application
set_debug(True)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {subject} teacher"),
    ("human", "Tell me about {concept}")
])

llm = ChatOpenAI(model_name="gpt-4")
chain = LLMChain(llm=llm, prompt=prompt)

# Invoke with debug enabled
chain.invoke({"subject": "physics", "concept": "galaxy"})
When activated, your console will display output like:
[chain/start] [1:chain:LLMChain] Entering Chain run with input:
{
  "subject": "physics",
  "concept": "galaxy"
}
[llm/start] [1:chain:LLMChain > 2:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: You are a physics teacher\nHuman: Tell me about galaxy"
  ]
}
[llm/end] [1:chain:LLMChain > 2:llm:ChatOpenAI] [3.27s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Galaxies are vast systems of stars, dust, gas, and dark matter held together by gravity..."
      }
    ]
  ]
}
[chain/end] [1:chain:LLMChain] Exiting Chain run with output:
{
  "subject": "physics",
  "concept": "galaxy",
  "text": "Galaxies are vast systems of stars, dust, gas, and dark matter held together by gravity..."
}
Each trace entry shows the nested run path (for example, 1:chain:LLMChain > 2:llm:ChatOpenAI) and, on exit, the elapsed time in brackets, which makes it easy to spot slow or misbehaving components. Remember to disable debug logging with set_debug(False) in production to avoid verbose logs and potential performance overhead.
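If you want traces in development but quiet logs in production, one common pattern is to gate the switch on an environment variable. A minimal sketch, assuming a hypothetical APP_DEBUG variable of your own (LangChain does not read it itself):

import os
from langchain.globals import set_debug

# APP_DEBUG is an illustrative convention for this app, not a LangChain setting;
# debug traces are enabled only when it is set to "1"
set_debug(os.environ.get("APP_DEBUG") == "1")

This keeps production output clean by default while letting you turn traces on without a code change.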