In this guide, you’ll learn how to generate model output with LangChain and transform plain-text responses into structured JSON using the JsonOutputParser. This approach removes manual parsing, ensures valid JSON, and boosts developer productivity.
First, create a simple prompt template to list three countries and their capitals:
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate(
    template="List 3 countries in {continent} and their capitals",
    input_variables=["continent"],
)
raw_output = llm.invoke(input=prompt.format(continent="Asia"))
print(raw_output)
Example response:
1. Japan – Tokyo
2. China – Beijing
3. India – New Delhi
While human-readable, this plain-text format is hard to consume programmatically.
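To see why, consider what consuming this format by hand looks like. A minimal sketch (the regex and the sample string here are illustrative, not part of the guide):

```python
import re

# Sample response in the numbered-list format shown above.
raw_output = """1. Japan – Tokyo
2. China – Beijing
3. India – New Delhi"""

# A hand-written regex works for this exact layout, but it silently
# breaks if the model switches numbering style, separator, or spacing.
pairs = dict(re.findall(r"\d+\.\s*(.+?)\s*[–-]\s*(.+)", raw_output))
print(pairs)  # {'Japan': 'Tokyo', 'China': 'Beijing', 'India': 'New Delhi'}
```

Every prompt tweak or model upgrade can change the layout and quietly break a parser like this, which is the problem structured JSON output avoids.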
Embed the format instructions into your template to enforce JSON output:
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate

# The parser supplies the format instructions to embed in the prompt.
parser = JsonOutputParser()
format_instructions = parser.get_format_instructions()

prompt = PromptTemplate(
    template=(
        "List 3 countries in {continent} and their capitals\n"
        "{format_instructions}"
    ),
    input_variables=["continent"],
    partial_variables={"format_instructions": format_instructions},
)
print(prompt.format(continent="North America"))
Resulting prompt:
List 3 countries in North America and their capitals
Return a JSON object.
By incorporating the JsonOutputParser and embedding format instructions in your LangChain prompts, you can automatically obtain structured JSON from your LLM calls. This streamlines data handling in Python and eliminates fragile text parsing.