Using LangGraph to Create AI Workflows

LangGraph.js is a powerful framework for building AI workflows using graph-based structures. It provides developers with tools to design, debug, and deploy complex AI systems with ease. In this blog, we’ll explore how LangGraph can be used to create sophisticated AI workflows, including its key features and practical examples.
Why Use LangGraph?
LangGraph offers several advantages for building AI workflows:
- Graph-based Design: Workflows are represented as graphs, making it easy to visualize and manage complex systems.
- Modularity: Nodes and edges in the graph can be reused, enabling modular design.
- Persistence: Built-in support for state persistence allows workflows to resume from checkpoints.
- Human-in-the-Loop: Integrate human decision-making at critical points in the workflow.
- Streaming: Stream intermediate results for real-time feedback.
Getting Started with LangGraph
Prerequisites
To follow along, ensure you have the following:
- Node.js (v18 or newer)
- API keys for OpenAI and Tavily (if using their services)
Install the required packages:
npm install @langchain/core @langchain/langgraph @langchain/openai
Creating Your First Workflow
Here’s how to create a simple AI workflow using LangGraph:
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

// Define the state schema
const StateAnnotation = Annotation.Root({
  messages: Annotation({
    reducer: (x, y) => x.concat(y)
  })
});

// Define nodes
const callModel = async (state) => {
  const { messages } = state;
  const response = await new ChatOpenAI({ model: "gpt-4" }).invoke(messages);
  return { messages: [response] };
};

// Build the graph
const workflow = new StateGraph(StateAnnotation)
  .addNode("callModel", callModel)
  .addEdge("__start__", "callModel")
  .addEdge("callModel", "__end__")
  .compile();

// Execute the workflow
const result = await workflow.invoke({ messages: ["Hello, AI!"] });
console.log(result);
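Under the hood, each node returns a partial state update, and the channel's reducer merges that update into the existing state rather than replacing it. Here is a minimal, dependency-free sketch of that merge step in plain JavaScript (the `applyUpdate` helper and `channels` shape are hypothetical illustrations, not LangGraph's actual internals):

```javascript
// Hypothetical sketch of how a channel reducer merges a node's output into
// state. It mimics the `messages` channel above; it is not LangGraph's code.
const channels = {
  messages: { reducer: (x, y) => x.concat(y), initial: [] }
};

function applyUpdate(state, update) {
  // Merge each returned key through its channel's reducer
  const next = { ...state };
  for (const [key, value] of Object.entries(update)) {
    next[key] = channels[key].reducer(next[key] ?? channels[key].initial, value);
  }
  return next;
}

let state = { messages: ["Hello, AI!"] };
// A node returns a partial update; the reducer appends rather than replaces
state = applyUpdate(state, { messages: ["Hi there! How can I help?"] });
console.log(state.messages.length); // 2
```

This is why `callModel` can return just `{ messages: [response] }` and still preserve the conversation history: the reducer appends the new message to the accumulated list.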
Key Features
Human-in-the-Loop
LangGraph allows you to pause workflows for human input. For example:
import { interrupt } from "@langchain/langgraph";
const humanInputNode = (state) => {
  const userInput = interrupt("Please provide input:");
  return { messages: [userInput] };
};
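Conceptually, `interrupt` pauses the graph at that node, surfaces a payload to the caller, and later re-runs the node with the human's answer. A rough, framework-free sketch of that pause/resume cycle (the `GraphInterrupt` class and `run` helper are hypothetical illustrations, not LangGraph's implementation):

```javascript
// Hypothetical sketch of interrupt-style pause/resume, not LangGraph internals.
class GraphInterrupt extends Error {
  constructor(payload) {
    super("interrupted");
    this.payload = payload;
  }
}

// resumeValue is undefined on the first run and the human's answer on resume
function humanInputNode(state, resumeValue) {
  if (resumeValue === undefined) {
    throw new GraphInterrupt("Please provide input:"); // pause here
  }
  return { messages: state.messages.concat(resumeValue) };
}

function run(node, state, resumeValue) {
  try {
    return { status: "done", state: node(state, resumeValue) };
  } catch (e) {
    if (e instanceof GraphInterrupt) {
      return { status: "interrupted", prompt: e.payload };
    }
    throw e;
  }
}

const first = run(humanInputNode, { messages: [] });
console.log(first.status, first.prompt); // interrupted Please provide input:
const resumed = run(humanInputNode, { messages: [] }, "My answer");
console.log(resumed.state.messages); // [ 'My answer' ]
```

In real LangGraph, resuming relies on a checkpointer to restore the paused state, which is where the persistence feature below comes in.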
Persistence
Use the MemorySaver checkpointer to persist workflow state:
import { MemorySaver } from "@langchain/langgraph";

const memory = new MemorySaver();
const workflow = new StateGraph(StateAnnotation)
  .addNode("callModel", callModel)
  .addEdge("__start__", "callModel")
  .compile({ checkpointer: memory });
Streaming
Stream intermediate results for real-time feedback:
for await (const event of workflow.stream({ messages: ["Stream this!"] })) {
  console.log(event);
}
Advanced Use Cases
Multi-Agent Systems
LangGraph supports multi-agent workflows where agents collaborate to solve tasks. Below, we will build a sophisticated example involving three specialized agents: a researcher, a writer, and a critic. Each agent has a specific role defined by a system prompt, and the workflow iterates until the critic provides no further feedback.
Step 1: Setting up the Imports and Models
First, let’s set up our imports and configure the language model we’ll use for our agents:
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
const MAX_REVISION_CYCLES = 2; // Allows for 1 initial write + 2 revisions = 3 total writer attempts
// Initialize the language model
const llm = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4.1-nano",
  temperature: 0.7
});
Step 2: Research Agent
The researchAgent is responsible for researching a given topic and providing five concise, relevant facts. A system prompt defines its role:
const researchSystemPrompt = `You are a Research Agent specialized in gathering factual information.
Your task is to provide 5 concise, well-researched facts about a given topic.
Make your facts specific, accurate, and informative.
Each fact should be 1-2 sentences long.
Focus on delivering substantive information rather than general knowledge.`;
const researchAgent = async (state) => {
  const { topic } = state;
  console.log(`--- researchAgent: Received topic: ${topic} ---`);
  try {
    console.log("--- researchAgent: Calling LLM ---");
    const response = await llm.invoke([
      new SystemMessage(researchSystemPrompt),
      new HumanMessage(
        `Research the following topic and provide 5 key facts: ${topic}`
      )
    ]);
    console.log("--- researchAgent: Got response from LLM ---");
    // Extract facts from the response
    const content = response.content;
    if (typeof content !== "string") {
      console.error(
        "--- researchAgent: ERROR - response content is not a string! ---"
      );
      // Return only keys that exist in the state schema
      return { facts: "" };
    }
    const returnValue = { facts: content };
    console.log("--- researchAgent: Returning Facts ---");
    return returnValue;
  } catch (error) {
    console.error(
      "--- researchAgent: ERROR during llm.invoke or processing ---",
      error
    );
    throw error;
  }
};
Step 3: Writer Agent
The writerAgent takes the facts provided by the researchAgent and crafts them into a coherent article:
const writerSystemPrompt = `You are a Writer Agent that crafts engaging articles.
Using the facts provided to you, create a well-structured article.
Your article should have:
- A compelling introduction
- Well-organized body paragraphs that elaborate on each fact
- A thoughtful conclusion
- Smooth transitions between ideas
Aim for clarity, coherence, and an engaging style.`;
const writerAgent = async (state) => {
  console.log("--- writerAgent: Entered ---");
  const { facts, feedback } = state;
  console.log("--- writerAgent: Received Facts ---");
  console.log(
    `--- writerAgent: Received ${feedback ? "Feedback" : "No Feedback"} ---`
  );
  let prompt = `Write an informative article based on these facts:
${facts}
`;
  if (feedback) {
    prompt += `\nPlease revise the article according to this feedback: ${feedback}`;
  }
  try {
    console.log("--- writerAgent: Calling LLM ---");
    const response = await llm.invoke([
      new SystemMessage(writerSystemPrompt),
      new HumanMessage(prompt)
    ]);
    const returnValue = { article: response.content };
    console.log("--- writerAgent: Returning Article ---");
    return returnValue;
  } catch (error) {
    console.error("--- writerAgent: ERROR during LLM Call ---", error);
    throw error;
  }
};
Step 4: Critic Agent
The criticAgent analyzes the article and provides constructive feedback for improvement:
const criticSystemPrompt = `You are a Critic Agent with expertise in content evaluation.
Assess the article objectively and provide constructive feedback on:
- Clarity and coherence
- Use of supporting details
- Overall structure and flow
- Language and style
If improvements are needed, be specific about what changes would enhance the article.
If the article is good and needs no changes, simply respond with "Looks good".`;
const criticAgent = async (state) => {
  const { article, revisionCount } = state;
  console.log(`--- criticAgent: Entered with revision: ${revisionCount} ---`);
  const response = await llm.invoke([
    new SystemMessage(criticSystemPrompt),
    new HumanMessage(
      `Please review this article and provide specific feedback for improvement:\n\n${article}`
    )
  ]);
  const feedback = response.content;
  const needsImprovement = !feedback.toLowerCase().includes("looks good");
  if (needsImprovement) {
    const newRevisionCount = revisionCount + 1;
    console.log(
      `--- criticAgent: Improvement needed. New revisionCount: ${newRevisionCount} ---`
    );
    return {
      feedback,
      needsImprovement: true,
      revisionCount: newRevisionCount
    };
  } else {
    console.log("--- criticAgent: Article looks good. ---");
    return {
      feedback,
      needsImprovement: false,
      revisionCount
    };
  }
};
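The approval check above is just a case-insensitive substring test on the critic's reply. Extracted as a standalone predicate for illustration:

```javascript
// The same approval test the critic uses: any reply containing
// "looks good" (case-insensitive) counts as approval.
const needsImprovement = (feedback) =>
  !feedback.toLowerCase().includes("looks good");

console.log(needsImprovement("Looks good")); // false
console.log(needsImprovement("The intro is weak; add transitions.")); // true
// Caveat: a reply like "It looks good except for the ending" also counts
// as approval, so a stricter check may be worth adding in production.
```

Keeping the control signal this simple makes the routing logic easy to reason about, at the cost of occasionally misreading a hedged review as approval.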
Step 5: Building the Workflow
Now let’s connect all the agents in a workflow that iterates until the critic is satisfied with the article, or until the writer has produced three drafts (the initial draft plus MAX_REVISION_CYCLES revisions):
// Define the state schema with appropriate annotations
const ContentWorkflowState = Annotation.Root({
  topic: Annotation(),
  facts: Annotation({ default: () => "" }),
  article: Annotation({ default: () => "" }),
  feedback: Annotation({ default: () => "" }),
  needsImprovement: Annotation({ default: () => false }),
  revisionCount: Annotation({ default: () => 0 })
});
// Build the workflow graph
const contentWorkflow = new StateGraph(ContentWorkflowState)
  .addNode("researchAgent", researchAgent)
  .addNode("writerAgent", writerAgent)
  .addNode("criticAgent", criticAgent)
  // Set up the initial flow: research -> write -> critique
  .addEdge("__start__", "researchAgent")
  .addEdge("researchAgent", "writerAgent")
  .addEdge("writerAgent", "criticAgent")
  // Add conditional logic: either end or go back to the writer for revisions
  .addConditionalEdges("criticAgent", (state) => {
    if (state.needsImprovement && state.revisionCount <= MAX_REVISION_CYCLES) {
      console.log(
        "--- Conditional Edge: Routing to writerAgent for improvement ---"
      );
      return "writerAgent";
    }
    if (state.needsImprovement && state.revisionCount > MAX_REVISION_CYCLES) {
      console.log(
        "--- Conditional Edge: Max revisions reached. Routing to end ---"
      );
    } else {
      console.log(
        "--- Conditional Edge: No improvement needed. Routing to end ---"
      );
    }
    return "__end__";
  })
  .compile();
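The routing function above, combined with MAX_REVISION_CYCLES = 2, caps the writer at three drafts: one initial draft plus two revisions. This standalone simulation of the route decision, with the critic stubbed to always demand changes, shows the loop terminating at that cap (the stub and loop are illustrative, not part of the workflow itself):

```javascript
// Standalone simulation of the conditional edge with a critic stub that
// always requests changes; verifies the revision cap terminates the loop.
const MAX_REVISION_CYCLES = 2;

const route = (state) =>
  state.needsImprovement && state.revisionCount <= MAX_REVISION_CYCLES
    ? "writerAgent"
    : "__end__";

let state = { needsImprovement: false, revisionCount: 0 };
let writerRuns = 0;

do {
  writerRuns += 1; // writerAgent drafts or revises the article
  // criticAgent (stubbed): always unhappy, increments the revision count
  state = { needsImprovement: true, revisionCount: state.revisionCount + 1 };
} while (route(state) === "writerAgent");

console.log(writerRuns); // 3 -> one initial draft + two revisions
```

Without the revisionCount guard, a perpetually unhappy critic would send the graph around the write-critique cycle forever, so bounding the loop is essential in any critic-in-the-loop design.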
Step 6: Testing the Workflow
To see how our workflow functions, let’s run it and examine the output:
// The invoke function returns the final state
console.log("Starting agent workflow");

const initialState = {
  topic: "LangGraph and Multi-Agent Systems",
  revisionCount: 0
};

const finalState = await contentWorkflow
  .invoke(initialState)
  .catch((error) => {
    console.error("\n--- Workflow Invocation Error ---");
    console.error("Error details:", error);
  });

if (finalState) {
  console.log("\n--- Result ---\n");
  console.log("Article:", finalState.article);
} else {
  console.log(
    "\n--- Workflow did not complete successfully or an error occurred ---"
  );
}
This example demonstrates how LangGraph can be used to create a sophisticated multi-agent workflow. Each agent has a specific role defined by a system prompt, and the workflow iterates until the critic is satisfied with the article. The key benefits of using LangGraph for this type of workflow include:
- Explicit flow control between agents
- State persistence between iterations
- Conditional branching based on agent outputs
- Modular design that allows for easy agent replacement or modification
Conclusion
LangGraph.js is a versatile framework for building AI workflows. Its graph-based design, modularity, and advanced features like persistence and human-in-the-loop make it an excellent choice for developers. Start experimenting with LangGraph today and unlock the potential of AI workflows!