# LangGraph Tutorial (TypeScript): Debugging Agent Loops for Advanced Developers
This tutorial shows you how to instrument a LangGraph agent in TypeScript so you can see exactly why it loops, where state changes, and which node keeps re-triggering. You need this when a graph looks “stuck” in production: the model keeps calling the same tool, conditional edges route back forever, or your state reducer is silently preserving bad data.
## What You'll Need

- Node.js 18+
- TypeScript 5+
- `@langchain/langgraph`
- `@langchain/openai`
- `dotenv`
- An OpenAI API key in `OPENAI_API_KEY`
- A project configured for ESM or `ts-node`/`tsx`
Install dependencies:

```shell
npm install @langchain/langgraph @langchain/openai dotenv
npm install -D typescript tsx @types/node
```
## Step-by-Step

- Start with a graph that can loop, then add debug hooks around every state transition. The goal is not to prevent loops yet; it’s to make the loop visible and attributable to a specific node and state value.

```typescript
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import {
  Annotation,
  END,
  START,
  StateGraph,
} from "@langchain/langgraph";
import { BaseMessage, HumanMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const AgentState = Annotation.Root({
  // Accumulating reducer: each node's returned messages are appended.
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
  // Replacing reducer: the latest value wins.
  iterations: Annotation<number>({
    reducer: (_, right) => right,
    default: () => 0,
  }),
});
```
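The two reducers above behave very differently under a loop, and confusing them is a common source of "stuck" state. This is a minimal sketch of that difference using plain functions that mirror the reducers; no LangGraph import is needed:

```typescript
// Plain-function stand-ins for the two reducer styles used in AgentState.
const concatReducer = <T>(left: T[], right: T[]): T[] => left.concat(right);
const replaceReducer = (_left: number, right: number): number => right;

let messages: string[] = [];
let iterations = 0;

// Simulate three graph steps, each returning a partial state update.
for (const update of [
  { messages: ["a"], iterations: 1 },
  { messages: ["b"], iterations: 2 },
  { messages: ["c"], iterations: 3 },
]) {
  messages = concatReducer(messages, update.messages);
  iterations = replaceReducer(iterations, update.iterations);
}

console.log(messages);   // the full accumulated history
console.log(iterations); // only the latest value
```

If a loop seems to "remember" bad data, check whether the field uses an accumulating reducer when you expected a replacing one, or vice versa.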
- Build a node that increments a counter and intentionally creates the kind of loop you want to debug. In real systems this is usually a tool-calling edge that never reaches a terminal condition.

```typescript
async function agentNode(state: typeof AgentState.State) {
  const response = await llm.invoke(state.messages);
  return {
    messages: [response],
    iterations: state.iterations + 1,
  };
}

// Logs a compact snapshot of state on every pass through the loop.
async function debugNode(state: typeof AgentState.State) {
  const last = state.messages[state.messages.length - 1];
  console.log("DEBUG STATE", {
    iterations: state.iterations,
    lastType: last?.constructor?.name,
    lastContent:
      typeof last?.content === "string"
        ? last.content
        : JSON.stringify(last?.content),
  });
  return {}; // no state update; this node only observes
}
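A useful extension of the debug node is detecting when the loop has stopped making progress: if the model's last output is identical between iterations, the graph is spinning, not working. This sketch shows the idea with plain data; the names (`recordAndCheck`, `lastContent`) are illustrative, not LangGraph APIs:

```typescript
// Track consecutive debug snapshots and flag a stuck loop when the model's
// output stops changing between iterations.
const history: string[] = [];

function recordAndCheck(snapshot: { iterations: number; lastContent: string }): boolean {
  const key = snapshot.lastContent; // compare real content, not the counter
  const stuck = history.length > 0 && history[history.length - 1] === key;
  history.push(key);
  return stuck;
}

console.log(recordAndCheck({ iterations: 1, lastContent: "calling tool X" })); // false
console.log(recordAndCheck({ iterations: 2, lastContent: "calling tool X" })); // true: same output twice
```

In `debugNode` you could call a check like this and log a loud warning when it fires, which turns "it feels stuck" into a concrete signal.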
- Add a routing function that detects whether the model is still asking for more work. This is where most loops come from: your condition never returns `END`, or it depends on message content that never changes.

```typescript
function shouldContinue(state: typeof AgentState.State) {
  // Guardrail: hard stop after three iterations, regardless of content.
  if (state.iterations >= 3) return END;
  const last = state.messages[state.messages.length - 1];
  const content = typeof last?.content === "string" ? last.content : "";
  // Terminal condition: the model signals completion in its text.
  if (content.toLowerCase().includes("done")) return END;
  return "debug"; // otherwise route back and keep looping
}
```
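Because the router is a pure function, you can unit-test it without a model call or a compiled graph. This stand-alone sketch mirrors the routing logic above, stubbing LangGraph's `END` sentinel with its underlying string value (in real code you import `END` instead):

```typescript
// Stand-alone copy of the routing logic for unit testing (no LLM, no graph).
const END = "__end__"; // LangGraph's END sentinel resolves to this string
type RouterState = { iterations: number; messages: { content: unknown }[] };

function route(state: RouterState): string {
  if (state.iterations >= 3) return END;
  const last = state.messages[state.messages.length - 1];
  const content = typeof last?.content === "string" ? last.content : "";
  if (content.toLowerCase().includes("done")) return END;
  return "debug";
}

console.log(route({ iterations: 0, messages: [{ content: "still working" }] })); // "debug"
console.log(route({ iterations: 1, messages: [{ content: "All DONE." }] }));     // "__end__"
console.log(route({ iterations: 3, messages: [] }));                              // "__end__"
```

Testing the router in isolation like this quickly exposes the most common bug: a terminal condition that can never be true for the message shapes your model actually produces.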
- Wire the graph with explicit debug checkpoints before and after the agent node. This gives you a trace of each iteration without needing to guess which edge fired.

```typescript
const graph = new StateGraph(AgentState)
  .addNode("debug", debugNode)
  .addNode("agent", agentNode)
  .addEdge(START, "debug")
  .addEdge("debug", "agent")
  .addConditionalEdges("agent", shouldContinue);

const app = graph.compile();

const input = {
  messages: [new HumanMessage("Keep going until you are done.")],
};
```
- Run the graph with streaming so you can inspect each update as it happens. For loop debugging, streaming beats waiting for the final result because you can see the exact state at the moment the cycle repeats.

```typescript
// Stream low-level events first so each iteration is visible as it happens.
for await (const event of app.streamEvents(input, { version: "v2" })) {
  if (event.event === "on_chain_start" || event.event === "on_chat_model_start") {
    console.log("START", event.name);
  } else if (event.event === "on_chain_end" || event.event === "on_chat_model_end") {
    console.log("EVENT", event.name);
  }
}

// Then invoke to capture the final accumulated state.
// (Note: this is a second, separate run of the graph.)
const result = await app.invoke(input);
console.log("FINAL RESULT");
console.dir(result, { depth: null });
```
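If `streamEvents` is too noisy, LangGraph's `stream` with `streamMode: "updates"` emits one chunk per node step, shaped as `{ nodeName: partialStateUpdate }`. This sketch shows how to consume that shape; a local async generator stands in for the compiled graph so the example runs without an API key:

```typescript
// Simulated output of app.stream(input, { streamMode: "updates" }):
// one object per step, keyed by the node that just ran.
async function* fakeUpdates() {
  yield { debug: {} };
  yield { agent: { iterations: 1 } };
  yield { agent: { iterations: 2 } };
}

const seen: string[] = [];
for await (const chunk of fakeUpdates()) {
  for (const [node, update] of Object.entries(chunk)) {
    seen.push(node);
    console.log(`node=${node}`, update);
  }
}
console.log(seen.join(","));
```

The node-name sequence (`seen` here) is often the fastest way to spot a cycle: a repeating `agent, debug, agent, debug, ...` pattern with unchanged updates means your router is the problem, not the model.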
## Testing It

Run the file with `npx tsx your-file.ts`. You should see repeated `DEBUG STATE` logs, followed by either an `END` transition (the model said "done") or termination after three iterations via the guardrail in `shouldContinue`.

If it keeps looping past your expected stop condition, check three things first:

- Your reducer is not preserving stale messages or counters
- Your conditional edge actually returns `END`
- The model output format matches what your router expects

For deeper inspection, temporarily print the full `state.messages` array and compare each iteration. In production debugging, I also recommend adding an explicit `maxIterations` field in state so loops fail closed instead of burning tokens.
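A fail-closed guard can be as simple as carrying the budget in state and checking it in the router before the model runs again. This is a sketch of that idea (the names `GuardedState` and `guard` are illustrative, not LangGraph APIs):

```typescript
// Fail-closed loop guard: the router checks the budget before the model
// is invoked again, so a runaway loop stops instead of burning tokens.
interface GuardedState {
  iterations: number;
  maxIterations: number; // carried in state so callers can tune it per run
}

function guard(state: GuardedState): string {
  if (state.iterations >= state.maxIterations) {
    console.warn(`loop guard tripped after ${state.iterations} iterations`);
    return "__end__"; // LangGraph's END sentinel; use the imported END in real code
  }
  return "agent";
}

console.log(guard({ iterations: 2, maxIterations: 5 })); // "agent"
console.log(guard({ iterations: 5, maxIterations: 5 })); // "__end__"
```

LangGraph also accepts a `recursionLimit` in the run config and throws once it is exceeded, but an explicit in-state budget lets the graph end gracefully and return whatever partial result it has, instead of erroring.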
## Next Steps

- Add tool nodes and inspect how tool-call messages affect routing decisions
- Use LangGraph checkpointing to persist loop state across retries
- Add structured outputs so routing decisions do not depend on free-form text
## Keep learning

- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.