LangGraph Tutorial (TypeScript): debugging agent loops for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to diagnose and fix agent loops in a LangGraph TypeScript app. You’ll build a graph that can get stuck, add the right debug signals, and then stop the loop with explicit state checks and bounded retries.

What You'll Need

  • Node.js 18+ and npm
  • A TypeScript project with "type": "module" or equivalent ESM support
  • Packages:
    • @langchain/langgraph
    • @langchain/openai
    • @langchain/core
    • zod
  • An OpenAI API key in OPENAI_API_KEY
  • Basic familiarity with LangGraph nodes, edges, and state reducers
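If you are starting from a fresh project, the dependencies can be installed along these lines (versions unpinned; the key value is a placeholder):

```shell
npm install @langchain/langgraph @langchain/openai @langchain/core zod
export OPENAI_API_KEY="sk-..."  # placeholder: use your real key
```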

Step-by-Step

  1. Start with a graph that can loop forever if the model keeps asking for another tool call. The point here is to reproduce the failure mode before you fix it.
import { ChatOpenAI } from "@langchain/openai";
import { Annotation, END, StateGraph } from "@langchain/langgraph";
import { AIMessage, BaseMessage, HumanMessage, ToolMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A stub balance tool bound to the model. Without bindTools, the model can
// never emit tool_calls, so the loop you are trying to reproduce is unreachable.
const checkBalance = tool(async () => "account balance is $1250", {
  name: "check_balance",
  description: "Look up the user's account balance.",
  schema: z.object({}),
});

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
}).bindTools([checkBalance]);

const State = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
});

async function agentNode(state: typeof State.State) {
  const response = await llm.invoke(state.messages);
  return { messages: [response] };
}

async function toolNode(state: typeof State.State) {
  const last = state.messages[state.messages.length - 1] as AIMessage;
  return {
    messages: [new ToolMessage({
      content: "Tool result: account balance is $1250",
      tool_call_id: last.tool_calls?.[0]?.id ?? "missing-tool-call-id",
    })],
  };
}
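As a quick aside, the reducer above is what makes each node's return value accumulate onto messages rather than replace it. Here is a dependency-free sketch of the same mechanics (plain strings stand in for message objects):

```typescript
// Stand-alone sketch of how the messages reducer accumulates node updates.
const appendReducer = (left: string[], right: string[]) => left.concat(right);

let history: string[] = [];                                      // default: () => []
history = appendReducer(history, ["human: check my balance"]);   // first update
history = appendReducer(history, ["ai: calling check_balance"]); // second update appends

console.log(history.length); // 2: updates accumulate instead of overwriting
```

This is also why a node should return only its *new* messages: whatever it returns gets concatenated onto the existing list.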
  2. Build the graph with a conditional edge that routes back to the agent whenever the assistant asks for a tool. This is where loops usually happen when the exit condition is too weak or missing.
const shouldContinue = (state: typeof State.State) => {
  const last = state.messages[state.messages.length - 1];
  if (last instanceof AIMessage && last.tool_calls && last.tool_calls.length > 0) {
    return "tool";
  }
  return END;
};

const graph = new StateGraph(State)
  .addNode("agent", agentNode)
  .addNode("tool", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue, {
    tool: "tool",
    [END]: END,
  })
  .addEdge("tool", "agent")
  .compile();
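To see why this exit condition is fragile, here is a dependency-free simulation (fakeAgent and fakeTool are illustrative stand-ins, not LangGraph APIs): if the model always requests another tool call, the router never returns END, and only an external cap stops the cycle.

```typescript
// Pure-TypeScript simulation of the routing logic above (no LLM, no LangGraph).
type Msg = { role: "ai" | "tool"; toolCalls: number };

const fakeAgent = (): Msg => ({ role: "ai", toolCalls: 1 }); // always wants a tool
const fakeTool = (): Msg => ({ role: "tool", toolCalls: 0 });

function runLoop(maxSteps: number): number {
  const messages: Msg[] = [];
  let steps = 0;
  while (steps < maxSteps) {
    messages.push(fakeAgent());
    steps++;
    const last = messages[messages.length - 1];
    if (!(last.role === "ai" && last.toolCalls > 0)) break; // same weak exit condition
    messages.push(fakeTool()); // the tool edge routes straight back to the agent
  }
  return steps;
}

console.log(runLoop(50)); // 50: the cap, not the condition, is what stopped it
```

The exit depends entirely on model behavior, which is exactly the property you want to remove.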
  3. Add traceable debug output so you can see exactly why the graph keeps cycling. In production, this is what saves you from guessing whether the bug is in routing, state shape, or model behavior.
function logState(label: string, state: typeof State.State) {
  const summary = state.messages.map((m) => ({
    type: m.constructor.name,
    content: "content" in m ? String((m as any).content).slice(0, 80) : "",
    toolCalls: (m as any).tool_calls?.length ?? 0,
  }));

  console.log(`\n=== ${label} ===`);
  console.log(JSON.stringify(summary, null, 2));
}

async function run() {
  const input = {
    messages: [new HumanMessage("Check my balance and keep asking until you are sure.")],
  };

  logState("input", input);

  // Stream the full state after every node so each agent -> tool cycle is visible.
  // recursionLimit makes LangGraph throw a GraphRecursionError instead of spinning forever.
  for await (const step of await graph.stream(input, { streamMode: "values", recursionLimit: 10 })) {
    logState("step", step);
  }
}
  4. Fix the loop by adding an explicit loop counter to state and capping retries. This is the most reliable pattern when you need deterministic termination instead of hoping the model eventually stops.
const LoopState = Annotation.Root({
  messages: Annotation<any[]>({
    reducer: (left, right) => left.concat(right),
    default: () => [],
  }),
  iterations: Annotation<number>({
    reducer: (_, right) => right,
    default: () => 0,
  }),
});

async function countedAgentNode(state: typeof LoopState.State) {
  const response = await llm.invoke(state.messages);
  return {
    messages: [response],
    iterations: state.iterations + 1,
  };
}

const boundedShouldContinue = (state: typeof LoopState.State) => {
  if (state.iterations >= 3) return END;
  const last = state.messages[state.messages.length - 1];
  if (last instanceof AIMessage && last.tool_calls?.length) return "tool";
  return END;
};
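The termination guarantee can be checked without any model in the loop. This sketch (boundedRun is illustrative, not a LangGraph API) shows that the counter alone forces an exit even if every single turn requests a tool:

```typescript
// Even an agent that asks for a tool on every turn stops at the cap.
function boundedRun(cap: number): { iterations: number; ended: boolean } {
  let iterations = 0;
  for (;;) {
    iterations++; // the agent node increments the counter on every pass
    if (iterations >= cap) return { iterations, ended: true }; // deterministic exit
    // otherwise the router would send us back through the tool node
  }
}

console.log(boundedRun(3)); // { iterations: 3, ended: true }
```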
  5. Run the bounded graph and inspect the final messages. If it still loops after this change, your problem is almost always one of these three things: incorrect message accumulation, bad conditional routing, or a tool node returning malformed ToolMessage data.
const boundedGraph = new StateGraph(LoopState)
  .addNode("agent", countedAgentNode)
  .addNode("tool", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", boundedShouldContinue, {
    tool: "tool",
    [END]: END,
  })
  .addEdge("tool", "agent")
  .compile();

async function main() {
  const result = await boundedGraph.invoke({
    messages: [new HumanMessage("What is my balance?")],
    iterations: 0,
  });

  console.log("\nFinal output:");
  console.log(result.messages.map((m) => m.constructor.name));
}

main().catch(console.error);

Testing It

Run the script with tsx, ts-node, or your normal build pipeline. First test with a prompt that should terminate quickly, then test with a prompt that tends to trigger repeated tool use.

Watch for three things in the logs:

  • The same node firing repeatedly without new information
  • tool_calls staying present on every assistant turn
  • iterations hitting your cap and ending cleanly

If the graph ends at END after three cycles and prints a finite message list, your loop guard works. If it still spins, inspect whether your reducer is appending duplicate messages or whether your conditional edge always returns "tool".
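The reducer-duplication failure mentioned above is easy to reproduce in isolation. In this sketch (plain strings stand in for messages), a node mistakenly returns the whole history instead of only its new message:

```typescript
const concatReducer = (left: string[], right: string[]) => left.concat(right);

let state: string[] = ["human: hi", "ai: hello"];
// Buggy node: returns the entire history plus its new message...
const buggyUpdate = [...state, "ai: hello again"];
// ...so the concat reducer appends the old messages a second time.
state = concatReducer(state, buggyUpdate);

console.log(state.length); // 5 instead of 3: earlier messages are now duplicated
```

In the logs this shows up as the message list growing faster than one message per turn, which can also confuse the router by re-surfacing stale tool_calls.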

Next Steps

  • Add structured tracing with LangSmith so you can inspect node transitions instead of relying on console logs.
  • Move from a single iterations cap to per-tool budgets, which is better for banking workflows where some tools are safe to retry and others are not.
  • Add schema validation on agent outputs so malformed assistant responses fail fast before they create routing bugs.
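A per-tool budget, as suggested above, can be as simple as a map from tool name to remaining calls. This is a hedged sketch (tool names and limits are illustrative, not part of any LangGraph API):

```typescript
// Hypothetical per-tool retry budgets for a banking workflow.
const budgets: Record<string, number> = { check_balance: 3, transfer_funds: 1 };
const used: Record<string, number> = {};

function mayCall(toolName: string): boolean {
  const count = used[toolName] ?? 0;
  if (count >= (budgets[toolName] ?? 0)) return false; // unknown tools get no budget
  used[toolName] = count + 1;
  return true;
}

const first = mayCall("transfer_funds");
const second = mayCall("transfer_funds");
console.log(first, second); // true false: a single-use budget blocks the retry
```

In a real graph, `used` would live in the annotated state (like iterations above) so the router can consult it, rather than in module scope.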

By Cyprian Aarons, AI Consultant at Topiax.
