AutoGen Tutorial (TypeScript): persisting agent state for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to save and reload an AutoGen agent’s state in TypeScript so a conversation can survive process restarts. You need this when you want long-running assistants, resumable workflows, or any agent that should keep context without relying on one live Node process.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • AutoGen packages:
    • @autogen/core
    • @autogen/openai
  • An OpenAI API key set in OPENAI_API_KEY
  • Basic familiarity with AutoGen agents, model clients, and message passing

Step-by-Step

  1. Create a small TypeScript project and install the packages.
    Keep this minimal so the persistence logic is easy to see.
npm init -y
npm install @autogen/core @autogen/openai
npm install -D typescript tsx @types/node
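If you run the scripts with tsx and ES modules, a minimal tsconfig.json along these lines is enough; treat it as a starting point and adjust to your project's conventions:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true
  }
}
```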
  2. Set up a model client and a persistent agent.
    The important part is that the agent keeps its internal state in memory during runtime, and we’ll serialize that state to disk ourselves.
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";

const modelClient = new OpenAIChatCompletionClient({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

export const agent = new AssistantAgent({
  name: "support_agent",
  modelClient,
  systemMessage: "You are a concise support assistant.",
});
  3. Add state save/load helpers using JSON files.
    For beginners, file-based persistence is the easiest way to understand the pattern before moving to Redis, Postgres, or blob storage.
import { readFile, writeFile } from "node:fs/promises";
import type { AssistantAgent } from "@autogen/core";

const STATE_FILE = "./agent-state.json";

export async function saveAgentState(agent: AssistantAgent) {
  const state = await agent.saveState();
  await writeFile(STATE_FILE, JSON.stringify(state, null, 2), "utf-8");
}

export async function loadAgentState(agent: AssistantAgent) {
  try {
    const raw = await readFile(STATE_FILE, "utf-8");
    const state = JSON.parse(raw);
    await agent.loadState(state);
    return true;
  } catch {
    // Missing or unreadable state file: treat it as a fresh start.
    return false;
  }
}
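One pitfall with writing the state file directly: if the process crashes mid-write, agent-state.json can be left truncated and unparseable. A common mitigation is to write to a temporary file first and then rename it into place, since a rename within the same directory is atomic on POSIX filesystems. This is a minimal sketch with a throwaway state object; `saveStateAtomically` and the file names are illustrative, not part of AutoGen:

```typescript
import { readFile, rename, writeFile } from "node:fs/promises";

// Write JSON to a temp file first, then rename it over the real file.
// Readers never observe a half-written state file this way.
export async function saveStateAtomically(
  file: string,
  state: unknown,
): Promise<void> {
  const tmp = `${file}.tmp`;
  await writeFile(tmp, JSON.stringify(state, null, 2), "utf-8");
  await rename(tmp, file);
}

// Quick demonstration with a placeholder state object.
const demoFile = "./demo-state.json";
await saveStateAtomically(demoFile, { turns: 3, user: "Amina" });
const roundTripped = JSON.parse(await readFile(demoFile, "utf-8"));
console.log(roundTripped.user); // prints "Amina"
```

You can swap this in for the plain writeFile call in saveAgentState without changing anything else.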
  4. Wire the agent into a runnable script that restores state before chatting.
    This is the key flow: start the process, load prior state if it exists, run one turn, then persist again after the response.
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";
import { readFile, writeFile } from "node:fs/promises";

const STATE_FILE = "./agent-state.json";

async function main() {
  const modelClient = new OpenAIChatCompletionClient({
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
  });

  const agent = new AssistantAgent({
    name: "support_agent",
    modelClient,
    systemMessage: "You are a concise support assistant.",
  });

  try {
    const raw = await readFile(STATE_FILE, "utf-8");
    await agent.loadState(JSON.parse(raw));
    console.log("Loaded previous agent state.");
  } catch {
    console.log("No saved state found. Starting fresh.");
  }

  const result = await agent.run([{ role: "user", content: "Remember my name is Amina." }]);
  console.log(result.messages.at(-1));

  const state = await agent.saveState();
  await writeFile(STATE_FILE, JSON.stringify(state, null, 2), "utf-8");
}

main().catch(console.error);
  5. Add a second run to prove the memory survives restart.
    After the first execution writes state to disk, run the same script again with a follow-up prompt and confirm the agent still has access to prior context.
import { AssistantAgent } from "@autogen/core";
import { OpenAIChatCompletionClient } from "@autogen/openai";
import { readFile, writeFile } from "node:fs/promises";

const STATE_FILE = "./agent-state.json";

async function main() {
  const modelClient = new OpenAIChatCompletionClient({
    model: "gpt-4o-mini",
    apiKey: process.env.OPENAI_API_KEY,
  });

  const agent = new AssistantAgent({
    name: "support_agent",
    modelClient,
    systemMessage: "You are a concise support assistant.",
  });

  try {
    const raw = await readFile(STATE_FILE, "utf-8");
    await agent.loadState(JSON.parse(raw));
  } catch {
    // No saved state yet; start fresh.
  }

  const result = await agent.run([{ role: "user", content: "What is my name?" }]);
  console.log(result.messages.at(-1));

  const state = await agent.saveState();
  await writeFile(STATE_FILE, JSON.stringify(state, null, 2), "utf-8");
}

main().catch(console.error);

Testing It

Run the script once with a message like “Remember my name is Amina.” Then run it again with “What is my name?” and check whether the response uses the stored context. If it does not, inspect whether agent.saveState() actually wrote agent-state.json and whether agent.loadState() runs before your next turn.

A good sanity check is deleting agent-state.json and confirming the assistant starts cleanly again. That tells you your persistence layer is really controlling continuity instead of some hidden local cache.

Next Steps

  • Move persistence from a flat file to Redis or Postgres so multiple app instances can share state.
  • Persist per-user conversation state by keying storage on user ID or session ID.
  • Add versioning to your saved state so you can migrate older conversations safely when your agent schema changes.
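The versioning idea can be sketched as a small envelope around whatever the agent's save call returns. Everything here is hypothetical scaffolding for illustration: `StateEnvelope`, the version numbers, and the example field rename are not part of AutoGen.

```typescript
// Hypothetical versioned wrapper around the agent's serialized state.
interface StateEnvelope {
  version: number;
  state: Record<string, unknown>;
}

const CURRENT_VERSION = 2;

// Wrap raw state before writing it to disk.
export function wrapState(state: Record<string, unknown>): StateEnvelope {
  return { version: CURRENT_VERSION, state };
}

// Upgrade older envelopes step by step until they reach the current version.
export function migrate(envelope: StateEnvelope): StateEnvelope {
  let { version, state } = envelope;
  if (version === 1) {
    // Example migration: pretend v1 stored the system prompt under "prompt"
    // and v2 renamed it to "systemMessage".
    const { prompt, ...rest } = state;
    state = { ...rest, systemMessage: prompt };
    version = 2;
  }
  return { version, state };
}

const old: StateEnvelope = { version: 1, state: { prompt: "Be concise." } };
const upgraded = migrate(old);
console.log(upgraded.version, upgraded.state.systemMessage); // prints: 2 Be concise.
```

Loading then becomes: read the file, run migrate until the envelope is current, and pass envelope.state to the agent's load call.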


By Cyprian Aarons, AI Consultant at Topiax.
