CrewAI Tutorial (TypeScript): implementing guardrails for intermediate developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add guardrails to a CrewAI workflow in TypeScript so agent output is validated before it reaches downstream systems. You need this when an agent can return malformed JSON, unsafe content, or responses that violate business rules and you want to fail fast instead of shipping bad data.

What You'll Need

  • Node.js 18+
  • A TypeScript project with ts-node or a build step
  • @langchain/openai
  • zod
  • dotenv
  • A valid OpenAI API key in OPENAI_API_KEY
  • A CrewAI-style agent setup in your project (the examples below call @langchain/openai directly, so no CrewAI package is strictly required to follow along)
  • Basic familiarity with agents, tasks, and crews

Step-by-Step

  1. Start by installing the packages and wiring environment variables. I’m using Zod for schema validation because guardrails should be explicit and deterministic, not “best effort.”
npm install @langchain/openai zod dotenv
npm install -D typescript ts-node @types/node
# .env
OPENAI_API_KEY=your_openai_key_here
  2. Define the guardrail as a reusable validator. This example enforces a strict JSON shape for a customer support summary, which is the kind of thing you want before saving output into a CRM or ticketing system.
import { z } from "zod";

export const SupportSummarySchema = z.object({
  customerId: z.string().min(1),
  sentiment: z.enum(["positive", "neutral", "negative"]),
  issueType: z.string().min(1),
  nextAction: z.string().min(1),
});

export type SupportSummary = z.infer<typeof SupportSummarySchema>;

export function validateSupportSummary(input: unknown): SupportSummary {
  return SupportSummarySchema.parse(input);
}
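Under the hood, SupportSummarySchema.parse enforces exactly the checks you would otherwise write by hand. As a dependency-free illustration of what the schema is doing (not a replacement for Zod), the same guardrail as a plain type guard might look like this:

```typescript
// Hypothetical hand-rolled equivalent of the Zod guardrail above,
// shown only to make the enforced contract explicit.
type Sentiment = "positive" | "neutral" | "negative";

interface SupportSummary {
  customerId: string;
  sentiment: Sentiment;
  issueType: string;
  nextAction: string;
}

const SENTIMENTS: readonly Sentiment[] = ["positive", "neutral", "negative"];

function isSupportSummary(input: unknown): input is SupportSummary {
  if (typeof input !== "object" || input === null) return false;
  const record = input as Record<string, unknown>;
  return (
    typeof record.customerId === "string" && record.customerId.length > 0 &&
    SENTIMENTS.includes(record.sentiment as Sentiment) &&
    typeof record.issueType === "string" && record.issueType.length > 0 &&
    typeof record.nextAction === "string" && record.nextAction.length > 0
  );
}
```

In practice, prefer the Zod version: it gives you structured error messages for free, which matter once you start logging or reprompting on failures.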
  3. Create an agent and task that are constrained to emit structured output. The important part here is that the agent is instructed to produce JSON only, while your code still treats the model output as untrusted until the validator passes.
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const prompt = `
Return ONLY valid JSON matching this schema:
{
  "customerId": "string",
  "sentiment": "positive" | "neutral" | "negative",
  "issueType": "string",
  "nextAction": "string"
}

Customer note:
Customer 123 says their card was declined twice and they want it reviewed.
`;

export async function generateRawSummary(): Promise<string> {
  const response = await llm.invoke(prompt);
  return typeof response.content === "string"
    ? response.content
    : JSON.stringify(response.content);
}
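One failure mode worth anticipating at this step: even with a "JSON only" instruction, models sometimes wrap their output in markdown code fences. A small pre-parse normalizer can absorb that drift without trusting the payload itself. This is a sketch, and stripJsonFences is my own name, not a library function:

```typescript
// Hypothetical helper: remove a ```json ... ``` or ``` ... ``` wrapper
// if present, otherwise return the trimmed input unchanged. The payload
// is still untrusted; validation happens after JSON.parse.
function stripJsonFences(raw: string): string {
  const trimmed = raw.trim();
  const fenced = trimmed.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  return fenced ? fenced[1] : trimmed;
}
```

Call it on the return value of generateRawSummary before JSON.parse; it is a normalization step, not a substitute for the schema check.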
  4. Add the guardrail layer between generation and consumption. This is where the workflow becomes production-safe: parse the model output, validate it, and reject anything that does not match the contract.
import { validateSupportSummary } from "./guardrail";
import { generateRawSummary } from "./agent";

async function main() {
  const raw = await generateRawSummary();
  const parsed = JSON.parse(raw);
  const summary = validateSupportSummary(parsed);

  console.log("Validated summary:", summary);
}

main().catch((error) => {
  console.error("Guardrail failed:", error);
  process.exit(1);
});
  5. If you’re using a CrewAI-style multi-step flow, apply the same pattern at every boundary where data changes hands. The rule is simple: agents can suggest; validators decide.
type GuardrailResult<T> =
  | { ok: true; value: T }
  | { ok: false; error: string };

function safeParseJson(input: string): GuardrailResult<unknown> {
  try {
    return { ok: true, value: JSON.parse(input) };
  } catch (error) {
    return {
      ok: false,
      error: error instanceof Error ? error.message : "Invalid JSON",
    };
  }
}

async function runPipeline() {
  const raw = await generateRawSummary();
  const parsed = safeParseJson(raw);

  if (!parsed.ok) throw new Error(parsed.error);

  const validated = validateSupportSummary(parsed.value);
  return validated;
}
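To make that boundary pattern reusable, JSON parsing and schema validation can be composed into one function that returns a GuardrailResult. A minimal sketch, with the result type repeated so the snippet stands alone (jsonGuardrail is an illustrative name, not a library API):

```typescript
// Compose "parse the string" and "validate the shape" into a single
// boundary check that never throws; callers branch on the result.
type GuardrailResult<T> =
  | { ok: true; value: T }
  | { ok: false; error: string };

function jsonGuardrail<T>(
  validate: (input: unknown) => T, // throws on contract violation
): (raw: string) => GuardrailResult<T> {
  return (raw) => {
    try {
      return { ok: true, value: validate(JSON.parse(raw)) };
    } catch (error) {
      return {
        ok: false,
        error: error instanceof Error ? error.message : String(error),
      };
    }
  };
}
```

With this in place, each handoff in a multi-agent flow becomes a one-liner, for example `const checkSummary = jsonGuardrail(validateSupportSummary);`.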

Testing It

Run the script with a valid prompt first and confirm you get a typed object back instead of raw text. Then deliberately break the schema by changing one field name in the prompt, such as returning customer_id instead of customerId, and verify that Zod throws before anything downstream runs.

You should also test malformed JSON by forcing the model to return prose instead of structured output. That failure path matters because most real incidents come from format drift, not just wrong content.

If you have an API route or queue consumer downstream, point it at this pipeline and confirm it only receives validated objects. That’s the real check: no unvalidated payload should cross your service boundary.
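These failure paths can be exercised without a live model call by stubbing the raw output. A dependency-free sketch, with illustrative names (tryParse and the case table are mine, not part of the tutorial's pipeline):

```typescript
// Simulate the three outcomes described above: valid JSON, prose
// (format drift), and JSON that parses but violates the schema.
function tryParse(input: string): { ok: boolean; error?: string } {
  try {
    JSON.parse(input);
    return { ok: true };
  } catch (error) {
    return {
      ok: false,
      error: error instanceof Error ? error.message : "Invalid JSON",
    };
  }
}

const cases = [
  { name: "valid JSON parses", raw: '{"customerId":"123"}', expectOk: true },
  { name: "prose fails fast", raw: "Sure! Here is the summary you asked for.", expectOk: false },
  { name: "renamed field still parses (schema must catch it)", raw: '{"customer_id":"123"}', expectOk: true },
];

for (const c of cases) {
  const result = tryParse(c.raw);
  console.log(`${c.name}: ${result.ok === c.expectOk ? "PASS" : "FAIL"}`);
}
```

The third case is the important one: `customer_id` is perfectly valid JSON, so only the schema validator can reject it. That is why parsing and validation are separate checks.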

Next Steps

  • Add retry logic that reprompts the model when validation fails
  • Use stricter schemas with enums, regexes, and nested objects for regulated workflows
  • Extend this pattern to multi-agent handoffs so each agent validates incoming and outgoing payloads

By Cyprian Aarons, AI Consultant at Topiax.