CrewAI Tutorial (TypeScript): deploying to AWS Lambda for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows how to package a TypeScript CrewAI agent as an AWS Lambda function, wire it to API Gateway, and keep the deployment production-safe. You need this when your agent should run on demand, stay stateless, and fit into a serverless cost model instead of sitting on a long-lived process.

What You'll Need

  • Node.js 20+
  • AWS account with permissions for:
    • Lambda
    • API Gateway
    • IAM role creation
    • CloudWatch Logs
  • AWS CLI configured locally
  • A CrewAI-compatible TypeScript project
  • OpenAI API key exported as OPENAI_API_KEY
  • These packages:
    • crewai
    • @aws-sdk/client-lambda (optional; not needed for the CLI-based deploy used here)
    • esbuild
    • typescript
    • @types/aws-lambda
  • A basic understanding of:
    • Lambda handler signatures
    • JSON event payloads
    • IAM execution roles

Step-by-Step

  1. Create a minimal TypeScript project and install dependencies. Keep the runtime small because Lambda cold starts get worse as your bundle grows.
mkdir crewai-lambda && cd crewai-lambda
npm init -y
npm install crewai
npm install -D typescript esbuild @types/aws-lambda @types/node
npx tsc --init
  2. Add a CrewAI agent that can run inside Lambda without any local state. The important part is to read configuration from environment variables and return plain JSON.
// src/agent.ts
import { Agent, Task, Crew } from "crewai";

export async function runCrew(prompt: string): Promise<string> {
  const agent = new Agent({
    role: "Compliance Analyst",
    goal: "Review the user's request and produce a concise risk assessment",
    backstory: "You analyze requests for operational and compliance risk in regulated environments.",
    verbose: false,
    llm: "gpt-4o-mini",
  });

  const task = new Task({
    description: `Analyze this request and summarize risks:\n\n${prompt}`,
    expectedOutput: "A short risk summary with actionable notes.",
    agent,
  });

  const crew = new Crew({
    agents: [agent],
    tasks: [task],
    verbose: false,
  });

  const result = await crew.kickoff();
  return String(result);
}
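The agent above pins its model inline; since this step's point is environment-driven configuration, a small loader can centralize those reads. A minimal sketch (CREW_MODEL and CREW_VERBOSE are illustrative variable names for this tutorial, not CrewAI conventions):

```typescript
// config.ts: reads agent settings from the environment with safe defaults.
// CREW_MODEL and CREW_VERBOSE are illustrative names, not CrewAI settings.
export interface AgentConfig {
  model: string;
  verbose: boolean;
}

export function loadAgentConfig(
  env: Record<string, string | undefined> = process.env
): AgentConfig {
  return {
    // Fall back to the model hardcoded in the agent above.
    model: env.CREW_MODEL ?? "gpt-4o-mini",
    // Only the exact string "true" enables verbose output.
    verbose: env.CREW_VERBOSE === "true",
  };
}
```

Passing the `env` map as a parameter (defaulting to `process.env`) keeps the loader unit-testable without mutating real environment variables.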
  3. Expose the agent through a Lambda handler. This handler accepts either API Gateway proxy events or direct test events, which makes local verification easier.
// src/handler.ts
import type { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { runCrew } from "./agent";

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  try {
    // Parse inside the try block so a malformed JSON body returns a handled
    // error response instead of an unhandled invocation failure.
    const body = event.body ? JSON.parse(event.body) : {};
    const prompt = body.prompt ?? "Summarize the operational risk of exposing an internal API.";

    const output = await runCrew(prompt);
    return {
      statusCode: 200,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ output }),
    };
  } catch (err) {
    console.error(err);
    return {
      statusCode: 500,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ error: "Agent execution failed" }),
    };
  }
};
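Before deploying, the handler's request/response contract can be sanity-checked offline. This sketch re-implements the same contract with an inline runCrew stub so it runs without network access or the crewai package installed:

```typescript
// local-check.ts: standalone sketch for verifying the handler contract
// offline. The inline runCrew stub stands in for the real agent.
type HandlerResult = {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
};

// Stub: the real implementation lives in src/agent.ts.
async function runCrew(prompt: string): Promise<string> {
  return `stub risk summary for: ${prompt}`;
}

// Mirrors the handler's logic against the stub.
async function localHandler(event: { body?: string }): Promise<HandlerResult> {
  try {
    const body = event.body ? JSON.parse(event.body) : {};
    const prompt =
      body.prompt ?? "Summarize the operational risk of exposing an internal API.";
    const output = await runCrew(prompt);
    return {
      statusCode: 200,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ output }),
    };
  } catch {
    return {
      statusCode: 500,
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ error: "Agent execution failed" }),
    };
  }
}
```

Calling `localHandler({ body: JSON.stringify({ prompt: "test" }) })` should produce a 200 with an `output` field, while a malformed body falls through to the 500 branch.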
  4. Add build scripts and compile to a single Lambda-friendly bundle. For serverless deployment, bundling matters more than TypeScript purity because Lambda wants one deployable artifact.
{
  "name": "crewai-lambda",
  "private": true,
  "type": "module",
  "scripts": {
    "build": "esbuild src/handler.ts --bundle --platform=node --target=node20 --format=esm --outfile=dist/index.mjs",
    "zip": "cd dist && zip lambda.zip index.mjs"
  }
}
  5. Deploy the bundle to Lambda with an execution role that can write logs. Use an environment variable for your API key; do not bake secrets into the artifact.
npm run build
npm run zip

aws iam create-role \
  --role-name crewai-lambda-role \
  --assume-role-policy-document '{
    "Version":"2012-10-17",
    "Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]
  }'

aws iam attach-role-policy \
  --role-name crewai-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

ROLE_ARN=$(aws iam get-role --role-name crewai-lambda-role --query 'Role.Arn' --output text)

aws lambda create-function \
  --function-name crewai-agent-ts \
  --runtime nodejs20.x \
  --handler index.handler \
  --role "$ROLE_ARN" \
  --zip-file fileb://dist/lambda.zip \
  --environment Variables="{OPENAI_API_KEY=$OPENAI_API_KEY}"
  6. Put API Gateway in front of it if you want HTTP access. This gives you a clean POST endpoint that can trigger the agent from your app or internal tooling. Note that the route only serves traffic once API Gateway has permission to invoke the function and a stage exists.
API_ID=$(aws apigatewayv2 create-api \
  --name crewai-agent-api \
  --protocol-type HTTP \
  --query 'ApiId' --output text)

INTEGRATION_ID=$(aws apigatewayv2 create-integration \
  --api-id "$API_ID" \
  --integration-type AWS_PROXY \
  --integration-uri "$(aws lambda get-function --function-name crewai-agent-ts --query 'Configuration.FunctionArn' --output text)" \
  --payload-format-version '2.0' \
  --query 'IntegrationId' --output text)

aws apigatewayv2 create-route \
  --api-id "$API_ID" \
  --route-key 'POST /run' \
  --target "integrations/$INTEGRATION_ID"

aws lambda add-permission \
  --function-name crewai-agent-ts \
  --statement-id apigw-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:$(aws configure get region):$(aws sts get-caller-identity --query Account --output text):$API_ID/*/*/run"

aws apigatewayv2 create-stage \
  --api-id "$API_ID" \
  --stage-name '$default' \
  --auto-deploy

Testing It

First, invoke the Lambda directly with a test payload and confirm you get back JSON with an output field. If something fails, check CloudWatch Logs for the full stack trace; most failures come down to missing environment variables, a bad bundle, or IAM permissions.

If you're using API Gateway, send a POST request to /run with a JSON body like {"prompt":"Assess the risk of storing customer PII in logs."}. You should get a response within your Lambda timeout window. If latency is too high, reduce prompt size or move nonessential work out of the request path.

For production validation, test three cases:

  • Normal prompt input
  • Empty body fallback behavior
  • Failure path when OPENAI_API_KEY is missing

That tells you whether your handler is robust enough for real traffic instead of just passing a happy-path demo.
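All three cases can be unit-tested without AWS by extracting the fallback logic into pure functions. A hypothetical sketch (safeResolvePrompt and assertApiKey are names introduced here, not part of the handler above):

```typescript
// prompt-resolve.ts: hypothetical helpers that isolate the handler's
// fallback logic so all three test cases run offline.
const DEFAULT_PROMPT =
  "Summarize the operational risk of exposing an internal API.";

export function safeResolvePrompt(rawBody: string | undefined): {
  prompt: string;
  parseError: boolean;
} {
  // Case 2: empty body falls back to the default prompt.
  if (!rawBody) return { prompt: DEFAULT_PROMPT, parseError: false };
  try {
    const parsed = JSON.parse(rawBody);
    const prompt =
      typeof parsed.prompt === "string" && parsed.prompt.trim()
        ? parsed.prompt
        : DEFAULT_PROMPT;
    return { prompt, parseError: false }; // Case 1: normal prompt input.
  } catch {
    // Malformed JSON is flagged so the handler can choose a 400 response.
    return { prompt: DEFAULT_PROMPT, parseError: true };
  }
}

// Case 3: fail fast when the key is absent instead of erroring mid-run.
export function assertApiKey(
  env: Record<string, string | undefined> = process.env
): void {
  if (!env.OPENAI_API_KEY) throw new Error("OPENAI_API_KEY is not set");
}
```

Calling assertApiKey() at module load (rather than per request) surfaces a missing key on the first invocation's logs instead of deep inside an agent run.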

Next Steps

  • Add structured logging with request IDs so you can trace individual agent runs in CloudWatch.
  • Split orchestration from inference by moving tool-heavy workflows into Step Functions when execution time gets unpredictable.
  • Add provisioned concurrency if cold starts are hurting your p95 latency on customer-facing endpoints.
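The structured-logging suggestion can be as small as one helper that emits a JSON line keyed by the Lambda request ID. A minimal sketch (field names are illustrative, not a CloudWatch requirement):

```typescript
// log.ts: minimal structured-logging sketch. CloudWatch ingests each
// console.log call as one log event, so emitting JSON makes individual
// agent runs filterable by requestId in Logs Insights.
export function logEvent(
  requestId: string,
  level: "info" | "error",
  message: string,
  extra: Record<string, unknown> = {}
): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    requestId,
    level,
    message,
    ...extra,
  });
  console.log(line);
  return line;
}
```

Inside the handler, the second handler argument supplies the ID via context.awsRequestId, e.g. logEvent(context.awsRequestId, "info", "agent run started").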

By Cyprian Aarons, AI Consultant at Topiax.
