CrewAI Tutorial (Python): adding memory to agents for advanced developers

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to add persistent memory to CrewAI agents in Python so they can retain useful context across tasks and sessions. You need this when your agent has to remember customer preferences, prior case details, or previous tool outputs instead of treating every run like a blank slate.

What You'll Need

  • Python 3.10+
  • crewai
  • crewai-tools
  • python-dotenv
  • OpenAI API key set as OPENAI_API_KEY
  • A local project where you can run and edit Python files
  • Optional but recommended:
    • chromadb for persistent long-term memory
    • embedchain if you want richer retrieval patterns later

Install the basics:

pip install crewai crewai-tools python-dotenv chromadb

Step-by-Step

  1. Start with a clean CrewAI setup and load your API key from environment variables. For production code, keep secrets out of source files and use .env locally.
import os
from dotenv import load_dotenv

load_dotenv()

if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY is not set")

print("Environment ready")
  2. Define the agent. CrewAI enables memory at the crew level, but the agent still needs clear instructions so it knows how to use remembered context instead of hallucinating it.
from crewai import Agent

support_agent = Agent(
    role="Customer Support Specialist",
    goal="Help users based on current input and remembered context",
    backstory=(
        "You support banking customers and should remember prior preferences, "
        "case details, and constraints when relevant."
    ),
    verbose=True,
    allow_delegation=False,
)
  3. Create tasks that benefit from memory. The first task stores context, and the second task reuses it without forcing the user to repeat themselves.
from crewai import Task

collect_profile = Task(
    description=(
        "Ask for and summarize the user's preferred communication channel, "
        "risk tolerance, and account type."
    ),
    expected_output="A concise customer profile summary.",
    agent=support_agent,
)

follow_up = Task(
    description=(
        "Use the stored profile to recommend the next best support action "
        "without asking the user to repeat their preferences."
    ),
    expected_output="A recommendation that references remembered context.",
    agent=support_agent,
)
  4. Wire the agent into a crew with memory enabled. Setting memory=True activates short-term, long-term, and entity memory, which is what actually gives you continuity across tasks within a run.
from crewai import Crew, Process

crew = Crew(
    agents=[support_agent],
    tasks=[collect_profile, follow_up],
    process=Process.sequential,
    memory=True,
    verbose=True,
)
  5. Run the crew with a shared input payload. In real systems, this payload would come from your app layer, ticketing system, or CRM session state.
result = crew.kickoff(
    inputs={
        "customer_name": "Amina",
        "preferred_channel": "email",
        "risk_tolerance": "low",
        "account_type": "savings",
    }
)

print("\n=== Final Result ===\n")
print(result)
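If your inputs come from a CRM session, it helps to validate and narrow the record before handing it to kickoff. Here is a minimal sketch; the helper name and CRM field names are illustrative, not part of CrewAI:

```python
# Hypothetical mapping from a CRM session record to crew inputs.
# Adapt the field names to your own CRM schema.

def build_crew_inputs(session: dict) -> dict:
    required = ("customer_name", "preferred_channel", "risk_tolerance", "account_type")
    missing = [key for key in required if key not in session]
    if missing:
        raise KeyError(f"Session is missing required fields: {missing}")
    # Pass through only the fields the tasks actually reference.
    return {key: session[key] for key in required}

session_record = {
    "customer_name": "Amina",
    "preferred_channel": "email",
    "risk_tolerance": "low",
    "account_type": "savings",
    "internal_id": "crm-4411",  # extra CRM fields are dropped deliberately
}

inputs = build_crew_inputs(session_record)
print(inputs)
```

Dropping unreferenced fields keeps internal identifiers out of the prompt context, which matters once memory starts persisting what the crew sees.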
  6. If you want stronger persistence across runs, configure an explicit storage backend. For advanced use cases, this is where you move from “memory during a single process” to “memory across sessions.”
from crewai import Crew, Process
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Pass long-term memory at construction time and back it with a
# SQLite file so stored context survives process restarts.
persistent_crew = Crew(
    agents=[support_agent],
    tasks=[collect_profile, follow_up],
    process=Process.sequential,
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="./crew_memory.db")
    ),
)

print("Persistent memory configured")

Testing It

Run the script twice with slightly different inputs and check whether the second run produces responses that reflect earlier context. You should see the agent avoid asking for already-known details if your task wording encourages reuse of memory.

If you are using persistence correctly, stop the process and rerun it with the same customer identifier or session key in your app layer. The output should remain consistent with previously stored preferences instead of resetting completely.

Watch for two failure modes: vague prompts that never tell the agent what to remember, and overly broad memory that pulls in irrelevant history. In production, keep memory scoped by user ID, case ID, or policy number so one customer’s context never bleeds into another’s.
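One simple way to enforce that scoping is to give each customer their own long-term memory database file. A hypothetical helper (names are illustrative) might look like this:

```python
import re
from pathlib import Path

def memory_db_path(customer_id: str, base_dir: str = "./memory") -> Path:
    # Hypothetical helper: one SQLite file per customer keeps contexts isolated.
    # Reject IDs that could escape the base directory or collide after cleanup.
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,64}", customer_id):
        raise ValueError(f"unsafe customer id: {customer_id!r}")
    return Path(base_dir) / f"{customer_id}.db"

print(memory_db_path("amina-001"))
```

You would then pass the resulting path as the db_path of your long-term memory storage, so each customer's crew reads and writes only its own history.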

Next Steps

  • Add a vector store-backed retriever so agents can recall structured case notes from past interactions.
  • Combine CrewAI memory with tools that read from CRM or policy systems before generating answers.
  • Add per-user session keys and retention rules so memory stays compliant in regulated environments.
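To make the retriever idea concrete before wiring in a real vector store such as chromadb, here is a toy stand-in that ranks case notes by bag-of-words cosine similarity using only the standard library. The notes and function names are illustrative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a vector
    # store such as chromadb with proper dense embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

case_notes = [
    "Customer prefers email contact and has a savings account",
    "Customer reported a blocked debit card last month",
    "Customer asked about low-risk investment options",
]

def recall(query: str, notes: list[str], k: int = 1) -> list[str]:
    # Return the k notes most similar to the query.
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:k]

print(recall("preferred email contact channel", case_notes))
```

Swapping this toy for a persistent vector store keeps the same shape: store notes at write time, recall the top-k matches at read time, and feed them to the agent as context.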

Keep learning

By Cyprian Aarons, AI Consultant at Topiax.
