CrewAI Tutorial (Python): adding memory to agents for beginners

By Cyprian Aarons · Updated 2026-04-21

This tutorial shows you how to give CrewAI agents memory in Python so they can remember prior context across tasks and conversations. You need this when a one-shot agent keeps forgetting user preferences, previous decisions, or case details between runs.

What You'll Need

  • Python 3.10+
  • A CrewAI project installed locally
  • An OpenAI API key
  • These packages:
    • crewai
    • crewai-tools
    • python-dotenv
  • A .env file with your API key:
    • OPENAI_API_KEY=your_key_here

Step-by-Step

  1. Start with a clean project and install the dependencies. If you already have a CrewAI app, just add the memory-related setup on top of it.
pip install crewai crewai-tools python-dotenv
  2. Create a minimal environment file and load it in Python. This keeps your API key out of source control and makes the script easy to run locally.
import os

from dotenv import load_dotenv

load_dotenv()
print("API key loaded:", bool(os.getenv("OPENAI_API_KEY")))
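If you want the script to fail fast rather than produce a confusing error deep inside an API call, you can check for required variables up front. This is a small sketch using only the standard library; `require_env` is an illustrative helper name, not part of CrewAI or python-dotenv:

```python
import os

def require_env(*names: str) -> list[str]:
    """Return the names of any required environment variables that are unset."""
    return [name for name in names if not os.getenv(name)]

missing = require_env("OPENAI_API_KEY")
print("Missing env vars:", missing or "none")
```

If anything is missing, you can exit early with a clear message instead of letting the SDK raise an authentication error mid-run.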
  3. Define an agent with memory enabled. In CrewAI, memory is turned on at the crew level, while the agent still needs clear instructions about what it should remember and how to use that context.
from crewai import Agent, Task, Crew, Process
from dotenv import load_dotenv

load_dotenv()

research_agent = Agent(
    role="Customer Support Assistant",
    goal="Help users consistently based on prior conversation context",
    backstory=(
        "You are a support assistant that remembers user preferences, "
        "previous issues, and resolved cases."
    ),
    verbose=True,
    allow_delegation=False,
)

task = Task(
    description=(
        # {user_message} is interpolated from the inputs passed to kickoff()
        "Answer the user's question: {user_message}\n"
        "Use any remembered context from previous interactions when relevant."
    ),
    expected_output="A helpful support response that uses memory when needed.",
    agent=research_agent,
)
  4. Build the crew with memory enabled. This is the part that actually gives your agents access to short-term and long-term context across executions.
crew = Crew(
    agents=[research_agent],
    tasks=[task],
    process=Process.sequential,
    memory=True,
    verbose=True,
)
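By default, CrewAI writes its memory files to a platform-specific data directory. Many recent CrewAI versions let you override the location via the CREWAI_STORAGE_DIR environment variable (check your version's documentation); pointing it at a project-local folder makes the stored memory easy to inspect and delete while you experiment. A minimal sketch:

```python
import os
from pathlib import Path

# Keep memory files in a visible, project-local folder instead of the
# default platform data directory. Set this before constructing the Crew
# so the memory backends pick it up.
storage_dir = Path("./crew_memory")
storage_dir.mkdir(exist_ok=True)
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir.resolve())
```

Deleting the `crew_memory` folder then gives you a clean slate between experiments.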
  5. Run the crew with a user prompt and inspect the result. To see memory in action, run the same script multiple times or change the prompt slightly while keeping related context.
result = crew.kickoff(
    inputs={
        "user_message": (
            "The customer prefers concise answers and previously asked "
            "about resetting their password."
        )
    }
)

print("\n=== RESULT ===\n")
print(result)
  6. If you want a more realistic beginner setup, wrap this in a reusable function and keep the same crew object for repeated calls during a session. That makes it easier to test whether memory is influencing later responses.
def ask_support(question: str):
    return crew.kickoff(inputs={"user_message": question})

first = ask_support("The customer likes short answers.")
second = ask_support("What should I say if they ask about account recovery?")

print(first)
print(second)

Testing It

Run the script once with a prompt that includes some user preference or case detail, then run it again with a follow-up question that depends on that detail. If memory is working, the second response should reflect earlier context instead of treating each run like a blank slate.

For example, tell the agent that the user prefers short answers, then ask for another reply in the same session. You should see responses stay consistent with that preference.

If you want stronger verification, keep verbose=True (as set on the Crew above) and watch the logs for memory-related retrieval behavior. The exact output depends on your CrewAI version, but you should see the agent using prior context rather than re-asking basic questions.
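For a crude automated check, you can scan the follow-up response for terms tied to the earlier context. This is a sketch, not a rigorous evaluation, and `mentions_context` is a hypothetical helper name, not a CrewAI API:

```python
def mentions_context(response: str, keywords: list[str]) -> bool:
    """Return True if the response references any of the given context keywords."""
    text = response.lower()
    return any(keyword.lower() in text for keyword in keywords)

# Example: after telling the crew the customer prefers short answers,
# check whether a follow-up reply acknowledges that preference.
reply = "Keeping this brief, since the customer prefers short answers: ..."
print(mentions_context(reply, ["short answers", "concise"]))  # → True
```

Keyword matching will miss paraphrases, so treat a failure as a prompt to read the log output, not as proof that memory is broken.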

Next Steps

  • Add tools like web search or database lookup so memory works alongside live data.
  • Explore persistent storage options for longer-lived conversational history.
  • Learn how to separate short-term task memory from long-term user profile memory for production use.
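To get a feel for that short-term/long-term split before wiring it into a framework, here is a framework-agnostic toy sketch in plain Python (this is not CrewAI's memory API): durable user facts survive across tasks, while per-task notes are cleared at each task boundary.

```python
class MemoryStore:
    """Toy illustration of separating durable user facts from per-task notes."""

    def __init__(self) -> None:
        self.long_term: dict[str, str] = {}   # survives across sessions
        self.short_term: list[str] = []       # cleared after each task

    def remember_fact(self, key: str, value: str) -> None:
        self.long_term[key] = value

    def note(self, text: str) -> None:
        self.short_term.append(text)

    def end_task(self) -> None:
        self.short_term.clear()

store = MemoryStore()
store.remember_fact("style", "concise answers")
store.note("asked about password reset")
store.end_task()
print(store.long_term)   # → {'style': 'concise answers'}
print(store.short_term)  # → []
```

Production systems back the long-term side with a database or vector store, but the boundary between the two kinds of memory works the same way.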


By Cyprian Aarons, AI Consultant at Topiax.
