LangGraph Tutorial (Python): deploying with Docker for intermediate developers

By Cyprian Aarons
Updated 2026-04-21

This tutorial shows you how to package a LangGraph Python app into a Docker image and run it consistently across local machines, CI, and servers. You need this when your graph works locally but you want a repeatable deployment path with pinned dependencies, environment variables, and a clean runtime.

What You'll Need

  • Python 3.11 or newer
  • Docker Desktop or Docker Engine installed
  • An OpenAI API key set as OPENAI_API_KEY
  • These Python packages:
    • langgraph
    • langchain-openai
    • python-dotenv if you want local .env loading
  • Basic familiarity with:
    • StateGraph
    • nodes, edges, and START / END
    • running Python apps from the command line

Step-by-Step

  1. Start with a minimal LangGraph app that can run as a normal Python script. Keep the graph small and deterministic so you can verify Docker behavior before adding more nodes.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI


class State(TypedDict):
    """Shared state passed between nodes."""
    question: str
    answer: str


llm = ChatOpenAI(model="gpt-4o-mini")


def answer_node(state: State) -> dict:
    # Each node receives the current state and returns a partial update.
    response = llm.invoke(state["question"])
    return {"answer": response.content}


builder = StateGraph(State)
builder.add_node("answer", answer_node)
builder.add_edge(START, "answer")
builder.add_edge("answer", END)

graph = builder.compile()

if __name__ == "__main__":
    result = graph.invoke({"question": "What is LangGraph?"})
    print(result["answer"])
  2. Add a dependency file so Docker installs the same versions every time. This avoids the common mistake of building images that work once and then break because a transitive dependency changed.
langgraph>=0.2.0
langchain-openai>=0.1.20
python-dotenv>=1.0.1
  3. Create a Dockerfile that copies your app, installs dependencies, and runs the script. Use a slim Python base image and keep the image focused on runtime only.
FROM python:3.11-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

CMD ["python", "app.py"]
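A .dockerignore file next to the Dockerfile keeps the build context small and prevents local secrets and caches from being copied into the image. A minimal sketch (the exact entries depend on your project layout):

```
.env
.git/
.venv/
__pycache__/
*.pyc
```

Keeping .env out of the image matters most: a COPY that accidentally includes it would bake your API key into every layer of the published image.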
  4. Store secrets in environment variables, not in code or the image. For local development, you can export them in your shell; for production, inject them through your container runtime or orchestrator.
export OPENAI_API_KEY="your_api_key_here"
docker build -t langgraph-demo .
docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  langgraph-demo
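A small fail-fast check at startup turns a missing key into an obvious error instead of a confusing authentication failure deep inside the first LLM call. This is a sketch; require_env is an illustrative helper, not part of LangGraph (if you prefer, python-dotenv can load a local .env file before this check runs):

```python
import os


def require_env(name: str) -> str:
    """Return the value of a required environment variable, or fail loudly."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


if __name__ == "__main__":
    # Fails immediately inside the container if the key was not injected.
    require_env("OPENAI_API_KEY")
    print("OPENAI_API_KEY is set")
```

Calling this before building the graph means a misconfigured container exits with a clear message on the first line of logs.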
  5. If you want a cleaner production setup, separate configuration from logic and make the container entrypoint explicit. This gives you room to add multiple graphs, health checks, or an API layer later.
import os
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI


class State(TypedDict):
    question: str
    answer: str


def build_graph():
    llm = ChatOpenAI(
        model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
        api_key=os.environ["OPENAI_API_KEY"],
    )

    def answer_node(state: State) -> dict:
        response = llm.invoke(state["question"])
        return {"answer": response.content}

    builder = StateGraph(State)
    builder.add_node("answer", answer_node)
    builder.add_edge(START, "answer")
    builder.add_edge("answer", END)
    return builder.compile()


if __name__ == "__main__":
    graph = build_graph()
    print(graph.invoke({"question": "Give me one sentence about Docker."})["answer"])
  6. Build and run again with an explicit model override if needed. This pattern is useful when you want to promote the same image through dev, staging, and prod while changing only environment variables.
docker build -t langgraph-demo .
docker run --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e OPENAI_MODEL="gpt-4o-mini" \
  langgraph-demo
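The same env-only promotion pattern can be captured in a Compose file, so each environment keeps its own variables outside the image. A sketch assuming the langgraph-demo image built above (service and variable names are illustrative):

```yaml
services:
  langgraph-demo:
    image: langgraph-demo
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      OPENAI_MODEL: ${OPENAI_MODEL:-gpt-4o-mini}
```

Running docker compose up in dev, staging, and prod then uses the identical image, with only the host's environment variables differing.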

Testing It

Run the script locally first with python app.py to confirm the graph itself works before involving Docker. Then build the container and run it with docker run --rm ... to verify that dependency installation and environment variable injection are correct.

If the container fails immediately, check three things first: the API key is present inside the container, your package versions are compatible, and the base image has enough system libraries for any extra dependencies you added later. For this minimal setup, successful output should be a single natural-language answer printed to stdout.

A good next test is to change only OPENAI_MODEL at runtime and confirm the container still behaves correctly without rebuilding the image.

Next Steps

  • Wrap the graph in a FastAPI service so other systems can call it over HTTP.
  • Add structured outputs with Pydantic so downstream services get typed responses.
  • Move from docker run to Kubernetes or ECS once you need scaling, rollout control, and secrets management.

By Cyprian Aarons, AI Consultant at Topiax.
