LangGraph Tutorial (Python): deploying with Docker for advanced developers
This tutorial shows you how to package a LangGraph Python app into a Docker image, run it locally, and expose it in a way that fits real deployment workflows. You need this when your graph is no longer a notebook experiment and you want repeatable builds, environment isolation, and a container you can ship to staging or production.
What You'll Need
- Python 3.11+
- Docker Engine installed locally
- A LangGraph project with:
  - `langgraph`
  - `langchain-openai`
  - `langchain-core`
  - `fastapi`
  - `uvicorn[standard]`
- An OpenAI API key exported as `OPENAI_API_KEY`
- A basic understanding of:
  - `StateGraph`
  - nodes, edges, and conditional routing
  - environment variables in Python
- Optional but useful:
  - `python-dotenv` for local development
  - `docker compose` for multi-service setups
Step-by-Step
Step 1: Start with a small but real LangGraph app. This example uses a typed state and a single LLM node. Keep the graph logic in plain Python so the same code runs locally and inside Docker.
```python
from typing import Annotated, TypedDict

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    # add_messages appends new messages to the list instead of replacing it
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def assistant(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


graph_builder = StateGraph(State)
graph_builder.add_node("assistant", assistant)
graph_builder.add_edge(START, "assistant")
graph_builder.add_edge("assistant", END)
app = graph_builder.compile()

if __name__ == "__main__":
    result = app.invoke({"messages": [HumanMessage(content="Explain Docker for LangGraph")]})
    print(result["messages"][-1].content)
```
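Because `ChatOpenAI` reads `OPENAI_API_KEY` from the environment implicitly, a missing key only surfaces as an authentication error on the first request. A small fail-fast check at startup makes misconfiguration obvious both locally and inside a container; `require_env` below is a hypothetical helper name, not part of LangGraph or LangChain:

```python
import os


def require_env(name: str) -> str:
    """Return a required environment variable, or fail loudly at startup.

    Raising here beats a cryptic 401 from the OpenAI API on the first
    /chat request inside a container.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Export it locally or pass it to the "
            f"container with: docker run -e {name}=..."
        )
    return value


# Call this once before constructing ChatOpenAI, e.g.:
# require_env("OPENAI_API_KEY")
```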
Step 2: Add a minimal API layer so Docker has something useful to run. In production, you usually want an HTTP boundary around the graph rather than invoking it from a script entrypoint.
```python
# main.py
from typing import Annotated, TypedDict

from fastapi import FastAPI
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def assistant(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


graph_builder = StateGraph(State)
graph_builder.add_node("assistant", assistant)
graph_builder.add_edge(START, "assistant")
graph_builder.add_edge("assistant", END)
app_graph = graph_builder.compile()

api = FastAPI()


@api.post("/chat")
def chat(payload: dict):
    user_text = payload["message"]
    result = app_graph.invoke({"messages": [HumanMessage(content=user_text)]})
    return {"reply": result["messages"][-1].content}
```
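The handler above trusts `payload["message"]` blindly, so a malformed body surfaces as a 500 with a `KeyError`. A minimal sketch of input validation follows; `parse_chat_payload` is a name introduced here for illustration, and declaring a Pydantic model as the FastAPI parameter type is the more idiomatic fix:

```python
def parse_chat_payload(payload: dict) -> str:
    """Extract and validate the 'message' field from a /chat request body."""
    if not isinstance(payload, dict) or "message" not in payload:
        raise ValueError("body must be a JSON object with a 'message' field")
    message = payload["message"]
    if not isinstance(message, str) or not message.strip():
        raise ValueError("'message' must be a non-empty string")
    return message.strip()
```

Inside the endpoint, you would wrap the call and translate a `ValueError` into a 422 via `fastapi.HTTPException` instead of letting it bubble up as a 500.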
Step 3: Pin dependencies and keep the image small. For advanced deployments, avoid copying your whole working directory blindly; only install what the app needs and let Docker cache dependency layers properly.
```text
# requirements.txt
fastapi==0.115.0
uvicorn[standard]==0.30.6
langgraph==0.2.39
langchain-openai==0.1.25
langchain-core==0.2.43
```

```dockerfile
# Dockerfile
FROM python:3.11-slim

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# Install dependencies first so this layer stays cached
# until requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy only the application code the image actually needs
COPY main.py .

EXPOSE 8000
CMD ["uvicorn", "main:api", "--host", "0.0.0.0", "--port", "8000"]
```
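Even though the Dockerfile above only copies `main.py`, the whole directory is still sent to the Docker daemon as build context. A `.dockerignore` along these lines keeps the context small and keeps local state (especially `.env` files with secrets) out of builds; adjust the entries to your repo layout:

```
# .dockerignore
.git
__pycache__/
*.pyc
.env
.venv/
docker-compose.yml
```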
Step 4: Build and run the container with your API key injected at runtime. Do not bake secrets into the image; pass them through environment variables so the same artifact can move across environments.
```shell
docker build -t langgraph-docker-demo .

docker run --rm \
  -p 8000:8000 \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  langgraph-docker-demo
```
Step 5: If you want local developer ergonomics, use Compose so the runtime configuration lives next to your codebase. This is where you add ports, env files, health checks, and eventually more services like Redis or Postgres if your graph needs persistence.
```yaml
# docker-compose.yml
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
```
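The env files and health checks mentioned above can be sketched like this, assuming the service shape from this tutorial. The probe hits FastAPI's auto-generated `/docs` page with Python's standard library because the slim base image ships no `curl`; swap in a dedicated `/health` endpoint if you add one:

```yaml
# docker-compose.yml (extended sketch)
services:
  api:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env   # holds OPENAI_API_KEY=...
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/docs')"]
      interval: 30s
      timeout: 5s
      retries: 3
```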
Step 6: Keep one entrypoint for both local and containerized execution by separating app code from startup commands. That makes it easier to debug failures because the graph logic stays identical whether Uvicorn is started manually or by Docker.
```shell
export OPENAI_API_KEY="your-key-here"

# Start the same ASGI app Uvicorn runs inside the container
uvicorn main:api --host 127.0.0.1 --port 8000 &

curl -X POST http://127.0.0.1:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"Give me one sentence on LangGraph deployment"}'
```
Testing It
Hit the `/chat` endpoint with a simple JSON payload and confirm you get a model-generated reply back instead of a stack trace or an empty response. Then inspect the container logs to verify Uvicorn started on `0.0.0.0:8000`; binding to `0.0.0.0` is required for the port mapping to reach the process from outside the container.
If the request fails, check three things first: `OPENAI_API_KEY` is present in the container environment, the image contains the right package versions, and your code imports `ChatOpenAI` from `langchain_openai`. For more serious debugging, shell into the container with `docker exec -it <container_id> sh` and verify the files are where you expect them.
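Those manual checks can be folded into a small smoke-test script using only the standard library. The helper name and base URL below are illustrative choices, not part of the tutorial's API:

```python
import json
import urllib.request


def build_chat_request(base_url: str, message: str) -> urllib.request.Request:
    """Build the POST /chat request this tutorial's API expects."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Usage once the container is running:
#   req = build_chat_request("http://127.0.0.1:8000", "Ping from the smoke test")
#   with urllib.request.urlopen(req, timeout=60) as resp:
#       print(json.loads(resp.read())["reply"])
```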
Next Steps
- Add streaming responses with LangGraph events instead of returning only the final message.
- Persist state with Redis or Postgres so multi-turn conversations survive container restarts.
- Split your graph into separate nodes for retrieval, tool use, and policy checks before shipping to production.
Keep learning
- The complete AI Agents Roadmap: my full 8-step breakdown
- Free: The AI Agent Starter Kit (PDF checklist + starter code)
- Work with me: I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.