How to Fix 'async event loop error in production' in LangGraph (Python)
async event loop error in production in LangGraph usually means you’re trying to run async graph code inside an environment that already has an event loop, or you’re mixing sync and async APIs incorrectly. In practice, this shows up in FastAPI, Jupyter, background workers, or when a graph node calls asyncio.run() from inside an already-running loop.
The common symptoms are errors like:
- `RuntimeError: asyncio.run() cannot be called from a running event loop`
- `RuntimeError: This event loop is already running`
- `TypeError: object dict can't be used in 'await' expression`
- LangGraph execution hanging after `graph.invoke()` or `graph.ainvoke()`
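The nested-loop failure is easy to reproduce with plain asyncio, no LangGraph required. A minimal sketch:

```python
import asyncio

async def main():
    try:
        # Starting a second event loop from inside a running one fails
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as e:
        return str(e)

print(asyncio.run(main()))
# asyncio.run() cannot be called from a running event loop
```

Any framework that already owns a loop (FastAPI, Jupyter) puts your code in the position of `main()` here.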
The Most Common Cause
The #1 cause is calling asyncio.run() inside code that is already async, or using the sync LangGraph API from async application code.
This is the pattern that breaks in production:
| Broken pattern | Fixed pattern |
|---|---|
| Calls `asyncio.run()` inside a request handler or notebook | Uses `await graph.ainvoke(...)` |
| Mixes `graph.invoke()` with async nodes | Keeps the whole path async |
| Wraps async node logic in sync wrappers | Makes nodes truly async |
Broken code
```python
import asyncio
from langgraph.graph import StateGraph, START, END

async def fetch_data(state):
    return {"result": "ok"}

builder = StateGraph(dict)
builder.add_node("fetch_data", fetch_data)
builder.add_edge(START, "fetch_data")
builder.add_edge("fetch_data", END)
graph = builder.compile()

def handle_request():
    # Broken: will fail in async environments
    result = asyncio.run(graph.ainvoke({"input": "hi"}))
    return result
```
If this runs inside FastAPI, Celery with async context, or Jupyter, you’ll typically get:
```
RuntimeError: asyncio.run() cannot be called from a running event loop
```
Fixed code
```python
from langgraph.graph import StateGraph, START, END

async def fetch_data(state):
    return {"result": "ok"}

builder = StateGraph(dict)
builder.add_node("fetch_data", fetch_data)
builder.add_edge(START, "fetch_data")
builder.add_edge("fetch_data", END)
graph = builder.compile()

async def handle_request():
    # Correct: stay async end-to-end
    result = await graph.ainvoke({"input": "hi"})
    return result
```
If your app framework expects sync handlers, then keep the graph call sync too:
```python
def handle_request_sync():
    result = graph.invoke({"input": "hi"})
    return result
```
The rule is simple: don’t cross the sync/async boundary unless you own the boundary.
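If you do own a boundary that may be called from both worlds, make the check explicit. A generic sketch, assuming nothing LangGraph-specific (the `run_at_boundary` helper and `work` coroutine are illustrative):

```python
import asyncio

async def work():
    # Stand-in for: await graph.ainvoke({"input": "hi"})
    return {"result": "ok"}

def run_at_boundary(coro):
    """Run a coroutine from sync code, refusing to nest event loops."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running in this thread: safe to start one
        return asyncio.run(coro)
    # A loop is already running: the caller must await instead
    raise RuntimeError("already inside an event loop; await the coroutine instead")

print(run_at_boundary(work()))  # {'result': 'ok'}
```

Failing loudly at the boundary is far easier to debug than a hung request or a nested-loop crash deep inside the graph.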
Other Possible Causes
1) Calling await on a sync node result
If your node is defined as a normal def, LangGraph treats it as synchronous. Awaiting it manually will break.
```python
# Broken
def enrich_state(state):
    return {"x": 1}

async def run():
    value = await enrich_state({})  # TypeError
```
Fix:
```python
# Fixed
async def enrich_state(state):
    return {"x": 1}

async def run():
    value = await enrich_state({})
```
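When you are not sure which kind of callable you are holding, `inspect.iscoroutinefunction` settles it. A small sketch (the `call_node` dispatcher is illustrative, not a LangGraph API):

```python
import asyncio
import inspect

def enrich_state(state):       # sync node
    return {"x": 1}

async def fetch_data(state):   # async node
    return {"result": "ok"}

async def call_node(node, state):
    # Await only when the node is actually a coroutine function
    if inspect.iscoroutinefunction(node):
        return await node(state)
    return node(state)

print(asyncio.run(call_node(enrich_state, {})))  # {'x': 1}
print(asyncio.run(call_node(fetch_data, {})))    # {'result': 'ok'}
```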
2) Using a sync client inside an async node that blocks the loop
A classic production issue is calling blocking I/O inside async def nodes.
```python
# Broken
import requests

async def fetch_user(state):
    r = requests.get("https://api.example.com/user/123")
    return {"user": r.json()}
```
Fix it with an async client:
```python
# Fixed
import httpx

async def fetch_user(state):
    async with httpx.AsyncClient() as client:
        r = await client.get("https://api.example.com/user/123")
        return {"user": r.json()}
```
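When no async client exists for a dependency, `asyncio.to_thread` (Python 3.9+) can offload the blocking call instead. A sketch with a stand-in for the blocking SDK call:

```python
import asyncio
import time

def blocking_fetch(user_id):
    # Stand-in for a blocking SDK call such as requests.get(...)
    time.sleep(0.1)
    return {"id": user_id}

async def fetch_user(state):
    # Runs the blocking call in a worker thread so the loop stays responsive
    user = await asyncio.to_thread(blocking_fetch, state["user_id"])
    return {"user": user}

print(asyncio.run(fetch_user({"user_id": 123})))  # {'user': {'id': 123}}
```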
3) Running LangGraph inside FastAPI startup or request code incorrectly
FastAPI endpoints are already running on an event loop. Don’t wrap graph calls with asyncio.run() there.
```python
# Broken
from fastapi import FastAPI
import asyncio

app = FastAPI()

@app.get("/run")
async def run_graph():
    return asyncio.run(graph.ainvoke({"input": "hi"}))
```
Fix:
```python
# Fixed
@app.get("/run")
async def run_graph():
    return await graph.ainvoke({"input": "hi"})
```
4) Nested event loops in notebooks or REPL tooling
Jupyter and IPython often already manage an event loop. If you see:
```
RuntimeError: This event loop is already running
```
then don’t call asyncio.run() there.
Use direct awaiting in notebook cells:
```python
result = await graph.ainvoke({"input": "hi"})
result
```
Or if you must integrate legacy code, isolate it behind a proper async function and call that directly.
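One such isolation pattern is running the coroutine on a separate thread that owns its own event loop. A generic sketch (`run_in_fresh_loop` is an illustrative helper, not a LangGraph API):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def work():
    # Stand-in for: await graph.ainvoke({"input": "hi"})
    return {"result": "ok"}

def run_in_fresh_loop(coro):
    """Run a coroutine on a worker thread with its own event loop.

    Avoids nesting loops when the current thread's loop is already running.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()

print(run_in_fresh_loop(work()))  # {'result': 'ok'}
```

This adds a thread per call, so treat it as a bridge for legacy code paths, not the default execution model.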
How to Debug It
- Check whether your caller is sync or async
  - If your function is declared with `async def`, use `await graph.ainvoke(...)`.
  - If it's a normal `def`, use `graph.invoke(...)`.
- Search for nested event-loop calls. Grep for:
  - `asyncio.run(`
  - `loop.run_until_complete(`
  - `.invoke(` inside async handlers where `.ainvoke(` should be used
- Inspect every node signature
  - Async nodes must be declared with `async def`.
  - Sync nodes should not contain awaits.
  - Mixed signatures are fine only if each call path matches its execution model.
- Print the exact stack trace and locate the boundary
  - If the error originates in FastAPI/Starlette/Uvicorn, the problem is usually at the request boundary.
  - If it originates inside a LangGraph node, check whether that node is blocking or incorrectly awaiting sync work.
Typical stack trace clues:
```
RuntimeError: asyncio.run() cannot be called from a running event loop
  File ".../your_service.py", line 42, in run_graph
  File ".../langgraph/pregel/main.py", line ...
```
That points to bad orchestration code, not LangGraph itself.
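To locate the boundary quickly, a tiny probe at each suspected call site tells you whether a loop is running there. A generic helper, nothing LangGraph-specific:

```python
import asyncio

def loop_state(tag="caller"):
    # Reports whether the calling thread currently has a running event loop
    try:
        asyncio.get_running_loop()
        return f"{tag}: inside a running event loop"
    except RuntimeError:
        return f"{tag}: no running event loop"

print(loop_state("module level"))  # module level: no running event loop

async def main():
    print(loop_state("inside main"))  # inside main: inside a running event loop

asyncio.run(main())
```

Any call site that reports "inside a running event loop" must never call `asyncio.run()` or `loop.run_until_complete()`.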
Prevention
- Keep one execution model per path:
  - sync entrypoint → `graph.invoke(...)`
  - async entrypoint → `await graph.ainvoke(...)`
- Use async-safe dependencies inside async nodes:
  - `httpx.AsyncClient`
  - async database drivers
  - non-blocking SDK methods when available
- Add one integration test per runtime environment:
  - FastAPI request path
  - worker job path
  - notebook/debug path if your team uses one
If you standardize those boundaries early, this class of error stops showing up at deploy time and starts getting caught in local testing where it belongs.
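Such a boundary test can stay tiny. A pytest-style sketch for the async path, with the graph call replaced by a stand-in (swap your real `graph.ainvoke` into the handler):

```python
import asyncio

async def handle_request():
    # Stand-in for: return await graph.ainvoke({"input": "hi"})
    return {"result": "ok"}

def test_async_request_path():
    # Drives the async entrypoint once, from a clean sync test context
    result = asyncio.run(handle_request())
    assert result == {"result": "ok"}

test_async_request_path()
```

If someone later sneaks a nested `asyncio.run()` into the handler, this test fails with the exact error you would otherwise have met in production.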
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.