How to Fix 'async event loop error during development' in LangGraph (Python)
If you’re seeing RuntimeError: This event loop is already running or a LangGraph wrapper around it during development, you’re usually calling async code from the wrong place. In practice, this shows up when you run a graph inside Jupyter, FastAPI startup hooks, Streamlit, or another environment that already owns the event loop.
In LangGraph, the failure usually means you mixed await, asyncio.run(), and sync entrypoints like graph.invoke() in the same execution path. The fix is almost always to keep the whole call chain consistently async or consistently sync.
The Most Common Cause
The #1 cause is calling asyncio.run() inside an environment that already has an active loop, then using that to execute a LangGraph async graph.
Typical stack traces look like this:
- `RuntimeError: This event loop is already running`
- `RuntimeError: asyncio.run() cannot be called from a running event loop`
- Sometimes the error is wrapped by LangGraph when your node returns a coroutine unexpectedly
Broken vs fixed pattern
| Broken | Fixed |
|---|---|
| Calls `asyncio.run()` from notebook/app code | Uses `await` inside an async function |
| Mixes sync and async graph APIs | Keeps one execution model end-to-end |
```python
# BROKEN
import asyncio
from langgraph.graph import StateGraph, START, END

class State(dict):
    pass

async def fetch_data(state: State):
    return {"value": 42}

builder = StateGraph(State)
builder.add_node("fetch_data", fetch_data)
builder.add_edge(START, "fetch_data")
builder.add_edge("fetch_data", END)
graph = builder.compile()

# In Jupyter / FastAPI / any running loop:
result = asyncio.run(graph.ainvoke({}))  # RuntimeError: asyncio.run() cannot be called from a running event loop
print(result)
```
```python
# FIXED
from langgraph.graph import StateGraph, START, END

class State(dict):
    pass

async def fetch_data(state: State):
    return {"value": 42}

builder = StateGraph(State)
builder.add_node("fetch_data", fetch_data)
builder.add_edge(START, "fetch_data")
builder.add_edge("fetch_data", END)
graph = builder.compile()

async def main():
    result = await graph.ainvoke({})
    print(result)

# In a notebook:
# await main()

# In an async app:
# await main()
```
If you’re in a plain Python script with no existing loop, asyncio.run(main()) is fine. The problem starts when you call it from somewhere that already runs async code.
Other Possible Causes
1) Using invoke() on an async graph node
If any of your nodes is declared with `async def` but you call the synchronous `invoke()` API, you can get errors like:

- `TypeError: Object of type coroutine is not JSON serializable`
- `RuntimeWarning: coroutine was never awaited`
```python
# Wrong
result = graph.invoke({"input": "hello"})  # while nodes are async

# Right
result = await graph.ainvoke({"input": "hello"})
```
Use invoke() only when your graph and nodes are fully sync.
2) Returning a coroutine instead of awaiting it inside a node
This happens when you forget await in a node function.
```python
# Wrong
async def node(state):
    data = some_async_client.get_user(state["user_id"])
    return {"user": data}  # data is a coroutine

# Right
async def node(state):
    data = await some_async_client.get_user(state["user_id"])
    return {"user": data}
```
LangGraph will propagate that bad return value until something downstream breaks.
3) Running LangGraph inside Jupyter without top-level await support
Jupyter already has an active event loop. If you wrap everything in asyncio.run(), you’ll hit the classic error immediately.
```python
# Wrong in notebook cells
import asyncio
asyncio.run(graph.ainvoke({}))

# Right in notebook cells
await graph.ainvoke({})
```
If your notebook environment doesn’t support top-level await cleanly, move the code into an async function and call it with the notebook’s async-aware tooling.
4) Mixing sync tools with async nodes incorrectly
A common pattern is calling blocking I/O inside an async node and then trying to “fix” it by wrapping more things in event-loop helpers. That creates messy control flow and makes debugging harder.
```python
# Problematic
async def node(state):
    response = requests.get("https://api.example.com/data")  # blocking call
    return {"data": response.json()}
```
Use an async client:
```python
import httpx

async def node(state):
    async with httpx.AsyncClient() as client:
        response = await client.get("https://api.example.com/data")
    return {"data": response.json()}
```
Blocking calls won’t always trigger the exact event-loop error, but they often appear in the same debugging session because developers start patching around them incorrectly.
How to Debug It
1. Check whether your process already has an event loop.
   - Notebooks, FastAPI, Starlette, Quart, Streamlit, and some test runners already do.
   - If yes, do not use `asyncio.run()` there.
2. Inspect every LangGraph entrypoint.
   - Use `graph.invoke()` for sync graphs.
   - Use `await graph.ainvoke()` for async graphs.
   - Do not mix them in one call chain.
3. Search for un-awaited coroutines.
   - Look for any line like `client.get(...)` or `some_async_fn(...)`.
   - If the function is async, it needs `await`.
4. Print the exact traceback and find the first non-LangGraph frame.
   - The real bug is usually above the LangGraph wrapper.
   - Look for `asyncio.run()`, nested event-loop calls, missing `await`, and blocking sync libraries inside async nodes.
Prevention
- Keep one execution model per project path:
  - sync app → sync nodes + `invoke()`
  - async app → async nodes + `ainvoke()`
- In notebooks and ASGI apps:
  - use top-level `await`
  - avoid `asyncio.run()`
- Add tests for both paths if you support both:

```python
import pytest  # the async test requires the pytest-asyncio plugin

def test_sync_graph():
    assert graph.invoke({})["value"] == 42

@pytest.mark.asyncio
async def test_async_graph():
    assert (await graph.ainvoke({}))["value"] == 42
```
If you want this error gone fast: stop nesting event loops, make every node’s sync/async behavior explicit, and use the matching LangGraph API all the way through.
Keep learning
- The complete AI Agents Roadmap — my full 8-step breakdown
- Free: The AI Agent Starter Kit — PDF checklist + starter code
- Work with me — I build AI for banks and insurance companies
By Cyprian Aarons, AI Consultant at Topiax.
Want the complete 8-step roadmap?
Grab the free AI Agent Starter Kit — architecture templates, compliance checklists, and a 7-email deep-dive course.
Get the Starter Kit