Heuristic: Arize AI Phoenix Notebook Event Loop Patching
| Knowledge Sources | |
|---|---|
| Domains | Debugging, Evaluation |
| Last Updated | 2026-02-14 06:00 GMT |
Overview
Jupyter notebook users must call `nest_asyncio.apply()` before running Phoenix evaluations to enable asynchronous execution; otherwise the executor silently falls back to slow, sequential processing.
Description
Jupyter notebooks run their own asyncio event loop, which prevents Phoenix from creating a nested event loop for async evaluations. Without patching, the executor falls back to synchronous execution (processing one evaluation at a time instead of concurrently), which can be dramatically slower for large evaluation batches.
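The underlying constraint can be demonstrated with the standard library alone. The sketch below (the `running_event_loop_exists` helper is a hypothetical re-implementation of Phoenix's check, not its actual code) shows that vanilla asyncio refuses to start a nested event loop from inside a running one, which is exactly the situation a Jupyter cell is in:

```python
import asyncio

def running_event_loop_exists() -> bool:
    # Hypothetical mirror of Phoenix's check: get_running_loop()
    # raises RuntimeError when no event loop is currently running.
    try:
        asyncio.get_running_loop()
        return True
    except RuntimeError:
        return False

async def simulate_notebook_cell() -> str:
    # Inside a coroutine a loop is already running, as in a Jupyter cell.
    assert running_event_loop_exists()
    # Starting a second, nested loop is rejected by vanilla asyncio;
    # this is what nest_asyncio.apply() patches around.
    try:
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as err:
        return str(err)
    return "no error"

# In a plain script, no loop is running at module level.
assert not running_event_loop_exists()
print(asyncio.run(simulate_notebook_cell()))
```

Running this as a script prints asyncio's rejection of the nested `asyncio.run()` call; in a notebook, the outer loop is already running before your cell executes, so Phoenix hits the same wall.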
The `get_executor_on_sync_context` function detects whether a running event loop exists and checks if `nest_asyncio` has been applied (via the `asyncio._nest_patched` flag). If a loop exists but is not patched, it logs a warning and falls back to synchronous execution.
Additionally, async evaluation execution is not supported in non-main threads. If running in a non-main thread, the system always falls back to synchronous execution regardless of `nest_asyncio`.
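The main-thread restriction reduces to a simple identity check. A minimal sketch (the `can_use_async_executor` name is illustrative, not Phoenix's API):

```python
import threading

def can_use_async_executor() -> bool:
    # Hypothetical mirror of Phoenix's guard: async evaluation
    # execution is only attempted from the main thread.
    return threading.current_thread() is threading.main_thread()

# In the main thread, async execution is eligible.
assert can_use_async_executor()

# In any worker thread, the same check fails, forcing sync execution.
result = {}
worker = threading.Thread(target=lambda: result.update(ok=can_use_async_executor()))
worker.start()
worker.join()
assert result["ok"] is False
```

This is why wrapping evaluation calls in background threads does not speed anything up: the executor downgrades to sequential execution before `nest_asyncio` is even consulted.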
Usage
Apply this heuristic when:
- Running evaluations or experiments in Jupyter notebooks
- Seeing the warning about "patching the event loop with nest_asyncio"
- Noticing evaluations running much slower than expected in a notebook context
The Insight (Rule of Thumb)
- Action: Add two lines at the top of your notebook before running any Phoenix evaluations:
```python
import nest_asyncio
nest_asyncio.apply()
```
- Value: Enables concurrent async execution of LLM calls. With 3 concurrent workers, evaluations run approximately 3x faster.
- Trade-off: `nest_asyncio` monkey-patches the asyncio event loop. This is generally safe for notebook use but can cause subtle issues with other async libraries.
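The claimed speedup follows directly from overlapping I/O waits. The sketch below simulates the two executor behaviors with a fake 100 ms "LLM call" (all names here are illustrative stand-ins, not Phoenix internals); with 3 workers, 9 calls take roughly a third of the sequential wall time:

```python
import asyncio
import time

async def fake_llm_call(i: int) -> int:
    # Stand-in for one LLM-backed evaluation (~100 ms of simulated latency).
    await asyncio.sleep(0.1)
    return i

def run_sequential(n: int) -> list[int]:
    # Sync-executor behavior: one call at a time, each waiting on the last.
    return [asyncio.run(fake_llm_call(i)) for i in range(n)]

async def run_concurrent(n: int, workers: int = 3) -> list[int]:
    # Async-executor behavior: up to `workers` calls in flight at once.
    sem = asyncio.Semaphore(workers)

    async def guarded(i: int) -> int:
        async with sem:
            return await fake_llm_call(i)

    return await asyncio.gather(*(guarded(i) for i in range(n)))

start = time.perf_counter()
run_sequential(9)
t_seq = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(run_concurrent(9))
t_conc = time.perf_counter() - start

print(f"sequential: {t_seq:.2f}s, 3 workers: {t_conc:.2f}s")  # roughly 0.9s vs 0.3s
```

The ratio tracks the worker count as long as the work is latency-bound, which LLM API calls typically are.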
Reasoning
The executor selection logic in `executors.py:601-634` makes the decision:
```python
if _running_event_loop_exists():
    if getattr(asyncio, "_nest_patched", False):
        return AsyncExecutor(...)  # Fast: concurrent execution
    else:
        logger.warning(
            "🐌!! If running inside a notebook, patching the event loop with "
            "nest_asyncio will allow asynchronous eval submission, and is significantly "
            "faster. To patch the event loop, run `nest_asyncio.apply()`."
        )
        return SyncExecutor(...)  # Slow: sequential execution
else:
    return AsyncExecutor(...)  # Fast: no conflict
```
The snail emoji (🐌) in the warning underscores the performance impact: without patching, each LLM API call waits for the previous one to complete, turning a concurrent operation into a sequential one.
The thread check in `executors.py:576-590`:
```python
if threading.current_thread() is not threading.main_thread():
    if run_sync is False:
        logger.warning(
            "Async evals execution is not supported in non-main threads. Falling back to sync."
        )
    return SyncExecutor(...)
```