Description
What happened?
`timeout` is accepted and propagated through the async aiohttp handler path, but it is never applied to the actual network request.
In `litellm/llms/custom_httpx/aiohttp_handler.py`:

- `_make_common_async_call(..., timeout: Union[float, httpx.Timeout], ...)` receives `timeout`
- `async_completion(...)` forwards `timeout` into `_make_common_async_call(...)`
- but inside `_make_common_async_call`, the request is made via:
```python
response = await async_client_session.post(
    url=api_base,
    headers=headers,
    json=data,
    data=form_data,
)
```

No `timeout=...` is passed, so the caller-provided timeout is silently ignored on this path.
By contrast, the sync path `_make_common_sync_call(...)` does pass `timeout=timeout`.
Why this matters
This creates an async/sync inconsistency and makes `timeout` ineffective for providers routed through `BaseLLMAIOHTTPHandler` (for example, the `aiohttp_openai` path in `litellm/main.py`).
Minimal repro idea
Call `litellm.acompletion(..., custom_llm_provider="aiohttp_openai", timeout=<very small value>)` against a slow endpoint. The expected timeout never triggers from the per-call `timeout` parameter, because it is not forwarded to aiohttp's `post()`.
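The aiohttp-level behavior can be reproduced without litellm at all. The sketch below (the local `slow_handler` server and the 0.5 s / 0.05 s values are illustrative) shows that a `post()` without `timeout=` falls back to the session default and completes, while forwarding a small per-request `timeout=` raises as expected:

```python
import asyncio

import aiohttp
from aiohttp import web


async def slow_handler(request: web.Request) -> web.Response:
    await asyncio.sleep(0.5)  # simulate a slow upstream provider
    return web.json_response({"ok": True})


async def main() -> tuple[int, bool]:
    app = web.Application()
    app.router.add_post("/chat", slow_handler)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 0)  # pick a free port
    await site.start()
    host, port = runner.addresses[0][:2]
    url = f"http://{host}:{port}/chat"
    try:
        async with aiohttp.ClientSession() as session:
            # Without timeout=..., the session-level default (300 s total)
            # applies, mirroring the bug: the small per-call value the
            # caller intended never reaches the request.
            resp = await session.post(url, json={})
            status = resp.status

            # With timeout= forwarded, the per-call value takes effect.
            timed_out = False
            try:
                await session.post(
                    url,
                    json={},
                    timeout=aiohttp.ClientTimeout(total=0.05),
                )
            except asyncio.TimeoutError:
                timed_out = True
            return status, timed_out
    finally:
        await runner.cleanup()


status, timed_out = asyncio.run(main())
print(status, timed_out)
```

The first request succeeds after the full 0.5 s server delay; the second raises `asyncio.TimeoutError` because the per-request budget is 0.05 s.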
Expected behavior
The `timeout` passed by the user should be applied to `ClientSession.post(...)` in `_make_common_async_call`.
Suggested fix
In `_make_common_async_call`, pass `timeout` explicitly:

```python
response = await async_client_session.post(
    url=api_base,
    headers=headers,
    json=data,
    data=form_data,
    timeout=timeout,
)
```

Are you a ML Ops Team?
No
What LiteLLM version are you on?
Observed on current main (as of 2026-02-23).
Twitter / LinkedIn details
No response