
fix: restore method/params in WS send-caching dummy request (#3823) #3825

Open

saheersk wants to merge 2 commits into ethereum:main from saheersk:fix/3823

Conversation


@saheersk saheersk commented Apr 11, 2026

Title:
fix: restore method/params in WS send-caching dummy request (#3823)


Body:

When async_handle_send_caching detects a cache hit it returns a dummy
RPCRequest (id=-1) to skip the network send. Previously the dummy used
method="" and params=[], so async_handle_recv_caching could not compute
the correct cache key and fell through to _get_response_for_request_id(-1),
causing a TimeExhausted error on every cached WS request.

Fix: preserve the original method and params in the dummy request so the
recv-caching decorator can look up and return the cached response.

Also move the `import threading` in cache_request_information to module level.

Add three tests covering the persistent-connection (WebSocket) caching
path: end-to-end socket_request caching, dummy request preservation,
and recv-caching round-trip.

What was wrong?

Related to Issue #3823
Closes #3823

For PersistentConnectionProvider (WebSocket/AsyncIPC), cacheable RPC
requests (e.g. eth_chainId) were never served from cache — every call
hit the network.

Root cause: async_handle_send_caching returns a dummy RPCRequest
(id=-1) when it detects a cache hit, to prevent the request from being
sent over the wire. The dummy used method="" and params=[], so when
async_handle_recv_caching received it, is_cacheable_request returned
False (empty method is not on the allowlist). The decorator then called
_get_response_for_request_id(-1), which polled until TimeExhausted.

A secondary contributing bug: the cache key in cache_request_information
used a different format from the one used by the caching decorators
(missing the thread-ID prefix), so the early-exit guard that prevents
duplicate request-info entries never fired.
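The key mismatch can be illustrated with a minimal sketch (simplified names, not web3.py's actual internals): the caching decorators prefix their keys with the current thread ID, while the old cache_request_information key omitted that prefix, so the early-exit guard compared keys that could never be equal.

```python
import threading

def decorator_cache_key(method: str, params: tuple) -> str:
    # Format used by the caching decorators: thread-ID prefix included.
    return f"{threading.get_ident()}:{method}:{params}"

def old_request_info_key(method: str, params: tuple) -> str:
    # Old cache_request_information format: no thread-ID prefix,
    # so it never matches a key written by the decorators.
    return f"{method}:{params}"
```

For any method/params pair, the two formats differ only by the thread-ID prefix, which is exactly why the duplicate-entry guard never fired.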

How was it fixed?

  • async_handle_send_caching now preserves the original method and params
    in the dummy request ({"id": -1, "method": method, "params": params}),
    so async_handle_recv_caching can compute the correct cache key and
    return the cached response without touching the network.
  • import threading moved from inside cache_request_information to module
    level in request_processor.py.
  • Three new tests added for the persistent-connection caching path:
    end-to-end socket_request cache hit, dummy request field preservation,
    and full send->recv cached round-trip.

Todo:

Cute Animal Picture

A dog patiently waiting for a cached response



Development

Successfully merging this pull request may close these issues.

Caching is not working for WebSocketProvider
