Platform: Python 3.12 on Red Hat Enterprise Linux Server release 7.9 (Maipo)
Description:
We recently started using the asyncio version of the client and noticed we were getting redis timeout errors (redis.exceptions.TimeoutError: Timeout reading from xx.xx.xx.xx:30433) far more frequently. The actual time it took redis to respond with data hadn't changed, but some of the requests would simply time out. I was able to narrow the error down to the following:
In the sync version of the code, redis uses the low-level OS socket timeout (driven by the socket_timeout parameter), which controls the timeout for every read operation from the socket. For large responses, since the data is read in chunks, the timeout only applies to each individual read operation.
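The sync behavior can be illustrated with a minimal sketch; the function name, timeout value, and chunk size below are hypothetical, not redis-py's actual code:

```python
import socket

# Minimal sketch of the sync behavior: settimeout() gives EACH recv() its own
# fresh timeout budget, so a large response read in many chunks never trips
# the timeout as long as every individual chunk arrives in time.
def read_response_sync(sock: socket.socket, length: int,
                       chunk_size: int = 4096) -> bytes:
    sock.settimeout(5.0)  # like socket_timeout: applies per recv() call
    buf = bytearray()
    while len(buf) < length:
        chunk = sock.recv(min(chunk_size, length - len(buf)))
        if not chunk:
            raise ConnectionError("connection closed before full response")
        buf += chunk
    return bytes(buf)
```

However long the full response takes, the clock effectively restarts on every recv(), which is why slow-but-steady large responses do not time out in the sync client.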
In the async version of the code, redis explicitly sets a timeout using asyncio.timeout, but wraps it around multiple socket read operations. So, for large responses, even if each individual chunk read completes quickly, the total time can exceed the timeout. To make matters worse, control is yielded back to the event loop between chunk reads, so having lots of concurrent tasks in the loop amplifies the problem.
Version: 4.6.0
For example, in AbstractConnection.read_response, the async with async_timeout(read_timeout): block is wrapped around self._parser.read_response(). If you're using HiredisParser, then HiredisParser.read_response calls HiredisParser.read_from_socket in a loop, and HiredisParser.read_from_socket calls self._stream.read. This means the async_timeout applies to the entire response, not to each individual socket read operation.