Retrying tasks with countdowns can cause these tasks to pile up in the `_pending` queue. For argument's sake, say there are 10k such tasks (all with the same ETA) in the `_pending` queue but `flush_every` is only 10. When the ETA is reached and 10 messages are received from the broker, the list of `SimpleRequest` objects passed to the task will be of length 10010. I understand that `flush_every=10` is not a promise that the batch size won't be larger, but without enforcing some kind of maximum on the batch size, it becomes difficult to reason about how long a batch is expected to take, which matters when setting SQS visibility timeouts, for example.
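For context, a minimal sketch of the setup described above, assuming the celery-batches package with its `Batches` base class and `SimpleRequest` export (the broker URL and task body here are illustrative):

```python
from celery import Celery
from celery_batches import Batches, SimpleRequest

app = Celery("example", broker="sqs://")

@app.task(base=Batches, flush_every=10, flush_interval=10)
def process(requests: list[SimpleRequest]) -> None:
    # flush_every=10 only sets the threshold that *triggers* a flush;
    # it does not cap len(requests). If thousands of retried tasks share
    # an ETA, they all sit in _pending and are handed over in one call
    # once that ETA passes, on top of the freshly received messages.
    print(f"flushing {len(requests)} requests")
```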
That seems like a bug -- when requests become available due to ETAs, we should probably flush only an additional `flush_every` (or something like that) at a time, taking care to eventually flush everything in the queue that is ready.
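A rough sketch of that chunked-flush idea, not the actual celery-batches internals (`flush_in_chunks` and its parameters are hypothetical names): drain the ready portion of the pending queue in slices of at most `flush_every`, so each batch handed to the task stays bounded while everything that is ready still gets flushed.

```python
from collections import deque
from typing import Callable

def flush_in_chunks(pending: deque, flush_every: int,
                    flush: Callable[[list], None]) -> None:
    # Flush the pending requests in bounded slices rather than all at
    # once, so len(batch) never exceeds flush_every per flush call.
    while pending:
        batch = [pending.popleft()
                 for _ in range(min(flush_every, len(pending)))]
        flush(batch)
```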