It's a weakness of the current outbound code that the email is split by domain into individual queue files. If the mail is large and goes to a mailing list (e.g. a 1MB email with 100 recipients) you do a LOT of I/O: with one queue file per destination domain, that 1MB message could be written out up to 100 times, roughly 100MB of disk writes for a single email.
This can actually cause a catastrophic failure: the mail starts to get written, the queue_outbound hook times out, and the sender gets a temp fail. But the outbound work has already started, so when the sender retries you get huge numbers of duplicate deliveries.
Options:
- Write the mail once and use locking to update the todo part to remove "done" recipients (a rough sketch of this follows the list).
- Write a separate todo file like qmail does (qmail writes 3 files - we should research what they are).
- Use a local storage DB (LevelDB?) instead, which can take care of the nitty gritty details. WildDuck uses Mongo but I don't want to go that far.
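For the first option, a minimal sketch of what "write once, lock, and update the todo part" could look like. This assumes a hypothetical queue file layout with a fixed-size, space-padded JSON todo header in front of the message body; the header size, file naming, and `markDone` helper are all illustrative assumptions, not Haraka's actual format:

```js
// Sketch: single queue file with a mutable "todo" header listing
// undelivered recipients. The message body after the header is
// written once and never rewritten.
const fs = require('fs');

const HEADER_SIZE = 4096; // fixed-size todo header, padded with spaces

// Advisory lock: exclusively create <file>.lock, run fn, then release.
function withLock (file, fn) {
    const lockFd = fs.openSync(file + '.lock', 'wx'); // EEXIST if held
    try {
        return fn();
    }
    finally {
        fs.closeSync(lockFd);
        fs.unlinkSync(file + '.lock');
    }
}

// Rewrite the todo header in place, dropping a delivered recipient.
function markDone (file, recipient) {
    withLock(file, () => {
        const fd = fs.openSync(file, 'r+');
        const buf = Buffer.alloc(HEADER_SIZE);
        fs.readSync(fd, buf, 0, HEADER_SIZE, 0);
        const todo = JSON.parse(buf.toString('utf8').trim());
        todo.rcpt = todo.rcpt.filter(r => r !== recipient);
        const out = Buffer.from(JSON.stringify(todo).padEnd(HEADER_SIZE));
        fs.writeSync(fd, out, 0, HEADER_SIZE, 0);
        fs.closeSync(fd);
    });
}
```

The point of the fixed-size header is that shrinking the recipient list never shifts the (large) message body, so each delivery costs one small in-place write instead of rewriting the whole file.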
Furthermore we need to deal with that timeout sanely by nuking the partially-written outbound files before returning the temp fail (the remote end could time out too, so we need to cope there as well).
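Cleanup on timeout could be along these lines; the queue directory layout and the transaction-uuid file prefix are assumptions for illustration, not Haraka's real naming scheme:

```js
// Sketch: if queue_outbound times out, delete any queue files already
// written for this transaction so the sender's retry can't duplicate.
const fs = require('fs');
const path = require('path');

function nukePartialQueue (queueDir, txnUuid) {
    for (const f of fs.readdirSync(queueDir)) {
        if (!f.startsWith(txnUuid)) continue; // assumed uuid prefix
        try {
            fs.unlinkSync(path.join(queueDir, f));
        }
        catch (err) {
            // a delivery worker may have already picked the file up
            if (err.code !== 'ENOENT') throw err;
        }
    }
}
```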
I like Mongo but I wouldn't use it just for the todo data, it's really heavy for such a purpose. We already have [optional] dependencies on SQLite and Redis, and either would be abundantly adequate for todo locking (see the Redis sketch below).
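For illustration, todo locking in Redis can be as small as SET NX with an expiry so a crashed worker can't hold the lock forever. The key names here are made up, and the client calls are the `redis` (v4) package's API:

```js
// Sketch: per-message todo lock via SET NX + PX, then remove the
// delivered recipient from a hypothetical per-message todo set.
const { createClient } = require('redis');

async function removeDoneRecipient (msgId, recipient) {
    const client = createClient();
    await client.connect();
    const lockKey = `outbound:todo:lock:${msgId}`; // hypothetical key
    // NX: only set if absent; PX: auto-expire the lock after 30s
    const ok = await client.set(lockKey, String(process.pid),
        { NX: true, PX: 30000 });
    if (ok === 'OK') {
        try {
            await client.sRem(`outbound:todo:${msgId}`, recipient);
        }
        finally {
            await client.del(lockKey);
        }
    }
    await client.quit();
}
```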
smf pointed this out on IRC.