This repository has been archived by the owner on Jul 3, 2019. It is now read-only.

Predictive 0 Round Trip Time Replies #5

Open
ottopasuuna opened this issue Jul 29, 2017 · 0 comments


@ottopasuuna
Member

I had an optimization idea regarding the request/reply model (which we should also figure out how to standardize/wrap):
If a module requests a certain piece of data at a regular interval, the module doing the response could make note of that interval and schedule that data to be sent right before the requester sends the next request. For example:
A requests x from B. B logs the time the request was made and sends the reply. After a time interval t, A requests x from B again. B now knows (or at least thinks it knows) how often A wants x, sends the reply, but also schedules a timer to send a reply for x to A after time (t-2*k) has passed, where k is a tolerance constant common to both A and B. The tolerance value is needed because the modules could be on different computers, each with different clock skew, so time measurements will always be off by some amount (ideally at most k), and I multiply by 2 because A could send the request at (t-k) while B sends at (t+k), in which case A would miss the predictive reply.
Once this system is working, it has the effect of A making a call for x and immediately getting a response, eliminating the performance impact that round trip time causes in synchronous request/response communication.
When B sends its reply, it also includes a timestamp of when the reply was sent. When A requests x, if a reply is already in its receive queue, A checks the timestamp to make sure it is current enough. If it is, A sends a dummy request that functions as an acknowledgement to B, saying "Yes, I got your predictive reply, thanks", after which B schedules another predictive reply. If the reply in A's receive queue is too old, A discards it and sends a regular request.
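A's side of that check could look something like the sketch below (the function name, the queue-of-`(value, sent_at)` representation, and the `STALENESS_LIMIT` constant are all illustrative assumptions, not an actual API):

```python
import time

STALENESS_LIMIT = 0.5  # hypothetical freshness bound for a queued reply, in seconds

def check_predictive_reply(receive_queue, now=None):
    """Sketch of A's side: if a timestamped predictive reply is already in the
    receive queue, accept it when it is fresh enough; otherwise discard it and
    fall back to a regular request.

    Returns (value, needs_ack, needs_request):
      - needs_ack: send the dummy "got your predictive reply" request to B
      - needs_request: send a regular request because no usable reply was queued
    """
    now = time.monotonic() if now is None else now
    if receive_queue:
        value, sent_at = receive_queue.pop(0)
        if now - sent_at <= STALENESS_LIMIT:
            # Fresh enough: use it, and acknowledge so B schedules the next one.
            return value, True, False
        # Too old: drop it and fall through to a regular request.
    return None, False, True
```

A fresh queued reply returns the value plus the ack flag; an empty queue or a stale reply comes back with `needs_request` set.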
If B schedules a reply but A sends a request before the predictive reply goes out, B cancels the timer, sends the reply immediately, and also updates the predicted interval.
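B's side, i.e. the interval tracking, the t-2*k timer, and the cancel-on-early-request behaviour, could be sketched roughly like this (the `PredictiveResponder` class, the `send_reply` callback, and the in-process `threading.Timer` transport are illustrative assumptions, not the real module interface):

```python
import threading
import time

TOLERANCE_K = 0.05  # hypothetical shared tolerance constant k, in seconds

class PredictiveResponder:
    """Sketch of B's side: track request intervals per (requester, key) and
    schedule a predictive reply 2*k before the next request is expected."""

    def __init__(self, send_reply):
        self.send_reply = send_reply  # callable(requester, key) -> None
        self.last_request = {}        # (requester, key) -> time of last request
        self.timers = {}              # (requester, key) -> pending threading.Timer

    def on_request(self, requester, key):
        now = time.monotonic()
        slot = (requester, key)
        # If a predictive reply was scheduled but the real request arrived
        # first, cancel the timer and reply immediately instead; the interval
        # estimate is refreshed below from this request's arrival time.
        timer = self.timers.pop(slot, None)
        if timer is not None:
            timer.cancel()
        prev = self.last_request.get(slot)
        self.last_request[slot] = now
        self.send_reply(requester, key)
        if prev is not None:
            interval = now - prev           # observed request period t
            delay = interval - 2 * TOLERANCE_K
            if delay > 0:
                t = threading.Timer(delay, self._predictive_send, args=(slot,))
                t.daemon = True
                t.start()
                self.timers[slot] = t

    def _predictive_send(self, slot):
        # Timer fired: push the reply before the request we expect to arrive.
        self.timers.pop(slot, None)
        requester, key = slot
        self.send_reply(requester, key)
```

The second request for the same (requester, key) is what establishes the interval; from then on each request (or acknowledgement) re-arms the timer with the latest estimate.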
This whole system could be disabled at the request of A using a predictive=false parameter to the request call.
Again, the whole point of all this is to address the issue of latency in synchronous request/reply communication. The alternative is to fire off non-blocking requests and receive the replies asynchronously, but this is obviously no longer synchronous communication. Some problems are just better suited to synchronous communication.
This is probably more work than it's worth, so it's not important, but it would be pretty cool.
