This repository has been archived by the owner on Jul 3, 2019. It is now read-only.
I had an optimization idea regarding the request/reply model (which we should also figure out how to standardize/wrap):
If a module requests a certain piece of data at a regular interval, the responding module could make note of that interval and schedule the data to be sent just before the requester sends its next request. For example: A requests x from B. B logs the time the request was made and sends the reply. After a time interval t, A requests x from B again. B now knows (or at least thinks it knows) how often A wants x, sends the reply, and also schedules a timer to send the reply for x to A after time (t-2*k) has passed, where k is a tolerance constant common to both A and B. The tolerance is needed because the modules could be on different computers, each with its own clock skew, so time measurements will always be off by some amount (ideally at most k). I multiply by 2 because A could send its request at (t-k) while B sends at (t+k), in which case A would miss the predictive reply.
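The responder-side bookkeeping described above might look roughly like this. This is a minimal sketch, not a proposed implementation: PredictiveResponder, produce_reply, and send are hypothetical names, and the transport is abstracted away behind a callable.

```python
import threading
import time

class PredictiveResponder:
    """Sketch of the responder (B) side: observe the interval between
    requests and schedule a predictive reply at (t - 2*k)."""

    def __init__(self, tolerance_k):
        self.k = tolerance_k      # shared tolerance constant k
        self.last_request = {}    # requester -> time of its last request
        self.timers = {}          # requester -> pending predictive timer

    def on_request(self, requester, produce_reply, send):
        now = time.monotonic()
        prev = self.last_request.get(requester)
        self.last_request[requester] = now
        send(requester, produce_reply())      # normal synchronous reply
        if prev is not None:
            t = now - prev                    # observed request interval t
            delay = t - 2 * self.k            # fire early by 2*k
            if delay > 0:
                timer = threading.Timer(
                    delay, lambda: send(requester, produce_reply()))
                self.timers[requester] = timer
                timer.start()
```

After two requests spaced t apart, the third reply would already be in flight before A asks for it.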
Once this system is working, it has the effect of A making a call for x and immediately getting a response, eliminating the performance impact that round-trip time causes in synchronous request/response communication.
When B sends its reply, it also includes a timestamp of when the reply was sent. When A requests x, if a reply is already in its receive queue, A checks the timestamp to make sure it is current enough. If it is, A sends a dummy request that functions as an acknowledgement to B, saying "Yes, I got your predictive reply, thanks", after which B schedules another predictive reply. If the reply in A's receive queue is too old, A discards it and sends a regular request.
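The requester-side check could be sketched like so. All names here (request_x, send_request, STALENESS_LIMIT) are assumptions for illustration, and the freshness window value is arbitrary:

```python
import time

STALENESS_LIMIT = 0.05  # hypothetical freshness window, in seconds

def request_x(receive_queue, send_request, now=time.monotonic):
    """Sketch of the requester (A) side: use a fresh predictive reply
    from the queue if one exists, otherwise fall back to a real request.
    receive_queue holds (timestamp, payload) pairs."""
    if receive_queue:
        timestamp, payload = receive_queue.pop(0)
        if now() - timestamp <= STALENESS_LIMIT:
            send_request(ack_only=True)   # dummy request: acks the predictive reply
            return payload                # immediate response, no round trip
        # reply was too old: discard it and fall through
    send_request(ack_only=False)          # regular request
    return None  # in the sketch, the caller would then block on the real reply
```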
If B schedules a predictive reply but A sends a request before it goes out, B cancels the timer, sends the reply immediately, and also updates the predicted interval.
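One way B might handle the cancel-and-update behaviour (all names are hypothetical, and the exponential moving average is just one possible update rule, not something the proposal specifies):

```python
import threading

class PredictiveTimer:
    """Sketch of B's handling of an early request: cancel the pending
    predictive reply and fold the new observation into the estimate of t."""

    def __init__(self):
        self.timer = None
        self.interval = None  # current estimate of the request interval t

    def schedule(self, delay, fire):
        self.timer = threading.Timer(delay, fire)
        self.timer.start()

    def on_early_request(self, observed_interval, alpha=0.5):
        if self.timer is not None:
            self.timer.cancel()   # predictive reply never sent; reply directly instead
            self.timer = None
        if self.interval is None:
            self.interval = observed_interval
        else:
            # exponential moving average: one way to update the prediction
            self.interval = alpha * observed_interval + (1 - alpha) * self.interval
```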
This whole system could be disabled at the request of A using a predictive=false parameter to the request call.
Again, the whole point of all this is to address the issue of latency in synchronous request/reply communication. The alternative is to fire off non-blocking requests and receive the replies asynchronously, but that is obviously no longer synchronous communication. Some problems are just better suited to synchronous communication.
This is probably more work than it's worth, so it's not important, but it would be pretty cool.