Aggregation of events from same transaction. #32
@cend-gh thanks for the report! What do you suggest?
Could you please describe what you are trying to achieve from a client perspective?
As far as I know, there is no indication in etcd RPC responses that changes belong to a specific transaction, but you can always track revision changes.
I have to generate config files for a downstream program. The "sender" puts data into etcd, and the "receiver" gets signalled about these changes via watches. For some KVs it's possible to generate the config without dependencies; other KVs must be processed as a set, otherwise the config is incomplete.
After receiving an event, it is not clear whether more events are to be expected or not. The receiver does not know how many KVs are in a transaction or have the same mod_revision. An unacceptable solution: wait some time, and when nothing else happens, or new events arrive with a different mod_revision (last+1), interpret that as "done" and start generating configs. From the etcd docs:
From this I assume that KVs from the same transaction are not split over several WatchResponses. To avoid breaking existing user code, a solution might be:
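A rough sketch of what I mean; the names (`Watch.responses()`, the queue internals) are made up for illustration, not actual aetcd code:

```python
import asyncio


class Watch:
    """Hypothetical watch object, for illustration only."""

    def __init__(self):
        # Each queue item is the complete event list of one
        # WatchResponse, so transaction boundaries are preserved.
        self._responses = asyncio.Queue()

    def __aiter__(self):
        return self._events()

    async def _events(self):
        # Existing behavior stays intact: events are yielded one by
        # one, so "async for event in watch" keeps working.
        async for events in self.responses():
            for event in events:
                yield event

    async def responses(self):
        # New, opt-in behavior: yield the whole event list of each
        # WatchResponse at once, so all events of a transaction
        # arrive together.
        while True:
            yield await self._responses.get()
```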
Now you get all events from the same transaction (plus maybe more KV changes with a different mod_revision), and one can start processing the changes immediately (my use case).
I think the easiest thing we could do is to annotate the distinct KVs from watch responses with unique auto-incrementing IDs to distinguish watch events; this way all the events from a single watch response would be annotated with the same ID. I'm not aware of etcd internals with respect to how it handles transactional events with a huge number of KVs, though.
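Roughly like this; `response_id` is just a placeholder name, nothing is implemented yet:

```python
import itertools

# One counter per watch stream; all events from the same
# WatchResponse get the same ID, so a consumer can group them and
# knows that a new ID means the previous group is complete.
_response_ids = itertools.count(1)


def annotate(events):
    response_id = next(_response_ids)
    for event in events:
        event.response_id = response_id  # placeholder attribute
    return events
```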
Not sure if I understand you. The client code can already tell which events belong together via KeyValue.mod_revision. The only thing currently missing is knowing when all events (of a specific watch) with the same mod_revision have been received.
It's a TCP stream, so size is not a problem. The number of events in a response is also not limited.
I locally patched it in a simple way (but this breaks client code: `async for event in watch` vs. `async for events in watch`):
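Simplified, the patch changes the watcher's response handling roughly like this (not the literal aetcd code; `queue` stands in for whatever the iterator reads from):

```python
import asyncio


async def handle_response(queue: asyncio.Queue, response) -> None:
    # Before: every event was queued on its own, so the consumer
    # ("async for event in watch") cannot see where a WatchResponse,
    # and therefore a transaction, ends:
    #
    #     for event in response.events:
    #         await queue.put(event)
    #
    # After: the complete event list of one WatchResponse is queued,
    # so the consumer ("async for events in watch") receives all
    # events of a transaction at once.
    await queue.put(list(response.events))
```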
I've already mentioned revision in one of my previous replies, and you're right that the revision itself is not enough to distinguish the last event of a watch response; that is why we could annotate it somehow, to avoid the wait and the sort of manual tracking of whether there are more subsequent watch responses. How to implement the API is an open question, and PRs are welcome.
I did a test and packed a large number of KV changes into one transaction; the watch then failed with an error, so it seems this case is not handled at all (or it's wrong usage). In other words, the normal case is that all KV changes must fit into a single gRPC response. Regarding your annotation suggestion, a new `is_last` flag on the final event of a response could work.
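A rough sketch of the idea (made-up names, not actual aetcd code):

```python
def annotate(events):
    # Mark the final event of a WatchResponse; when a consumer sees
    # is_last=True, it knows that all events with the current
    # mod_revision (i.e. from the same transaction) have been
    # received.
    for i, event in enumerate(events):
        event.is_last = i == len(events) - 1
    return events
```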
It's not perfect; `is_last` is a bit irritating for normal use, because it only has a reasonable meaning in the context of transaction-based changes.
Agree with you about `is_last`.
If a transaction is made, e.g. putting 3 key changes at once, it is not possible via a watcher to receive these changes at once. Therefore, it is not possible to determine when related changes have been fully received.
Each event of the same response is processed on its own; a watcher is not able to detect the end of a transaction this way.
watcher.py:
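A minimal reproduction of the problem (an aetcd-style API is assumed here; the exact client setup, watch, and transaction calls may differ):

```python
import asyncio

import aetcd


async def main():
    client = aetcd.Client()
    watch = await client.watch_prefix(b'/cfg/')

    # Put 3 keys at once; all three changes share one mod_revision.
    await client.transaction(
        compare=[],
        success=[
            client.transactions.put(b'/cfg/a', b'1'),
            client.transactions.put(b'/cfg/b', b'2'),
            client.transactions.put(b'/cfg/c', b'3'),
        ],
        failure=[],
    )

    # All three events arrive in a single WatchResponse, but the
    # watcher yields them one by one; after any single event there is
    # no way to tell whether more events of the same transaction
    # (same mod_revision) are still pending.
    async for event in watch:
        print(event.kv.key, event.kv.mod_revision)


asyncio.run(main())
```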