Schaufel is supposed to take over some of the jobs our Postgres insert trigger used to manage. What follows is a list of places that need touching (the actual scope of what we implement is, of course, up for discussion):
- Duplication of messages to Postgres is currently done by keeping a second metadata structure; this ought to be replaced by a reference-counting system.
- Transformations done by the exports system should be turned into filters that can be applied to the queue, either post `queue_add` or pre `queue_get`. The aim is to avoid duplicate work within modules.
- The table name to copy into needs to be determined from the fields of the JSON message.
- To transport this table name we probably need metadata in the queue (this may also be required to forward Kafka message headers).
- A data structure needs to hold all buffers for COPY. It ought to be easy to iterate over, easy to alter, and have good access time. The insert trigger currently has at best O(n) behaviour, so anything faster than O(n) will do; cache locality would be preferred.
- These buffers need to be committed periodically and when full.