Move bagger functionality to exports module #81
Schaufel is supposed to take over some of the jobs our Postgres insert trigger used to manage. What follows is a list of places that need touching (of course, the real scope of what we implement is open to discussion):
- Duplication of messages to Postgres is currently done by keeping a second metadata structure; this ought to be replaced by a refcounting system.
- Transformations done by the exports system should become filters applicable to the queue, runnable either post queue_add or pre queue_get. The aim is to avoid duplicate work within modules.
- The table name to COPY into needs to be determined from fields in the JSON message.
- To transport this table name we probably need metadata in the queue (this may also be required to forward Kafka message headers).
- A data structure needs to hold all buffers for COPY. It ought to be easy to iterate over, easy to alter, and have good access time. The insert trigger at the moment has at best O(n) behaviour, so anything faster than O(n) will do; cache locality would be preferred.
- These buffers need to be committed periodically and when full.
- In case of failure in libpq, data must not be lost; see #80 (Postgres modules don't check libpq errors).
- A jsonb/json binary insert transformation needs to be added to the aforementioned filters; other data types are nice to have.
- Values that are dereferenced from the JSON ought to be deletable from its json-c data structure.
- What remains of the original JSON shall also be inserted into Postgres.
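The refcounting idea from the first point could look roughly like this: instead of copying a message (or carrying a parallel metadata structure) for each consumer, one shared message carries an atomic counter and is freed by whichever consumer drops the last reference. This is a minimal sketch; the `Message` layout and function names are hypothetical, not Schaufel's actual API.

```c
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical refcounted message wrapper. One allocation is shared by
 * all consumers (e.g. the Kafka exporter and the Postgres exporter)
 * instead of duplicating the payload per destination. */
typedef struct Message {
    atomic_int refcount;
    char      *payload;
    size_t     len;
} Message;

Message *message_new(const char *data, size_t len)
{
    Message *m = calloc(1, sizeof(*m));
    if (!m) return NULL;
    m->payload = malloc(len);
    if (!m->payload) { free(m); return NULL; }
    memcpy(m->payload, data, len);
    m->len = len;
    atomic_init(&m->refcount, 1);
    return m;
}

Message *message_ref(Message *m)
{
    atomic_fetch_add(&m->refcount, 1);   /* one more consumer holds it */
    return m;
}

void message_unref(Message *m)
{
    /* atomic_fetch_sub returns the previous value: the consumer that
     * drops the count from 1 to 0 frees the message. */
    if (atomic_fetch_sub(&m->refcount, 1) == 1) {
        free(m->payload);
        free(m);
    }
}
```

Each queue consumer would call message_ref on queue_get and message_unref once it is done, so no copy ever happens on the duplication path.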
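For the per-table COPY buffers, one candidate meeting the requirements above (easy iteration, cheap mutation, better than O(n) lookup, cache-friendly) is a small open-addressing hash table keyed by table name: slots live in one contiguous array, and lookup is O(1) on average. All names and sizes here are illustrative assumptions, not an agreed design.

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Illustrative open-addressing map from table name to COPY buffer.
 * Contiguous slot array gives cache locality and trivial iteration;
 * linear probing keeps lookups O(1) on average, versus the O(n)
 * behaviour of the current insert trigger. */
#define NBUCKETS 64  /* power of two; assumes a modest number of tables */

typedef struct CopyBuffer {
    char   table[64];   /* target table name, also the hash key */
    char  *data;        /* rows staged for COPY */
    size_t used, cap;
} CopyBuffer;

typedef struct BufferMap {
    CopyBuffer slots[NBUCKETS];  /* empty slot: table[0] == '\0' */
} BufferMap;

static size_t hash_name(const char *s)
{
    size_t h = 5381;                     /* djb2 string hash */
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h & (NBUCKETS - 1);
}

/* Returns the buffer for `table`, claiming a free slot on first use,
 * or NULL if the map is full. */
CopyBuffer *buffer_map_get(BufferMap *map, const char *table)
{
    size_t i = hash_name(table);
    for (size_t probe = 0; probe < NBUCKETS; probe++) {
        CopyBuffer *b = &map->slots[i];
        if (b->table[0] == '\0') {               /* free slot: claim it */
            snprintf(b->table, sizeof(b->table), "%s", table);
            return b;
        }
        if (strcmp(b->table, table) == 0)        /* existing buffer */
            return b;
        i = (i + 1) & (NBUCKETS - 1);            /* linear probe */
    }
    return NULL;
}
```

The periodic/when-full commit from the next point then becomes a linear sweep over `slots`, flushing any buffer whose `used` exceeds a threshold or whose timer expired.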
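On the jsonb binary insert transformation: in PostgreSQL's binary COPY format each field is a 32-bit big-endian length followed by the value, and jsonb's wire form (what jsonb_recv expects) is a one-byte version header (currently 1) followed by the JSON text. A filter emitting that could be sketched as follows; the buffer-handling convention is an assumption for illustration.

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Append one jsonb field in binary-COPY form to `out`:
 *   int32 field length (big-endian) | version byte 1 | JSON text
 * Returns the number of bytes written, or 0 if `cap` is too small
 * (the caller would grow its COPY buffer and retry). */
size_t copy_field_jsonb(char *out, size_t cap, const char *json)
{
    size_t   jlen = strlen(json);
    uint32_t flen = (uint32_t)(jlen + 1);   /* +1 for the version byte */

    if (cap < 4 + (size_t)flen)
        return 0;

    uint32_t be = htonl(flen);
    memcpy(out, &be, 4);          /* field length, network byte order */
    out[4] = 1;                   /* jsonb format version */
    memcpy(out + 5, json, jlen);  /* raw JSON text, UTF-8 */
    return 4 + flen;
}
```

Plain `json` columns are even simpler (no version byte), which is why starting with jsonb/json and treating other data types as nice-to-have seems reasonable.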