Impact of the new feature
Perform a review of the existing DBS server and its database, with the aim of addressing various scalability issues and assessing its readiness for future runs.
Is your feature request related to a problem? Please describe.
Even though the migration of DBS servers from the Python-based to the Go-based implementation addressed the technical aspects of DBS performance (memory, CPU, scalability, deployment to k8s), we should start a DBS review process to address various scalability issues in the DBS database itself. Among them:
- table partitioning
- storage of a large number of lumis per block/file. We currently see MC samples created with 200K lumis per file, and it is unclear how this information can be used. Moreover, the FILE_LUMIS table is huge and growing quickly, currently at 9B entries.
- large blocks, and which limits we should impose on block size
- etc.
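As a rough illustration of why per-lumi rows are costly, contiguous lumi sections could be stored as [first, last] ranges instead of one row each. This is only a sketch, not the DBS schema; `compressLumis` is a hypothetical helper, shown to make the storage trade-off concrete.

```go
package main

import "fmt"

// compressLumis collapses a sorted slice of lumi section numbers into
// inclusive [first, last] ranges. Today FILE_LUMIS holds one row per lumi;
// range encoding would hold one row per contiguous run, which for MC samples
// with mostly consecutive lumis shrinks 200K rows to a handful of ranges.
func compressLumis(lumis []int) [][2]int {
	var ranges [][2]int
	for _, l := range lumis {
		n := len(ranges)
		// Extend the last range if this lumi is consecutive with it.
		if n > 0 && ranges[n-1][1]+1 == l {
			ranges[n-1][1] = l
			continue
		}
		ranges = append(ranges, [2]int{l, l})
	}
	return ranges
}

func main() {
	lumis := []int{1, 2, 3, 7, 8, 100}
	fmt.Println(compressLumis(lumis)) // [[1 3] [7 8] [100 100]]
}
```

The trade-off is that point lookups ("which files contain lumi N?") become range queries, so this only pays off if the review concludes that per-lumi access patterns are rare.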
Describe the solution you'd like
We should evaluate our data management tasks and decide which information should be stored in DBS, which information is useful for end-users, etc.
Describe alternatives you've considered
We can still use the existing DBS database and continue our data management as is, but over time we will see various issues. For instance, inserting a large set of lumis into a very large table will take longer and longer, and we can hit timeouts on our frontends.