-
The way I see it: this limit seems to have created quite a lot of problems and frustration for users. We might consider disabling it for now until we have proper feedback/tool integration for such a limit. The problem we tried to solve would still be there, however: users pushing large files in a non-LFS part of the ARC. We already have a few ARCs where that is the case, so we do need some way to a) detect such ARCs and b) (ideally automatically) rebase those ARCs so that the files are moved to LFS storage. But I would really consider disabling the limit for now, since neither the integration with the tools nor the user documentation/teaching is quite there yet.
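If detection (point a) were scripted against a local checkout rather than on the server, one possible approach is to compare `git ls-files` against `git lfs ls-files` and flag everything above a size cutoff. A minimal sketch, assuming `git` and `git-lfs` are on PATH; the function name and the 100 MB placeholder cutoff are illustrative, not an existing DataHUB feature:

```python
# Hypothetical detection sketch: list git-tracked files above a size threshold
# that are not handled by Git LFS. Assumes `git` and `git-lfs` are installed.
import os
import subprocess

def large_non_lfs_files(repo_path=".", threshold_bytes=100 * 1024 * 1024):
    # Files currently handled by LFS (stored as pointer files).
    lfs_files = set(
        subprocess.run(
            ["git", "lfs", "ls-files", "--name-only"],
            cwd=repo_path, capture_output=True, text=True, check=True,
        ).stdout.splitlines()
    )
    # All files tracked by git.
    tracked = subprocess.run(
        ["git", "ls-files"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    for rel_path in tracked:
        full_path = os.path.join(repo_path, rel_path)
        if rel_path in lfs_files or not os.path.isfile(full_path):
            continue
        size = os.path.getsize(full_path)
        if size >= threshold_bytes:
            yield rel_path, size

if __name__ == "__main__":
    for path, size in large_non_lfs_files():
        print(f"{size / 1024 / 1024:.1f} MB  {path}")
```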
-
I just had a similar discussion with @ZimmerD; I think this highlights that we need a way to correctly distribute information like this inside the consortium. This change should have been discussed with (or at least announced to) the devs of the tools first, so they could implement the threshold, while giving data stewards the information and a potential release window so they could prepare users/facilities in advance.
-
Ok, there have been quite a few user requests this week. I'd appreciate it if we could unset the limit for a while, hoping that we can clean up properly afterwards.
-
Ok, now that it's deactivated, it turns out that this was not the (only) issue. Some users also report this or a similar error, which is apparently a network issue?
-
While for some users the LFS limit was a "correct" error, some ARCs were now again uploaded with large files not in LFS, for reasons I cannot reproduce. I assume there has been a mix of manual work on files + ARC Commander + pure git + ARCitect.
-
We had a call about this topic, and here are the main results:
Related ARCitect issue: nfdi4plants/ARCitect#264
Related ARCCommander issue: nfdi4plants/ARCCommander#245
I think both tools should try to align on the functionality.
-
Just for clarity, @Brilator: You're writing
-
As for solutions: is it possible to check in a script whether a file is tracked by LFS? If so, we could include this as a validation case in the specification validation. That way, users would not be hard-blocked but encouraged to e.g. ask a Data Steward for help.
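It should be possible: git resolves the LFS state of a path through its attribute machinery, so a validation step could call `git check-attr filter` per file. A minimal sketch, assuming the check runs in a local clone with `git` on PATH (the function name and example path are hypothetical, not part of the existing specification validation):

```python
# Minimal sketch: a path counts as LFS-tracked if git resolves its "filter"
# attribute to "lfs" via the repository's .gitattributes rules.
import subprocess

def is_lfs_tracked(repo_path: str, file_path: str) -> bool:
    """Return True if git resolves the 'filter' attribute of file_path to 'lfs'."""
    result = subprocess.run(
        ["git", "check-attr", "filter", "--", file_path],
        cwd=repo_path, capture_output=True, text=True, check=True,
    )
    # Output looks like: "<path>: filter: lfs" when an LFS rule applies.
    return result.stdout.strip().endswith("filter: lfs")

if __name__ == "__main__":
    print(is_lfs_tracked(".", "assays/assay1/dataset/raw_data.zip"))  # hypothetical path
```

Combined with a size check, such a validation case could warn only about large files that are not covered by an LFS rule, instead of hard-blocking the push.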
-
The original issue has since been "solved", or at least understood; see #6.
-
Not sure if this is still relevant, but I have multiple scripts for tracking files according to my needs. For size it is this one:
Maybe this or something like this could be helpful? I also have a script to track everything in runs based on file endings (because I don't want my yaml files to be tracked).
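Since the scripts themselves aren't attached in this export, here is only a rough sketch of what a size-based tracking helper could look like. This is an assumption about the approach, not the script referred to above; the function name and the 100 MB cutoff are placeholders:

```python
# Hypothetical size-based tracking helper: call `git lfs track` for every
# working-tree file above a size cutoff, so a subsequent `git add` stores
# those files as LFS pointers.
import os
import subprocess

def track_large_files(repo_path=".", threshold_bytes=100 * 1024 * 1024):
    for root, _dirs, files in os.walk(repo_path):
        if ".git" in root.split(os.sep):
            continue  # skip git internals
        for name in files:
            full_path = os.path.join(root, name)
            if os.path.getsize(full_path) < threshold_bytes:
                continue
            rel_path = os.path.relpath(full_path, repo_path).replace(os.sep, "/")
            # Adds a pattern for this path to .gitattributes (paths containing
            # glob characters would need escaping; ignored here for brevity).
            subprocess.run(["git", "lfs", "track", rel_path], cwd=repo_path, check=True)

if __name__ == "__main__":
    track_large_files(".")
```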
-
Currently a lot of users run into issues due to the DataHUB LFS threshold.
This issue pops up repeatedly and requires discussion across multiple tools (e.g. ARC Commander, ARCitect) and services (DataHUB), and possibly a discussion on what a useful LFS threshold would be. Hence the higher-level discussion here.
nfdi4plants/ARCitect#264
@SabrinaZander @Hannah-Doerpholz @j-bauer @JonasLukasczyk @TetraW @HLWeil