What is your suggestion?

While `core.download:download_cache` is a great way to speed up the download step in CI workflows, it unfortunately only speeds up the download of common dependencies.

In more complex chains with several (dependent) jobs on shared CI infrastructure, it is likely that a package a previous job has just created and uploaded to a remote will also be directly consumed by a dependent job. In that case, any freshly created artifact always makes the full round trip of upload and download, even with a cache configured.

Ideally, the compressed package would be forked straight into the `download_cache` (when configured), so that a subsequent download of the new package is reduced to a no-op.
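For illustration, a minimal sketch of the CI chain in question (the package name, remote name, and cache path are hypothetical):

```
# global.conf on the shared CI infrastructure (path is hypothetical):
#   core.download:download_cache=/shared/ci/download-cache

# Job A: create the package and upload it to the remote
conan create . --name=pkg --version=1.0
conan upload "pkg/1.0" -r myremote

# Job B (dependent, same infrastructure): consume the package Job A just uploaded.
# Today this re-downloads the artifact from the remote even though the bytes
# were just produced on this infrastructure; with the suggestion, the upload
# step would also populate the download cache, making this download a no-op.
conan install --requires="pkg/1.0" -r myremote
```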
Have you read the CONTRIBUTING guide?
I've read the CONTRIBUTING guide
Thanks for your suggestion.
My first thought is that this would actually be more challenging than expected: the download (with cache) and upload flows are quite decoupled in the internal architecture, so this would require coupling them and further changes to the download cache (and caches are always challenging).

But it also seems this could make sense, so I'm putting it up for consideration in the 2.X roadmap. It is not a short-term thing, because there will be other (functional) priorities, and this optimization, while good to have, is not a blocker. Thanks for the feedback!