For high-latency stores (`sftp:` and much of what is available via `rclone:`), a full index rebuild needs to list all objects in the store, and there are potentially a lot of objects.
For `sftp:`, this also requires listing all 65536+256 nested directories, and sftp is relatively slow.
IIRC, currently these do a complete rebuild:

- `borg compact` (because it removes unused/unreferenced objects)
- `borg check` (repository part, before doing anything else - just to make sure)
- the first operation after a `borg check --repair` (because the repair might have removed objects and thus invalidated the chunks cache)
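To illustrate why the full rebuild is expensive on high-latency stores, here is a minimal sketch (hypothetical function and layout names, assuming objects live under a two-level `data/XX/XXXX/` nesting as described above - each nested directory costs one listing round-trip):

```python
import os

def rebuild_chunk_index(repo_root):
    """Rebuild a chunk index by listing every object in the store.

    Illustrative sketch only: with 256 first-level and 65536
    second-level directories, a rebuild over sftp pays one slow
    round-trip per directory listing.
    """
    index = {}
    data_dir = os.path.join(repo_root, "data")
    for d1 in sorted(os.listdir(data_dir)):        # up to 256 dirs
        p1 = os.path.join(data_dir, d1)
        for d2 in sorted(os.listdir(p1)):          # up to 256 dirs each
            p2 = os.path.join(p1, d2)
            for obj in os.listdir(p2):             # one listing per dir
                index[obj] = True                  # object id -> present
    return index
```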
Most commands will try, in order:

- use their locally cached chunks index (if its hash still matches what's in `repo/cache/chunks_hash`)
- fetch a fresh index from `repo/cache/chunks`
- rebuild the chunks index the slow way, by listing all objects (then write it to `repo/cache/chunks`)
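That fallback order can be sketched like this (dict-based stand-ins and illustrative names, not borg's real API):

```python
import hashlib

def load_chunk_index(local_cache, repo):
    """Sketch of the lookup order most commands use.

    local_cache: dict that may hold 'chunks' (index bytes)
    repo: dict mapping store keys like 'cache/chunks' to bytes
    """
    # 1. use the locally cached chunks index if its hash still
    #    matches what is stored in repo/cache/chunks_hash
    local = local_cache.get("chunks")
    if local is not None:
        if hashlib.sha256(local).hexdigest() == repo.get("cache/chunks_hash"):
            return local, "local"
    # 2. otherwise fetch a fresh index from repo/cache/chunks
    fetched = repo.get("cache/chunks")
    if fetched is not None:
        return fetched, "fetched"
    # 3. last resort: rebuild the slow way by listing all objects,
    #    then write the result back to repo/cache/chunks
    rebuilt = repr(sorted(k for k in repo if k.startswith("data/"))).encode()
    repo["cache/chunks"] = rebuilt
    return rebuilt, "rebuilt"
```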
So the question is: when shall we rely on an existing cached ChunkIndex (`repo/cache/chunks`) being in a good state, and when shall we rather take the slow-but-safe route and build a fresh one?
For `borg create`:

- It's not a big problem if the index does not have all objects that exist in the repo. If that happens, `borg create` will just store something to the repo that's already there. After it has finished creating the archive, it will store an updated index to `repo/cache/chunks`.
- It would be a severe problem, though, if the index falsely said "we have that object" and borg did not store it to the repo. The archive would then reference a non-existing object.
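The asymmetry between the two failure modes can be shown with a small sketch (hypothetical function name, dicts standing in for the index and the object store):

```python
def process_chunk(chunk_id, data, index, repo):
    """Sketch of the dedup decision during create.

    A chunk missing from the index is merely re-stored (wasted
    work, still correct); a chunk falsely marked present is
    skipped, leaving the archive referencing a missing object.
    """
    if chunk_id not in index:
        # safe direction: re-storing an existing object is a no-op
        repo["data/" + chunk_id] = data
        index[chunk_id] = True
    # unsafe direction: if the index lied, this check fails
    return ("data/" + chunk_id) in repo
```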
ThomasWaldmann changed the title from "borg2: which commands shall rebuild a completely new ChunkIndex?" to "borg2: which commands shall do a complete rebuild of the ChunkIndex?" on Oct 15, 2024.