I somehow did not see the mention of setting the slow branches to NC. Maybe it should be a bit more prominent? I think that's the path I will take. I would rather get a disk-full error, since that would mean either that one of my cron jobs had stopped or that the system is so busy the spinning drives can't keep up.
Thanks a ton for responding, and for the great solution. I have been running mergerfs for years with zero issues. Not a new story, I'm sure.
On Sun, Jan 19, 2025, 1:29 PM trapexit wrote:
1. Yes, having nested filesystems increases the overhead. It's not harmful, just inefficient.
2. No, mergerfs 3 is not far along. The plan is a list of lists, with each sublist having its own policy.
3. Yes, of course using NC is a valid setup. You are merely creating multiple pools. If you want to spread out creates, then you wouldn't use lfs; you'd use mfs or rand or lus. Any setup is valid... the setup is just two pools and moving things between them (sketched below). I do mention the use of NC and warn what it means.
https://trapexit.github.io/mergerfs/usage_patterns/#tiered-cache
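For reference, a minimal sketch of that two-pool arrangement (all paths, policy choices, and option values below are illustrative, not from the thread):

```
# /etc/fstab -- two independent pools; a mover job migrates data
# between them. Paths and option values are hypothetical.

# Cache pool: the four SSDs, with creates spread across them by mfs.
/mnt/ssd*  /mnt/cache    fuse.mergerfs  category.create=mfs,minfreespace=50G,fsname=cache     0 0

# Backing pool: the spinning disks.
/mnt/hdd*  /mnt/backing  fuse.mergerfs  category.create=mfs,minfreespace=200G,fsname=backing  0 0
```

The "moving things between them" part could be a cron job along these lines (a simplified, untested take on the pattern in the tiered-cache docs):

```sh
#!/bin/sh
# Hypothetical nightly mover: push files not accessed in 30 days
# from the cache pool down to the backing pool.
find /mnt/cache -type f -atime +30 -printf '%P\n' \
  | rsync -a --files-from=- --remove-source-files /mnt/cache/ /mnt/backing/
```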
I am planning to deploy a solution that uses four SSDs as a cache in front of a SnapRAID array of spinning rust, as described on the tiered cache page of the documentation. What I am stuck on is that none of the available create policies will really result in my SSDs being used evenly.
So I thought I could just create two pools, one with the SSDs (fast, online) and one with the spinning rust (slow, spun down), then create a third pool that nests the two, with the fast pool first and an ff (first found) create policy. Unfortunately, the only other reference I have seen to nesting (#1034) suggests that it might not be a good idea.
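For concreteness, here is roughly what I have in mind as fstab entries (all paths are hypothetical):

```
# /etc/fstab -- the nesting idea; paths are hypothetical.

# Fast pool: the four SSDs.
/mnt/ssd*  /mnt/fast  fuse.mergerfs  category.create=mfs,fsname=fast  0 0

# Slow pool: the spinning rust.
/mnt/hdd*  /mnt/slow  fuse.mergerfs  category.create=mfs,fsname=slow  0 0

# Combined pool nesting the other two. With ff (first found), creates
# land on /mnt/fast for as long as it satisfies minfreespace.
/mnt/fast:/mnt/slow  /mnt/storage  fuse.mergerfs  category.create=ff,minfreespace=50G,fsname=storage  0 0
```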
Can anyone explain just how bad an idea it would be, and why? Trapexit mentioned, in the other nesting thread, that he may have a solution in 3.0... anyone know what that might look like and how far away it may be?
Alternatively, can I, in my combined pool, set the slower branches to NC (no-create) and then just use an lfs policy? Since only the four faster branches are writable, would it balance those four, ignoring all of the slow disks? If this is a valid solution, I would suggest adding it to the tiered cache page, as it fits this use case perfectly.
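As a single fstab entry, that idea would look something like this (paths are illustrative; one =NC suffix per slow disk):

```
# /etc/fstab -- one combined pool; slow branches are tagged NC so they
# are readable but never chosen for creates. With lfs, new files go to
# the writable SSD branch with the least free space above minfreespace.
/mnt/ssd1:/mnt/ssd2:/mnt/ssd3:/mnt/ssd4:/mnt/hdd1=NC:/mnt/hdd2=NC  /mnt/storage  fuse.mergerfs  category.create=lfs,minfreespace=50G,fsname=storage  0 0
```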
If neither of those is viable, my workaround will likely be to put the SSDs in a Btrfs RAID 0 so mergerfs sees them as a single drive, or to add them to a striped ZFS pool on the Proxmox server hosting the NAS VM. But both of those have their shortcomings as well (a single disk failure loses 100% of the cache).
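For reference, the Btrfs fallback would be roughly this (device names are hypothetical):

```sh
# Hypothetical: stripe the four SSDs into one Btrfs filesystem.
# raid0 data maximizes capacity and throughput; raid1 metadata adds a
# little resilience, but a single device failure still loses all data.
mkfs.btrfs -L cache -d raid0 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mount /dev/sda /mnt/cache   # any member device mounts the whole filesystem
```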