Replies: 3 comments 1 reply
-
No need to answer; Stratis is too immature to use in production, and I have no plan to deploy it again. The pool somehow became readable again after several reboots, although dmesg is flooded with error messages.
In my opinion, Stratis is a step back from LVM2; very disappointing. It has been 4 years since the 1.0 release and nothing has improved. D-Bus does not matter at all to the end user.
-
Hello, if you are still able, would it be possible to show the metadata for the block devices used by the pool? This would be the first step toward diagnosing the pool's issue. You can use the `stratis-dumpmetadata` program for this. If you're not able to build the `stratis-dumpmetadata` program, there's an alternative way to show the block device metadata.
For each command, you should see output that looks similar to this:
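If `stratis-dumpmetadata` is unavailable and you cannot build it, here is a generic fallback sketch of my own (not from the Stratis documentation; it assumes only that Stratis stores its pool-level metadata as a JSON blob near the start of each blockdev, and the 16 MiB scan window is a guess): scan the head of the device, or an image of it, for a parseable JSON object.

```python
# Sketch only: scan the first 16 MiB of a blockdev (or a dd image of it) for
# a JSON object, on the assumption that Stratis keeps its pool-level
# metadata as JSON near the start of each device. Prefer stratis-dumpmetadata
# whenever you can build it.
import json
import re
import sys

def scan_for_json(path, window=16 * 1024 * 1024):
    """Return the first parseable JSON object found in the device head."""
    with open(path, "rb") as f:
        head = f.read(window)
    decoder = json.JSONDecoder()
    # Look for candidate object starts ('{"') and keep the first that parses.
    for match in re.finditer(rb'\{"', head):
        candidate = head[match.start():].decode("utf-8", errors="replace")
        try:
            obj, _ = decoder.raw_decode(candidate)
        except ValueError:
            continue
        if isinstance(obj, dict):
            return obj
    return None

if __name__ == "__main__":
    # e.g.  sudo python3 scan_metadata.py /dev/sdd
    print(json.dumps(scan_for_json(sys.argv[1]), indent=2))
```

Run it against each blockdev in turn; `raw_decode` ignores any trailing binary garbage after the JSON object, so an exact end offset is not needed.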
-
I am curious about the expected behavior in this case. Once a blockdev is added to the pool, is the expectation that the blockdev must stay available in order for the pool to be usable? I've read the following related discussions on this topic:

Reviewing the OP's configuration:

Since the data is written sequentially and not striped, my initial assumption is that the data on the first disk should still be available, since those files were fully written to a single disk. In the OP's case, for example, would he have been able to mount the underlying XFS filesystem directly in read-only mode and copy out his data? This assumes that the "first device", as written in stratis-storage/stratisd#2570, means the first device added to the pool in chronological order, and not some other method of determining "first".

Essentially, what I'm getting at is this question: if I were to create a Stratis pool with N blockdevs, where each blockdev is a physical hard disk, will one disk failure render the entire pool unusable?

I understand much of these are "what if?" scenarios, but I am interested in learning the failure cases so I can best configure a Stratis pool for my use case and my acceptable trade-offs. Thanks!
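The sequential, non-striped layout this question hinges on can be illustrated with a toy allocator (purely illustrative, not Stratis's actual allocation code): a file written entirely while the first disk still had free space leaves no extents on later disks, so losing a later disk should not touch that file's data.

```python
# Toy linear (non-striped) allocator: extents are handed out from the first
# device until it is exhausted, then from the next. This is NOT Stratis's
# real allocator, just an illustration of the failure-domain argument.
def allocate(file_sizes, disk_sizes):
    """Return, per file, the set of disk indices holding its extents."""
    placements = []
    disk, free = 0, disk_sizes[0]
    for size in file_sizes:
        used = set()
        remaining = size
        while remaining > 0:
            if free == 0:            # current disk full: move to the next
                disk += 1
                free = disk_sizes[disk]
            chunk = min(remaining, free)
            used.add(disk)
            free -= chunk
            remaining -= chunk
        placements.append(used)
    return placements

# Two disks of 4 units each; three files of 2, 3, and 2 units.
# File 0 lives only on disk 0; file 1 spills onto disk 1; file 2 is only
# on disk 1, so only file 0 survives a failure of disk 1.
print(allocate([2, 3, 2], [4, 4]))   # → [{0}, {0, 1}, {1}]
```

Under this model, whether a given file survives a one-disk failure depends entirely on whether any of its extents landed on the failed device, which matches the intuition that fully-written early files on the first disk remain intact.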
-
Hi team. I have a pool with 2 blockdevs. `sdd` was used to create the pool and was initialized with about 3 TB of data; then `sda1` was added. No snapshot was ever created.

Now, the problem is that `sda1` has failed. However, Stratis reports nothing about this disk failure; no alerts are issued. The FS is inaccessible, and the `ls` command times out. If I attempt to snapshot, the following error is returned:

According to stratis-storage/stratisd#2570, the current Stratis data allocation strategy is similar to LVM's: data is written to the next disk (blockdev) after the current disk is full. In this case, `sdd` is the "current" blockdev in the pool. It seems that Stratis is not detecting the error from the failing spare disk (in this case `sda1`, which I guess was never utilized).

Please test whether this issue is reproducible. If so, is there an option to grab the data out from the pool? The `sdd` one is healthy. Thank you!

Some extra info:
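On getting data out while `sdd` is still healthy: one generic best-effort pattern (my own sketch, not a Stratis feature; all paths are placeholders) is to copy whatever still reads, skipping any file whose read stalls on the failed device, since a stalled read can otherwise hang a plain `cp` indefinitely.

```python
# Best-effort rescue copy: skip files whose reads stall. Unix-only (uses
# SIGALRM, so it must run on the main thread). Paths are hypothetical.
import os
import shutil
import signal

class CopyTimeout(Exception):
    pass

def copy_with_timeout(src, dstdir, seconds=10):
    """Copy one file into dstdir, raising CopyTimeout if the read stalls."""
    def handler(signum, frame):
        raise CopyTimeout(src)
    old = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)
    try:
        shutil.copy2(src, dstdir)
    finally:
        signal.alarm(0)                  # cancel the pending alarm
        signal.signal(signal.SIGALRM, old)

def rescue(srcdir, dstdir, seconds=10):
    """Walk srcdir, copying what we can; return the list of skipped files."""
    skipped = []
    for root, _dirs, files in os.walk(srcdir):
        rel = os.path.relpath(root, srcdir)
        outdir = os.path.join(dstdir, rel)
        os.makedirs(outdir, exist_ok=True)
        for name in files:
            path = os.path.join(root, name)
            try:
                copy_with_timeout(path, outdir, seconds)
            except (CopyTimeout, OSError):
                skipped.append(path)     # unreadable or stalled: move on
    return skipped
```

For example, `rescue("/stratis/mypool/myfs", "/mnt/backup")` would copy the readable tree and return the paths it had to skip, which you could then retry or write off.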