How do I add a second bootable disk? #585
-
I've got a working ZFSBootMenu system set up, but I cannot for the life of me figure out how to get multiple drives bootable. After installing the system to a single disk, I first tried simply copying the partition table from sda to sdb, then copying the files from sda1 (/boot) to sdb1. No joy--attempting to copy the files directly fails with REALLY strange errors, up to and including "cannot allocate memory." Workarounds like tarring and untarring or zipping and unzipping failed the same way, so I gave up on that approach.

Next, I just followed the install instructions for the second drive, skipping the parts about the pool (which is, after all, already installed) and changing /boot to /boot2. That worked in the sense of getting all the files there and creating EFI boot entries for them, but booting from sdb results in the dreaded "insert bootable media" message from my UEFI firmware.

What should the procedure look like? I install a LOT of multiple-drive systems, and I'd like to upgrade to ZBM--but I need to get the multiple-BOOT-drive setup sorted out first. Thank you!
-
I assume your reference to /boot (and /boot2) really means the EFI system partition that holds your ZFSBootMenu images.

If you want to keep things synchronized between the two drives, you have a few options. I've always favored Linux mdraid arrays for this task: I create an mdraid mirror from all of the EFI system partitions I'd like to replicate, then make a common filesystem on the RAID device and populate it accordingly. To make this work, the mdraid mirror must be created with metadata format 1.0, which places the RAID superblock at the end of each partition so the firmware still sees what looks like an ordinary FAT filesystem.

Some people recoil at the use of mdraid in this way, because there is a small but nonzero chance that the firmware will write to whatever filesystem it reads during boot, which can corrupt the filesystem. However, the chance of any issue is incredibly small, and the EFI system partition is trivial to recover in the rare case it actually happens. (You can always boot a system from ZBM on a USB thumb drive if you need to recover from a bad EFI system partition.)

If you are concerned about this possibility, you will have to treat every disk's EFI system partition as an independent filesystem and manually sync them. An example hook for generate-zbm shows how this can be accomplished automatically whenever a new ZBM image is created, or you can replicate the steps on demand.
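If you go the manual-sync route, a hook along these lines will do the copying whenever generate-zbm produces a new image. This is only a sketch with assumed mountpoints (/boot/efi for the primary ESP, /boot/efi2 for the backup), not the stock example hook shipped with ZFSBootMenu:

```sh
#!/bin/sh
# Hypothetical generate-zbm post-image hook: mirror the primary ESP onto a
# second, independently formatted ESP. Mountpoints are assumptions; adjust
# to match your layout.
SRC=/boot/efi
DST=/boot/efi2

# Only sync when both ESPs are actually mounted
mountpoint -q "$SRC" || exit 0
mountpoint -q "$DST" || exit 0

# --delete keeps the backup an exact copy of the primary
rsync -a --delete "${SRC}/" "${DST}/"
```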
Not anymore, because I finally got the accursed thing working properly using mdraid. That actually got this motherboard to recognize both drives AS bootable, which it had been refusing to do at least half the time even after I used efibootmgr to manually create entries for both drives.
This was what worked, after booting into the system from a portable ZBM thumbdrive.
First, unmount /boot/efi and wipe the boot partitions on both disks:
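A minimal sketch of that step, assuming the boot partitions are /dev/sda1 and /dev/sdb1 (adjust device names to match your system):

```sh
# Unmount the current EFI system partition
umount /boot/efi

# Remove any existing filesystem or RAID signatures from both partitions
wipefs -a /dev/sda1 /dev/sdb1
```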
Then, reconfigure both boot partitions to the Linux RAID automount type, create a new RAID1 mirror across them, and format it for use:
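Again a sketch with the same assumed device names; the key detail is metadata version 1.0, which keeps the RAID superblock at the end of each partition so the firmware can read the FAT filesystem directly:

```sh
# Mark both partitions as Linux RAID (sgdisk type code fd00); other
# partitioning tools offer an equivalent type setting.
sgdisk -t 1:fd00 /dev/sda
sgdisk -t 1:fd00 /dev/sdb

# Create the RAID1 mirror with metadata at the end of the partitions
mdadm --create /dev/md/esp --level 1 --raid-devices 2 --metadata 1.0 \
    /dev/sda1 /dev/sdb1

# Format the mirror as FAT32 and mount it as the EFI system partition,
# then repopulate it (ZBM images, EFI boot entries) as usual
mkfs.vfat -F 32 /dev/md/esp
mount /dev/md/esp /boot/efi
```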