ZFS RAID recovery/replacement test

    • OMV 4.x
    • ZFS RAID recovery/replacement test

      Hello OMV Team,

      First I want to thank you so very much for such a great software! <3

      Second (My question) ^^

      I created a RAID 1 array (2× 20 GB drives) using the ZFS filesystem / OMV4 plugin. Everything works nicely. I simulated a faulty drive, and when checking the disks and RAID state I could see the status of the ZFS RAID 1 as DEGRADED. I might be missing something, but can you confirm that the only way to recover the RAID 1 is via the Linux terminal, using the ZFS commands?


      Source Code

      # 1. Check the pool status
      zpool status
      # 2. Take the faulty drive offline (the number is the GUID of the faulty drive)
      zpool offline raid 11813710371933618037
      # 3. Replace it with the new healthy drive, referenced by-id
      zpool replace raid 11813710371933618037 /dev/disk/by-id/ata-VBOX_HARDDISK_VB9e9dbf77-e066cd80
      # 4. Verify
      zpool status
      Is there a way to recover the degraded RAID 1 using the UI? or is this a feature to request? :rolleyes:

      Again, loving the software, learning about it and looking forward to continue working and supporting you guys!

      Sincerely,
    • You are correct, there is not a way through the UI to replace a failed drive as far as I can tell.

      Keep in mind that the folks who created the ZFS plugin were not able to actively maintain it and didn't have tons of coding experience to begin with, so they may not have thought of this. This is one reason why the UI has a few odd display options (specifically, snapshot display).
    • jairusan wrote:

      Really appreciate your confirmation. I assume the same applies to the mdadm RAID?
      No — mdadm is a core part of OMV, so it's maintained by Volker as part of the project. Personally, I can't speak to whether or not you can do a drive replacement via the RAID Management interface, as I've never used it: I started out running JBOD and leaped straight into ZFS when the plugin first came out.
    • jairusan wrote:

      I assume the same applies to the mdadm RAID?

      In OMV 4, with mdadm RAID, a drive can be replaced in the GUI. Still, IMO, you're on the right track with ZFS.
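
      For reference, the terminal equivalent with mdadm looks roughly like this — a sketch only, assuming the array is /dev/md0, the failed member is /dev/sdb, and the replacement is /dev/sdc (adjust the device names to your system):

      ```shell
      # Mark the failed member as faulty (if the kernel hasn't already done so)
      mdadm --manage /dev/md0 --fail /dev/sdb
      # Remove it from the array
      mdadm --manage /dev/md0 --remove /dev/sdb
      # Add the replacement disk; the rebuild starts automatically
      mdadm --manage /dev/md0 --add /dev/sdc
      # Watch the resync progress
      cat /proc/mdstat
      ```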
      ___________________________________________________________

      Edit: There's a ZFS pool property that can be set called "autoreplace". It might be accessible from the GUI. I'll look at this when I get home in a few days.
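
      Whether or not it's exposed in the GUI, the property can be checked and set from the shell — a sketch, using the pool name "raid" from the commands above:

      ```shell
      # Check the current setting (defaults to off)
      zpool get autoreplace raid
      # Enable it: a new device detected in the same physical location as a
      # failed one is then formatted and resilvered automatically, with no
      # explicit "zpool replace" needed
      zpool set autoreplace=on raid
      ```

      Note that autoreplace only kicks in for a disk inserted into the same physical slot as the failed one; replacing via a different slot still requires a manual zpool replace.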
