Solved? OMV and software raid 5

    • I tried to create a Zmirror (since I have two hard drives), using the same exact settings as in the first page, including

      After clicking Save I received this error:

      Source Code

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; ls -la /dev/disk/by-path | grep 'sdc$'' with exit code '1':
      DETAIL:
      Error #0:
      exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; ls -la /dev/disk/by-path | grep 'sdc$'' with exit code '1': ' in /usr/share/php/openmediavault/system/process.inc:175
      Stack trace:
      #0 /usr/share/omvzfs/Utils.php(395): OMV\System\Process->execute(Array, 1)
      #1 /usr/share/omvzfs/Utils.php(120): OMVModuleZFSUtil::exec('ls -la /dev/dis...', Array, 1)
      #2 /usr/share/openmediavault/engined/rpc/zfs.inc(123): OMVModuleZFSUtil::getDiskPath('/dev/sdc')
      #3 [internal function]: OMVRpcServiceZFS->addPool(Array, Array)
      #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
      #5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('addPool', Array, Array)
      #6 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ZFS', 'addPool', Array, Array, 1)
      #7 {main}
      What did I do wrong? Should I wipe the hard drives with GParted, even creating a new GPT table?
      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
    • Blabla wrote:

      Should I wipe the hard drives with GParted, even creating a new GPT table?
      Hi @Blabla

      Normally it is not necessary to create a GPT table manually. I created my pool out of disks which were simply (quick) wiped. So try wiping the disks in OMV and then create the pool with the ZFS plugin, with no steps in between.

      You can also do this via the CLI:
      zpool create -o ashift=12 your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%

      If you get an error message, try:
      zpool create -f -o ashift=12 your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%

      "ata_WDC1_%no%" and "ata_WDC2_%no%" must be replaced by your disk IDs, which can be found with ls -l /dev/disk/by-id/*
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • cabrio_leo wrote:

      Blabla wrote:

      Should I wipe the hard drives with GParted, even creating a new GPT table?
      Hi @Blabla
      Normally it is not necessary to create a GPT table manually. I created my pool out of disks which were simply (quick) wiped. So try wiping the disks in OMV and then create the pool with the ZFS plugin, with no steps in between.

      You can also do this via the CLI:
      zpool create -o ashift=12 your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%

      If you get an error message, try:
      zpool create -f -o ashift=12 your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%

      "ata_WDC1_%no%" and "ata_WDC2_%no%" must be replaced by your disk IDs, which can be found with ls -l /dev/disk/by-id/*
      thanks a lot for the answer!
      Should your_pool_name be /ZFS/my_name (with mount point) or just my_name (without mount point)?
      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
    • The zpool create command expects the pool name without a mount point. E.g. if your pool should get the name "mypool", then the command is zpool create -o ashift=12 mypool .... The pool is then mounted at /mypool.

      I have never tried mounting the pool at a different mount point, but the ZFS cheat sheet says that the command must be modified:

      zpool create -o ashift=12 -m /ZFS your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • Great! It worked and now I have a ZFS mirror :D
      I didn't activate compression since the pool will contain only media files that are already compressed.

      Also, I'm not sure about the jobs. I read that there should already be a default job, so I don't need to create another one, right?
      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
    • Not sure what happened: after something like 20 minutes I rebooted my NAS.
      After that it couldn't boot anymore and got stuck during the initramfs load.
      Here's a screenshot: (screenshot of the initramfs boot hang not included)
      After two or three reboots it was working again, and the zmirror is still there.
      Should I check whether the zmirror is fine? If yes, how?
      I haven't put any files on it yet.
      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
    • flmaxey wrote:

      That particular quote in black text above, (excerpt - no fault tolerance at the pool level), is not my own
      IMO there's really no need to discuss how zpools work, since they work as they're designed. I was responding to your earlier conclusions/assumptions:

      flmaxey wrote:

      if a vdev is lost, the chance of recovering the pool or any of its data is nearly non-existent. By extension, adding additional vdevs to the pool increases risk
      If a vdev is lost then the pool is lost, and there's no need to think about 'recovering'; you restore the latest backup. About the 'increased risk' when adding more vdevs: yes, basically true, and that's why you want redundancy at the vdev layer to prevent a vdev from failing (be it RAIDZ or zmirrors; even with the latter you can throw some more disks at it if you want to survive more than one disk failing in a zmirror). But this 'risk' only affects availability, and when a vdev is gone... you restore from backup.

      So as long as you have a backup (which those 'I added $some redundancy, what could go wrong now?!' users usually do not have) and you have tested whether a restore really works, especially within an acceptable timeframe (which almost no one does), there's no 'risk' involved other than reduced availability (let's call it downtime), even if you have a pool whose vdevs implement no redundancy at all.

      IMO the real problem is (especially in the context of this thread): OMV users want some sort of data protection and are even willing to spend some money and effort on the problem. What they end up with is not data protection but availability, which in some rare cases also provides data protection (and with the parity RAID implementations, even some sort of data integrity).

      But instead of going the RAID route, backup would be the way to go. For 100% of productive data you need ~125% backup space for a reasonable retention time, plus a backup concept (which includes the actual implementation and regular testing). The additional 25% of storage capacity is for keeping versions, so that even in a worst-case scenario (ransomware eating all your data, or you having screwed up your master's thesis two months ago by accidentally deleting 20 pages without realizing it back then) your data is still safe.
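      The ~125% rule of thumb above turns into a quick calculation. A sketch, assuming an example value of 4 TB of productive data (the figure is an assumption for illustration):

      ```shell
      # Example value -- 4 TB of productive data (an assumption for illustration).
      DATA_TB=4
      # ~125% backup space per the rule of thumb above (integer arithmetic).
      BACKUP_TB=$(( DATA_TB * 125 / 100 ))
      echo "Backup space needed: at least ${BACKUP_TB} TB"
      ```

      The extra 25% over the productive data is what funds version retention.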


      Blabla wrote:

      Should I check if the zmirror is fine?
      Sure. You added complexity, so now it's up to you to test. Not only once (before you put productive data on your new storage implementation) but regularly. If you are not willing to test whether the redundancy you now use works as it should, then you clearly don't need this redundant implementation anyway.
    • tkaiser wrote:

      flmaxey wrote:

      if a vdev is lost, the chance of recovering the pool or any of its data is nearly non-existent. By extension, adding additional vdevs to the pool increases risk
      If a vdev is lost then the pool is lost ......... /-----/
      Other than your finer points regarding backup in your post (duly noted):

      The main point behind this thread (lose a vdev, lose the pool) remains the same. That assertion is externally referenced and peer reviewed, and it supports the substance of the remainder of the thread, which mentioned at the end: "with solid backup that you trust, the pool risk is no big deal."

      There's little point in rehashing this.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.95 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.95 Erasmus - Rsync'ed Backup
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119


    • Question: how can I check how long the scrub will take to complete when I run it from the OMV interface?
      I just finished copying 1TB of data onto my ZFS mirror; I did a last scrub, and then I'll reboot my NAS and check that everything is OK.
      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
    • Blabla wrote:

      Question: how can I check how long the scrub will take to complete when I run it from the OMV interface?
      I don't know of a way in the OMV WebUI. Maybe you can see it on some diagnostics page.

      Again, the CLI is your friend: zpool status your_pool reports the scrub progress and gives an estimate of how long it will take to finish.
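      The relevant part of the zpool status output is the 'scan:' line. The exact wording varies between ZFS versions; the line below is made up for illustration, and the sketch just pulls out the progress and ETA fields from such a line:

      ```shell
      # A made-up 'scan:' line in the style 'zpool status' prints during a scrub
      # (the exact wording differs between ZFS versions):
      LINE="scan: scrub in progress, 42.5% done, 0h37m to go"

      PCT=$(echo "$LINE" | grep -o '[0-9.]*% done')    # progress field
      ETA=$(echo "$LINE" | grep -o '[0-9hm]* to go')   # remaining-time field
      echo "progress: $PCT | eta: $ETA"
      ```

      On a real system the same extraction would run against the actual zpool status your_pool output; wrapping it in watch -n 60 is a convenient way to follow progress.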
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • Blabla wrote:

      Question: how can I check how long the scrub will take to complete when I run it from the OMV interface?
      I just finished copying 1TB of data onto my ZFS mirror; I did a last scrub, and then I'll reboot my NAS and check that everything is OK.
      In the Web GUI, in the ZFS plugin, click on your ZFS pool's row:

      Then click on Details, on the far right. While there's more information below (options and other), the popup window will display the equivalent of the zpool status poolname command.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.95 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.95 Erasmus - Rsync'ed Backup
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • cabrio_leo wrote:

      I don't know of a way in the OMV WebUI. Maybe you can see it on some diagnostics page.
      ;( Sorry, that was wrong.

      flmaxey wrote:

      Then click on Details, on the far right. While there's more information below (options and other), the popup window will display the equivalent of the zpool status poolname command.
      Thank you @flmaxey for your amendment.
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • cabrio_leo wrote:

      flmaxey wrote:

      Then click on Details, on the far right. While there's more information below (options and other), the popup window will display the equivalent of the zpool status poolname command.
      Thank you @flmaxey for your amendment.
      It's all good @cabrio_leo. :)

      When it comes to ZFS, I'm learning from you. :thumbsup:
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.95 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.95 Erasmus - Rsync'ed Backup
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119