Solved? OMV and software RAID 5

  • I tried to create a zmirror (since I have two hard drives). I used the exact same settings as on the first page, including:


    After clicking Save I received this error:


    What did I do wrong? Should I wipe the hard drives with GParted and even create a new GPT table?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • Should I wipe the hard drives with GParted and even create a new GPT table?

    Hi @Blabla


    Normally it is not necessary to create a GPT table manually. I created my pool out of disks that were simply (quick) wiped. So try wiping the disks in OMV and then create the pool with the ZFS plugin, with no steps in between.
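
    If you prefer to do the wipe from the command line too, a quick wipe of both data disks could look like this (just a sketch; /dev/sdb and /dev/sdc are placeholders for your two disks, so double-check the device names before running it):

    # remove existing filesystem/partition signatures from both data disks
    # WARNING: this destroys everything on the disks - verify the device names first
    wipefs --all /dev/sdb
    wipefs --all /dev/sdc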


    You can also create the pool from the CLI:
    zpool create -o ashift=12 your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%


    If you get an error message, try:
    zpool create -f -o ashift=12 your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%


    "ata_WDC1_%no%" and "ata_WDC2_%no%" must be replaced by your disk-ids which can be figured out by ls -l /dev/disk/by-id/*

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Thanks a lot for the answer!
    Should your_pool_name be /ZFS/my_name (with mount point) or just my_name (without mount point)?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • The zpool create command expects the pool name without a mount point. E.g. if your pool should get the name "mypool", then the command is zpool create -o ashift=12 mypool .... The pool is then mounted at /mypool.


    I have never tried mounting the pool to a different mount point, but the ZFS cheat sheet says the command must be modified:


    zpool create -o ashift=12 -m /ZFS your_pool_name mirror /dev/disk/by-id/ata_WDC1_%no% /dev/disk/by-id/ata_WDC2_%no%
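
    To verify where the pool actually ended up, you can check the mountpoint afterwards from the CLI, e.g. (assuming the pool is called mypool):

    # show the pool's root dataset and where it is mounted
    zfs list -o name,mountpoint mypool
    # or query just the mountpoint property
    zfs get mountpoint mypool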

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Great! It worked and now I have a ZFS mirror :D
    I didn't activate compression since the pool will only contain media files that are already compressed.


    Also, I'm not sure about the jobs; I read that there should already be a default job, so I don't need to create another one, right?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • Did you mean the scrub job? If yes, you can check here


    Manual scrubs can also be started from the ZFS plugin.
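
    If you want to schedule a scrub yourself, a simple cron entry could look roughly like this (just a sketch; Debian's zfsutils package may already ship a monthly scrub job, so check /etc/cron.d first, and "mypool" is a placeholder for your pool name):

    # /etc/cron.d/zfs-scrub (hypothetical example)
    # start a scrub of "mypool" at 02:00 on the first day of every month
    0 2 1 * * root /sbin/zpool scrub mypool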

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Not sure what happened: after something like 20 minutes I rebooted my NAS.
    After that it couldn't boot anymore and was stuck during the initramfs load.
    Here's a screenshot:


    After 2 or 3 reboots it started working again, and the zmirror is still there.
    Should I check if the zmirror is fine? If yes, how?
    I haven't put any files on it yet.

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • Should I check if the zmirror is fine? If yes, how?

    It should not be necessary. If you want to, run zpool status. More commands can be found in the ZFS cheat sheet.
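
    For reference, a healthy two-disk mirror reports something along these lines (the pool name and disk IDs below are placeholders):

    zpool status mypool
    #   pool: mypool
    #  state: ONLINE
    #   scan: none requested
    # config:
    #         NAME                STATE     READ WRITE CKSUM
    #         mypool              ONLINE       0     0     0
    #           mirror-0          ONLINE       0     0     0
    #             ata-WDC1_xxx    ONLINE       0     0     0
    #             ata-WDC2_xxx    ONLINE       0     0     0
    # errors: No known data errors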

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • That particular quote in black text above (excerpt - no fault tolerance at the pool level) is not my own.

    IMO there's really no need to discuss how zpools work, since they work as they're designed. I was answering your conclusions/assumptions from before:

    if a vdev is lost, the chance of recovering the pool or any of its data is nearly non-existent. By extension, adding additional vdevs to the pool increases risk

    If a vdev is lost then the pool is lost, and there's no need to think about 'recovering' but about restoring the latest backup. About the 'increased risk' when adding more vdevs: yes, basically true, and that's why you want redundancy at the vdev layer to prevent a vdev from failing (be it RAIDZ or zmirrors; even with the latter you can throw a few more disks at it if you want to survive more than one disk failing in a zmirror). But this 'risk' only affects availability, and when a vdev is gone... you restore from backup.


    So as long as you have a backup (which those 'I added $some redundancy, what could go wrong now?!' users usually do not have) and you have tested whether a restore from that backup really works, especially in an acceptable timeframe (which almost no one does), there's no 'risk' involved other than less availability (let's call it downtime), even if you have a pool with vdevs implementing no redundancy at all.


    IMO the real problem is (especially in the context of this thread): OMV users want some sort of data protection and are even willing to spend some money and effort on the problem. What they end up with is not data protection but availability, which in some rare cases also provides data protection (with the parity RAID implementations even some sort of data integrity).


    But instead of going the RAID route, backup would be the way to go. For 100% of productive data you need roughly 125% of backup space for a reasonable retention time, plus a backup concept (which includes the actual implementation and regular testing). The additional 25% of storage capacity is for keeping versions, so even in a worst-case scenario (ransomware eating all your data, or you having screwed up your master's thesis two months ago by accidentally deleting 20 pages and not realizing it back then) your data is still safe.



    Should I check if the zmirror is fine?

    Sure. You added complexity, so now it's up to you to test. Not only once (before you put productive data on your new storage implementation) but regularly. If you are not willing to test whether the redundancy you use now works as it should, then you clearly don't need this redundant implementation anyway.

    • Official post

    IMO there's really no need to discuss how zpools work, since they work as they're designed. I was answering your conclusions/assumptions from before:

    If a vdev is lost then the pool is lost, and there's no need to think about 'recovering' but about restoring the latest backup. About the 'increased risk' when adding more vdevs: yes, basically true, and that's why you want redundancy at the vdev layer to prevent a vdev from failing (be it RAIDZ or zmirrors; even with the latter you can throw a few more disks at it if you want to survive more than one disk failing in a zmirror). But this 'risk' only affects availability, and when a vdev is gone... you restore from backup.
    So as long as you have a backup ......... /-----/

    Other than your finer points regarding backup in your post (duly noted):


    The main point behind this thread (lose a vdev, lose the pool) remains the same. That assertion is externally referenced and peer-reviewed, and it supports the substance of the remainder of the thread, which mentioned, at the end, "with solid backup that you trust, the pool risk is no big deal".


    There's little point in rehashing this.

  • Question: how can I check how long it will take to complete when I run the scrub from the OMV interface?
    I just finished copying 1TB of data onto my ZFS mirror and did a last scrub; then I'll reboot my NAS and check that everything is OK.

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • Question: how can I check how long it will take to complete when I run the scrub from the OMV interface?

    I do not know of a way in the OMV WebUI. Maybe you can see this on some diagnostics page.


    Again, the CLI is your friend: zpool status your_pool reports the scrub progress and gives an estimate of how long it will take to finish.
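
    While a scrub is running, the relevant part of the output looks roughly like this (the numbers are purely illustrative):

    zpool status your_pool
    #   scan: scrub in progress since Sun Oct  7 10:15:32 2018
    #         312G scanned out of 1.02T at 145M/s, 1h25m to go
    #         0B repaired, 29.91% done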

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

    • Official post

    Question: how can I check how long it will take to complete when I run the scrub from the OMV interface?
    I just finished copying 1TB of data onto my ZFS mirror and did a last scrub; then I'll reboot my NAS and check that everything is OK.

    In the web GUI, in the ZFS plugin, click on your ZFS pool's line:


    Then click on Details, on the far right. While there's more information below (options and others), the popup window will display the equivalent of the zpool status poolname command.

  • I do not know of a way in the OMV WebUI. Maybe you can see this on some diagnostics page.

    ;( Sorry, that was wrong.


    Then click on Details, on the far right. While there's more information below (options and others), the popup window will display the equivalent of the zpool status poolname command.

    Thank you @flmaxey for the correction.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod
