Mirror disappears after reboot

  • Hi everyone.
    I'm having a problem building a mirror with RAID management.
    I'm trying to build a mirror with two Seagate IronWolf 6TB drives. The problem is that every time I build it, it disappears after a reboot.


    If I rebuild it, the data is still there, so I'm not losing anything, but I don't want to rebuild the mirror every time I need to reboot my NAS!


    Can someone help me resolve this problem?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

    • Official post

    Can someone help me resolve this problem?

    I've been trying to figure this out for years and haven't come up with anything.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    So it's a known problem?

    Known problem for some. I've never had an issue. Just read the Debian, Ubuntu, Red Hat, etc. forums and you will see reports. This isn't something OMV is causing. I've always thought the selection of hardware has something to do with it. I also think the number of times the server is shut down affects it as well. Just another reason not to use RAID (most don't need it anyway; I don't on most systems).


  • So there is no solution? In that case I'll need to send my HDDs back and buy a different model :/


    • Official post

    So there is no solution?

    Don't use raid?? If I had a solution, why would I still be trying to figure it out?


    In that case I'll need to send my HDDs back and buy a different model

    Why? Format them each as ext4 and set up rsync to sync them hourly or something.
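
    A minimal sketch of that rsync approach (the mount-point paths below are made-up examples; substitute the paths OMV created for your ext4 filesystems):

    ```shell
    #!/bin/sh
    # Sync the first data disk to the second, making DST an exact copy.
    # Paths are hypothetical -- use your own OMV mount points.
    SRC=/srv/dev-disk-by-label-data1/
    DST=/srv/dev-disk-by-label-data2/

    # -a preserves permissions, ownership, and timestamps;
    # --delete removes files from DST that no longer exist in SRC.
    rsync -a --delete "$SRC" "$DST"
    ```

    Run hourly from OMV's Scheduled Jobs (or a crontab entry), this gives you a copy that lags by at most an hour rather than a live mirror.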


  • Don't use raid?? If I had a solution, why would I still be trying to figure it out?


    Why? Format them each as ext4 and set up rsync to sync them hourly or something.

    Maybe someone on another forum found a solution.
    I'll try one last time to build the RAID, then I'll think about your solution :)


    thanks!


    • Official post

    Maybe someone on another forum found a solution.

    Just to warn you... I've looked. There are plenty of solutions that work some of the time, but none that work all of the time. Each time you sync that array, it's a lot of wear and tear on your drives as well.


  • Can you link me to some solutions, or suggest how to find them? :( I'm searching for things like "linux mirror disappear", but I can't find much :(


    • Official post

    Can you link me to some solutions, or suggest how to find them? I'm searching for things like "linux mirror disappear", but I can't find much

    You really are hoping this magically fixes itself. If only it were that easy. Important search terms are: mdadm, missing, degraded.
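
    For reference, the usual Debian-side checks and the standard step for persisting an array across reboots look roughly like this (a sketch, run as root; /dev/md0 is an assumption, use your own array name):

    ```shell
    # Show which arrays the kernel currently knows about.
    cat /proc/mdstat

    # Detailed state of one array (device name is an assumption).
    mdadm --detail /dev/md0

    # Append the array definitions to mdadm.conf so they are known at boot...
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # ...and rebuild the initramfs, which performs the early assembly.
    update-initramfs -u
    ```

    This doesn't cure every cause of a disappearing array, but a missing or stale mdadm.conf in the initramfs is the most commonly reported one on the Debian/Ubuntu forums.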


  • Thanks! I found a lot of posts now.
    I found this one, but I don't know how to do it:

    • Your RAID device is not being discovered and assembled automatically at boot. To provide for that, change the types of the member partitions to 0xfd (Linux RAID autodetect) for MBR-style partition tables, or to FD00 (same) for GPT. You can use fdisk or gdisk, respectively, to do that. mdadm runs at boot (off the initramfs), scans the available partitions, reads metadata blocks from all of them having type 0xfd, and assembles and starts all the RAID devices it is able to. This does not require a copy of an up-to-date mdadm.conf in the initramfs image.

    Which method to prefer is up to you. I personally like the second, but if you happen to have several (many) RAID devices and only want to start some of them at boot (those required for a working root filesystem) and activate the rest later, the first approach, or a combination of the two, is the way to go.


    I would try that, since everything else didn't work :(


    • Official post

    I found this one, but I don't know how to do it:

    OMV doesn't use partitions for mdadm raid arrays. So, this doesn't apply.


    • Official post

    Can you link me to some solutions, or suggest how to find them? :( I'm searching for things like "linux mirror disappear", but I can't find much :(

    Since this seems to be an mdadm RAID issue:


    Have you given thought to a ZFS mirror? I've been running a ZFS mirror for a while and it's been trouble-free. You're also going to get benefits that mdadm doesn't offer, like bitrot protection, self-healing, etc.


    If you have a backup, now would seem to be the time to give ZFS a try.

  • I've never tried ZFS; I thought it was a FreeBSD filesystem.


    • Official post

    There's a ZFS plugin for OMV, and it works just fine. (openmediavault-zfs 3.0.18) If you install it, I did a little "How To" in a thread: ZFS Thread
    Scroll down to "Setting up ZFS on OMV is pretty straight forward" and start there.


    The only difference for you would be, when you get to the Create ZFS Pool dialog box, in Pool Type you'd select Mirror (for your 2x4TB drives).


    As noted in the thread, after the pool is setup, I'd set up a monthly scrub to take full advantage of ZFS's self healing properties. The other stuff is, obviously, optional.
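
    On the command line, the equivalent of that Create ZFS Pool dialog would look roughly like this (a sketch; the pool name and disk IDs below are made-up placeholders, run as root):

    ```shell
    # Create a two-disk mirror pool. Using /dev/disk/by-id names keeps the
    # pool stable across device renames. The IDs below are placeholders.
    zpool create tank mirror \
      /dev/disk/by-id/ata-DISK1-SERIAL \
      /dev/disk/by-id/ata-DISK2-SERIAL

    # Verify both halves of the mirror are ONLINE.
    zpool status tank

    # Monthly scrub, e.g. as a cron entry, to exercise ZFS's self-healing:
    # 0 3 1 * * /sbin/zpool scrub tank
    ```

    The scrub reads every block, checks it against its checksum, and repairs it from the other half of the mirror if it doesn't match, which is the self-healing mentioned above.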

  • I don't want to provoke here, but would it be better to use FreeNAS over OMV if you want to use ZFS?
    I mean, from what I know, ZFS on Unix is an unofficial port. For stability and security reasons, wouldn't it be better to use ext4/Btrfs over ZFS if I want to remain on Unix?


  • I mean, from what I know, ZFS on Unix is an unofficial port.


    ZFS started on a Unix (Solaris); it has since been adopted by FreeBSD, Linux, and others.


    On Linux (which is what you're actually talking about, not 'Unix') there is and has been a lot of confusion about licensing issues (see here for an example), but technically ZoL (ZFS on Linux) works great, especially the most recent versions, 0.7 and above (not usable with OMV yet).


    And now that Oracle (who bought Sun, together with ZFS and Solaris, years ago) has killed Solaris, we might see ZFS on Linux rise even more.

  • First off, you are not on Unix.


    Secondly, many packages have been ported between Unix-like variants, one of the most widely ported being OpenSSH. I would venture to say that the number of installations using a ported version of OpenSSH far exceeds the number running on the Unix-like variant it was originally developed for.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    • Official post

    For stability and security reason shouldn't it better to use ext4/btrf over zfs if I want to remain on unix?

    After looking over the ZFS thread: when you "edit pool" properties, you may or may not want to turn compression on. It wouldn't hurt anything to turn it on, and it might save some disk space. Either way, your call.


    When you mentioned security above: the other pool edits are done to duplicate (more or less) Linux file/folder permissions. For all practical purposes, after the edits, security will be the same on the ZFS pool as it would be with ext4.


    BTW: I passed up BTRFS RAID1 because it's not stable and it's unlikely to be for the next couple of years. If you want a functional equivalent of mdadm RAID1 that is stable, a ZFS mirror is the only real choice available.

  • I passed up BTRFS RAID1 because it's not stable and it's unlikely to be for the next couple of years

    Your claim is based on what exactly?


    Btrfs' RAID-1/RAID-10 has been stable for ages; it just needed more than 2 devices, since with a simple two-disk mirror and one disk failed, the mirror went read-only. Problem known, use 3 disks, done.


    According to official docs this has been fixed with 4.13 now, status is 'OK' while read performance still could be improved: https://btrfs.wiki.kernel.org/index.php/Status
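
    The three-device Btrfs RAID-1 described above can be sketched like this (device names and mount point are assumptions, run as root):

    ```shell
    # Mirror both data (-d) and metadata (-m) across three devices, so the
    # filesystem can stay writable with one disk failed.
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

    # Any member device can be named in the mount.
    mount /dev/sdb /mnt/data

    # Check the allocation profiles actually in use (should show RAID1).
    btrfs filesystem df /mnt/data
    ```

    Note that Btrfs RAID-1 keeps two copies of each block regardless of how many devices are in the filesystem; the third device is what keeps the two-copy guarantee satisfiable after a disk failure.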
