Mirror disappears after reboot

    • Official post

    Your claim is based on what, exactly?
    Btrfs' RAID-1/RAID-10 has been stable for ages; it just needed more than two devices, because with a simple two-disk mirror and one failed disk the mirror went read-only. The problem is known: use three disks, done.
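    For reference, a minimal sketch of what such a three-device btrfs RAID1 looks like from the CLI. The device names and mount point are placeholders, not anything taken from this thread:

    ```
    # Create a btrfs filesystem that mirrors both data and metadata across
    # three devices; with three members, losing one disk still leaves a
    # filesystem that can be mounted read-write in degraded mode.
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

    # Mount via any member device and check how the chunks are laid out
    mount /dev/sdb /srv/btrfs-mirror
    btrfs filesystem df /srv/btrfs-mirror
    ```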


    According to the official docs this has now been fixed in 4.13; the status is 'OK', while read performance could still be improved: https://btrfs.wiki.kernel.org/index.php/Status

    As I read the same page today, it says BTRFS RAID1 is "Mostly OK". When devs and their docs say "Mostly OK", that does not equate to "OK". Still, to my surprise, there has been a change in status since I last looked at it, which was just a couple of months ago.


    This is with reference to BTRFS RAID1 running in degraded mode (1 disk) and failing into a permanent "read only" mode. (Which was the primary reason why I refused to adopt it.)
    As I dug into it (getting on the BTRFS project mailing list), the devs said that they had corrected most of the problems with RAID1 a few months ago, but since BTRFS is rolled into the kernel development cycle, there was no knowing when those fixes would be released and make it out into userland.


    Here's the GitHub pull request from July 4th, 2017, for version 4.13. Have these changes made it out to userland yet, in normal release cycles? In the aggregate, unlikely. When will they get out? Who knows. (And the answers to these questions are not my opinions; they're those of the devs.) Progress has been "glacial".


    As I check the BTRFS version on my own box, it looks like I have 4.7.3. So it appears that the BTRFS RAID1 instability, in userland, is still an issue.
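    For anyone who wants to check the same thing on their own box, the usual commands are below. Note that "4.7.3" here is the btrfs-progs (userspace tools) version; the RAID1 fixes discussed above live in the kernel itself, so the kernel version is the one that matters:

    ```
    # Kernel version (where the btrfs RAID1 code actually lives)
    uname -r

    # btrfs-progs (userspace tools) version
    btrfs --version
    ```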
    _________________________________________


    So, in the way of practical advice for Blabla, I stand behind what I said earlier. (And I don't see "use 3 disks, done" as being practical for running a RAID1-equivalent mirror.) ZFS is stable, mature and ready to go now, with a plugin that's easy to install, and it requires only 2 disks.

  • As I read the same page today, it says BTRFS RAID1 is "Mostly OK".

    In the performance column. And the info applies to the most current kernel (4.13), of course. I don't care what you recommend to whomever, I just want the spreading of wrong information here to stop (the 'BTRFS RAID1 ... not stable and it's unlikely to be for the next couple years' you wrote above).


    Anyone wanting to deal with non-anachronistic modes has to do his own homework, which is pretty easy since all this information is freely available on the net. And yes, a zmirror is something different from a btrfs RAID1, and both differ from mdraid's RAID1 implementation.

    • Official post

    In the performance column. And the info applies to the most current kernel (4.13), of course. I don't care what you recommend to whomever, I just want the spreading of wrong information here to stop (the 'BTRFS RAID1 ... not stable and it's unlikely to be for the next couple years' you wrote above).
    Anyone wanting to deal with non-anachronistic modes has to do his own homework, which is pretty easy since all this information is freely available on the net. And yes, a zmirror is something different from a btrfs RAID1, and both differ from mdraid's RAID1 implementation.

    Before you jumped in to "correct" everyone, this thread was about a doable, practical solution for Blabla's mdadm issue. "Practical" means what can be done now; not in 6 months, or 1 or 2 years from now. "Practical" also might be construed to include what is available in userland, in normal release cycles - not the latest releases on GitHub, which are still subject to testing and revision.


    Right now, the BTRFS RAID1 instability is on the boxes of the majority of OMV users. Right now, if Blabla builds a BTRFS mirror, he may be exposed to the permanent read-only condition if he loses one of two hard drives. That's not wrong information. Need more be said?
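    For context, the failure mode being described looks roughly like this on affected kernels. This is only an illustrative sketch; the device name and mount point are placeholders:

    ```
    # With one of the two mirror members gone, a normal mount fails and the
    # filesystem has to be mounted explicitly in degraded mode:
    mount -o degraded /dev/sdb /srv/btrfs-mirror

    # On kernels with the two-disk RAID1 bug, only the first degraded
    # read-write mount succeeds; after that the surviving device can
    # only be mounted read-only:
    mount -o degraded,ro /dev/sdb /srv/btrfs-mirror
    ```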


    As kernel/BTRFS development progresses, when 4.13 finally arrives in userland, the issue may purge itself. But from a practical standpoint, speculation on what "might be", "when it could be", and similar discourse is devolving from the thread topic, which is - let's remind ourselves once again - a stable RAID1 solution that is available to Blabla, right now.

  • ZFS started on a Unix (Solaris); it was then adopted by FreeBSD, Linux and others.


    On Linux (which is what you're talking about, not 'Unix') there is and has been a lot of confusion about licensing issues (see here for example), but technically ZoL (ZFS on Linux) works great, especially the most recent versions 0.7 and above (not usable with OMV yet).


    And now that Oracle (who bought Sun together with ZFS and Solaris years ago) killed Solaris we might see ZFS on Linux rising even more.

    Why isn't ZFS 0.7 usable on OMV yet? Will it be supported by OMV4?


  • So there is really no suggestion other than "buy an external box for the second hard drive"? :( As someone suggested in the ZFS topic, with my build ZFS is not the best solution.
    If possible I would like to have my RAID1 working :(


    Do you think it is worth trying to create the RAID1 directly from the CLI and call it md128 instead of md0?
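    For what it's worth, here is a rough sketch of what that would look like from the CLI. The device names are placeholders for your actual disks, and the mdadm.conf/initramfs step is the usual fix when an array "disappears" after a reboot because it was never recorded for auto-assembly:

    ```
    # Create the mirror with an explicit device number (md128 in this example)
    mdadm --create /dev/md128 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # Record the array so it is assembled automatically at boot (Debian/OMV paths)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    ```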



    • Official post

    An external box?


    If you have mdadm RAID1 and OMV running on your server right now, you can run OMV 3.0 with the ZFS plugin. (You can.) You have two 4TB WD Reds, right? Those two drives are all you need for a ZFS mirror. (**Along with your backup host or device, to put your data back on the newly created ZFS mirror.**)
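    To give an idea of what that amounts to under the hood, a two-disk ZFS mirror is roughly the following at the CLI; the plugin's GUI does the equivalent for you. The pool name "tank" and the by-id paths are placeholders:

    ```
    # Create a mirrored pool from the two 4TB drives; using /dev/disk/by-id
    # paths avoids trouble if drive letters change between boots
    zpool create tank mirror \
        /dev/disk/by-id/ata-WDC_WD40EFRX_DISK1 \
        /dev/disk/by-id/ata-WDC_WD40EFRX_DISK2

    # Confirm the mirror layout and health
    zpool status tank
    ```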


    If the upgrade path worries you:
    I'm reasonably sure that as OMV4 development proceeds (remember, it just got started), it shouldn't be too long before upgrading to OMV4 and its version of ZFS is possible.
    ___________________________________________


    You have the thread to get started - as noted before: ZFS Thread



    Scroll down to "Setting up ZFS on OMV is pretty straight forward" and start there.
    If you have an up-to-date OMV3 build, you wouldn't have to rebuild from scratch. Just add the ZFS plug-in. (Either way, your call.)


    - The only differences for you would be, when you get to the Create ZFS Pool dialog box, in Pool Type you'd select Mirror (for your 2x4TB drives). And given the latest on the subject, using lz4 in the compression entry (which you get to when you edit properties) will work fine. Otherwise, leave it as it is.
    - The rest applies.
    - After the pool is created, the maintenance job items are optional but a once-a-month scrub is a really good idea. (CLI equivalents for the compression and scrub settings are sketched below this list.)
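    For reference, a rough sketch of the CLI equivalents of those two settings; the pool name "tank" is a placeholder for whatever you name your pool:

    ```
    # Enable lz4 compression on the pool (datasets inherit it)
    zfs set compression=lz4 tank

    # Run a scrub by hand; a scheduled job can do the same once a month
    zpool scrub tank
    zpool status tank   # shows scrub progress and results
    ```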


    Not even looking at the benefits of using ZFS:
    The ZFS suggestion was to keep you from having to reassemble your mdadm RAID1 array every time you reboot.
    If you want to continue using mdadm RAID1, the only practical recommendation I can come up with is: get a good UPS.

  • - After the pool is created, the maintenance job items are optional but a once-a-month scrub is a really good idea.

    As @subzero79 explained here, a scrub job which is executed on the second Sunday of every month is already defined after installing the ZFS plugin. And in my personal experience it really is executed :)
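    (I don't know exactly how the plugin wires this up, but since cron has no direct "nth weekday of the month" syntax, a second-Sunday job is typically expressed like the sketch below; "tank" is a placeholder pool name.)

    ```
    # /etc/cron.d/zfs-scrub (illustrative): run at 02:00 on days 8-14,
    # but only when that day is actually a Sunday
    0 2 8-14 * * root [ "$(date +\%u)" -eq 7 ] && zpool scrub tank
    ```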


    And as I explained here, if the ZFS event daemon (ZED) is configured correctly with a valid email address, an email is sent when the scrub job is done.
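    For reference, the relevant ZED setting lives in /etc/zfs/zed.d/zed.rc. Depending on the ZoL version the variable is ZED_EMAIL_ADDR (0.7+) or ZED_EMAIL (0.6.x); the address below is obviously a placeholder:

    ```
    # /etc/zfs/zed.d/zed.rc (excerpt)
    ZED_EMAIL_ADDR="admin@example.com"   # where ZED notifications are sent
    ZED_NOTIFY_VERBOSE=1                 # also notify on successful scrub completion

    # then restart the daemon so the change takes effect
    # systemctl restart zfs-zed
    ```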


    • Official post

    As @subzero79 explained here, a scrub job which is executed on the second Sunday of every month is already defined after installing the ZFS plugin. And in my personal experience it really is executed :)
    And as I explained here, if the ZFS event daemon (ZED) is configured correctly with a valid email address, an email is sent when the scrub job is done.

    Thanks for that! :) You're proving to be a fountain of information and knowledge on ZFS in general and, in particular, the way it's implemented on OMV. The second Sunday of this month just passed, so I can check the log / pool status. Yep, it's there. With scrubs already running twice a month, there's no point in configuring a third.
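    (For anyone wanting to run the same check, the last scrub date and result show up in the "scan:" line of zpool status, and zpool history keeps a record of past scrubs; the pool name is whatever you called yours.)

    ```
    zpool status tank                 # "scan:" line shows the last scrub and its result
    zpool history tank | grep scrub   # record of past scrub commands
    ```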


    On ZFS events, I see you even included scripts! :thumbup: I'll look them over and apply. Again, thanks. :)
    _________________________________


    But for beginners, a "point and click" way to use the Web GUI to generate zpool status reports and e-mail notifications may be useful as well. Being able to see currently hidden ZFS jobs in the GUI would be helpful, but that's asking a lot of the devs.
    (We really need a beginner's guide that covers some of this.)
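    Until something like that exists in the GUI, one stopgap is a small scheduled job that mails the pool status. This is only a sketch; it assumes a working mail setup on the box (e.g. OMV's notification settings plus a "mail" command from bsd-mailx or mailutils), and the address is a placeholder:

    ```
    # /etc/cron.d/zpool-report (illustrative): weekly status mail, Mondays 08:00
    0 8 * * 1 root zpool status | mail -s "zpool status report" admin@example.com
    ```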
