RAID error message

  • Hello everyone,


    Sorry for my bad English, but I hope you understand my sentences.


    While searching for a NAS system I found OMV and have tested the software a little. The RAID function is important for me.


    Therefore I set up a RAID5 with 4x5GB drives. Then I tried what happens if I add a 10GB drive (instead of a 5GB drive). Unfortunately I get the following error message, but the RAID is created correctly. Is the message only cosmetic, or is the feature not supported by mdadm?


    - OMV 5.2.2-1 with the latest updates
    - running on VMware Workstation 15.1


    Thanks for your help


    Regards Mario


  • Is the message only cosmetic, or is the feature not supported by mdadm?

    I would not say that this is only "cosmetic". Did you try to reboot? Then I would expect that you'll get a degraded array. @geaves!?

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • To me it looks correct, please see for yourself. The state is the same after a reboot.



    The question is: is a larger drive size supported by mdadm? Currently this is only a test for me, but in a real environment I will never get an identical replacement drive in a couple of years.


    Thanks again
    Mario

  • The question is: is a larger drive size supported by mdadm?

    Yes and no ;). Of course you can replace a drive with a bigger one, but the used size depends on the smallest one in the array. In your case only 5 of the 10GB are used. "Hybrid" arrays are not supported by mdadm.
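
    As an illustration of the smallest-disk rule: a minimal sketch (plain Python; the sizes are just the example values from this thread, and the formula ignores mdadm metadata overhead):

    Code:
    # Sketch: usable capacity of an mdadm RAID5 built from mixed-size members.
    # mdadm sizes every member down to the smallest one, so a RAID5 of n drives
    # gives roughly (n - 1) * min(size) of usable space.
    def raid5_usable_gb(drive_sizes_gb):
        """Approximate usable capacity of a RAID5 array in GB."""
        smallest = min(drive_sizes_gb)
        return (len(drive_sizes_gb) - 1) * smallest

    # The setup from this thread: 4x5GB plus one 10GB drive.
    drives = [5, 5, 5, 5, 10]
    print(raid5_usable_gb(drives))   # -> 20, i.e. only 5 of the 10GB drive is used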


  • In your case only 5 of the 10GB are used. "Hybrid" arrays are not supported by mdadm.


    That's no problem. For me it was important to be able to use other (bigger) sizes than the initial drives. Older RAID implementations don't support that.


    To my main question: can I switch on debug logging for better analysis? Then I would reproduce the error message.
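
    As far as I know there is no dedicated mdadm debug switch in the OMV GUI; what usually helps is to capture /proc/mdstat and the kernel log while reproducing the error. A rough sketch (Python; /dev/md0 is a placeholder for the actual array device):

    Code:
    import subprocess

    # Current array / resync state as the kernel reports it.
    print(open("/proc/mdstat").read())

    # Detailed mdadm view of the array (state, degraded, rebuild progress).
    subprocess.run(["mdadm", "--detail", "/dev/md0"], check=False)

    # Recent kernel messages; the real reason behind a GUI error often shows up here.
    subprocess.run(["journalctl", "-k", "-n", "100", "--no-pager"], check=False)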

    • Official post

    Then I would expect that you'll get a degraded array.

    I read this earlier, but I've no experience of creating a Raid device using what in effect is a virtual disk.


    There are 5 'drives', 4x5GB and 1x10GB; the array size is correct, as it's created based upon the smallest drive within the array.


    So, as an example, if the raid were set up with 4x10GB and 1x5GB, the array size would still be the same as displayed above.


    Can you use mismatched drive sizes within an array? Yes you can, but the trade-off is that the array is created based upon the smallest drive.


    So if you created a raid with 4x5TB and 1x10TB, the raid size would be the same as if you had used 4x10TB and 1x5TB; there are better ways to use the storage with mismatched drive sizes.

  • In my final system I will use 3x6TB drives. But maybe I must extend the NAS in the future. Then I have no guarantee of getting a 6TB model and would use a larger drive. Hence my test. I know the disadvantages.


    thx@all

    • Official post

    In my final system I will use 3x6TB drives. But maybe I must extend the NAS in the future.

    Then your test is OK; the error is simply telling you that you cannot do whatever it was you were attempting, because the array at that time was resyncing. It's in the error.


    You can use the GUI to remove a drive and add another, much easier than using the command line.
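
    For reference, the command-line equivalent would be along these lines (a sketch only; /dev/md0, /dev/sdx and /dev/sdy are placeholders, and failing/removing a member is destructive, so double-check device names first):

    Code:
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Mark the old member as failed and remove it from the array.
    run(["mdadm", "--manage", "/dev/md0", "--fail", "/dev/sdx"])
    run(["mdadm", "--manage", "/dev/md0", "--remove", "/dev/sdx"])

    # Add the replacement drive; mdadm starts the rebuild automatically.
    run(["mdadm", "--manage", "/dev/md0", "--add", "/dev/sdy"])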


    Personally I have no problem if a user wants to implement a Raid option, but I would never use large drives for this due to the time it takes to rebuild with large drives.

  • I understand your point, but I never thought that 6TB models count as large drives at the present time. For me a 12TB model is a large drive. Because I have only 4 SATA connectors, 6TB drives are the only realistic size for me. My goal was to start with a total capacity of 12TB (RAID 5) and the possibility to grow up to 18TB. I understand the long time for a reshape, but I don't need the storage 24/7, so the sync can take some time (up to 1 week).


    Regarding the error message: the RAID array was in "clean" state when I added the 10GB drive. And after closing the error message, the drive was successfully added. I'm not sure if the array was really in an internal sync state. But I can repeat the test and wait a while before I add the last drive. How long should that be?


    thx
    Mario

    • Official post

    But I can repeat the test and wait a while before I add the last drive. How long should that be?

    Hard to say, but the error clearly states that it was in a resync state, which means that if you were attempting to add another drive, it was simply giving a warning as to why it could not comply at that time.
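
    If you repeat the test, rather than waiting a fixed time you can simply check whether the array is still syncing before adding the drive. A small sketch that only reads /proc/mdstat and changes nothing:

    Code:
    import time

    BUSY_KEYWORDS = ("resync", "recovery", "reshape", "check")

    def md_busy():
        with open("/proc/mdstat") as f:
            return any(word in f.read() for word in BUSY_KEYWORDS)

    while md_busy():
        print("array still syncing, waiting...")
        time.sleep(60)
    print("no resync in progress - the next drive can be added")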


    TBH you could look at MergerFS + Snapraid if your Raid and system are not being continually accessed; this is what I have moved to from a Raid 5 setup.


    Whatever you do good luck with it.

  • Hello geaves,


    I understand that you don't recommend large RAID5 arrays for OMV. What do you think about RAID10, if OMV supports that? Is the restoration time as long as with RAID5?


    thx
    Mario

    • Official post

    What do you think about RAID10 if OMV supports that?

    AFAIK it does, but for that you will need to use 4 drives. Theoretically you would see 4x read and 2x write performance, but like raid 5 the fault tolerance is one drive, so 4x6TB = 12TB of usable space.


    Raid 5: 3x6TB = 12TB of usable space, theoretical read 2x, write gain virtually nothing, fault tolerance 1 drive.


    Raid 6: 4x6TB = 12TB of usable space, read and write the same as Raid 5, fault tolerance 2 drives.


    The other option is MergerFS + Snapraid


    3x6TB drives: 2 for data and 1 for the Snapraid parity, usable space 12TB, fault tolerance 1 drive. Downside: you cannot locate docker on the MergerFS mount point unless you change MergerFS options in the plugin.
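
    To put the numbers above side by side, a small capacity calculator (plain Python; the usual textbook formulas, assuming equal-size drives as in this thread):

    Code:
    # Usable space and guaranteed fault tolerance for the options above.
    def usable_tb(level, n_drives, size_tb):
        if level == "raid5":
            return (n_drives - 1) * size_tb, 1      # one drive's worth of parity
        if level == "raid6":
            return (n_drives - 2) * size_tb, 2      # two drives' worth of parity
        if level == "raid10":
            return (n_drives // 2) * size_tb, 1     # mirrored pairs, 1 failure guaranteed
        if level == "mergerfs+snapraid":
            return (n_drives - 1) * size_tb, 1      # one dedicated parity disk
        raise ValueError(level)

    for level, n in [("raid5", 3), ("raid6", 4), ("raid10", 4), ("mergerfs+snapraid", 3)]:
        space, tolerance = usable_tb(level, n, 6)
        print(f"{level}: {n}x6TB -> {space}TB usable, survives {tolerance} drive failure(s)")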


    The downside to a raid config using larger drives is the time it takes to re-sync after replacing a drive, which could lead to another drive failing during the process. I've had this happen in the workplace and they were 500GB SAS drives.


    What's important is a backup, no matter what you choose to use; there are users on here that have 8TB drives attached to an SBC, but that SBC is probably being backed up to another one.


    At the end of the day it's 'horses for courses'; you use what you are comfortable with :)

  • My idea is a little different. With a RAID10 configuration I would start with e.g. 2x8TB (as RAID1). If I need additional drives in the future, the cost of 10 or 12TB drives will maybe be the same as that of the first 8TB drives. Then I would simply merge the two RAID1 arrays (8/8TB and e.g. 12/12TB) into one RAID10. Is this possible, or does every drive need to have the same size?


    What do you think about the restoration time in this case? Better than with RAID5?
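
    A native mdadm raid10 across four mixed drives would again size every member down to the smallest one, so one way to sketch the idea of combining two different-sized mirrors is a nested 1+0: two RAID1 pairs with a RAID0 on top. The device names below are placeholders, and this is only an illustration of the layout, not a tested recipe for OMV:

    Code:
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # First mirror: the two 8TB drives.
    run(["mdadm", "--create", "/dev/md1", "--level=1", "--raid-devices=2",
         "/dev/sda", "/dev/sdb"])

    # Second mirror: the two larger drives added later (e.g. 2x12TB).
    run(["mdadm", "--create", "/dev/md2", "--level=1", "--raid-devices=2",
         "/dev/sdc", "/dev/sdd"])

    # Stripe across both mirrors; each mirror contributes its own full size.
    run(["mdadm", "--create", "/dev/md0", "--level=0", "--raid-devices=2",
         "/dev/md1", "/dev/md2"])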

    • Official post

    What do you think about the restoration time in this case? Better than with RAID5?

    Don't know that answer, but restoring a raid 1 would be quicker than a raid 5; add in raid 10 and my guess would be you're back to raid 5 times, as it combines mirroring and striping.
