OMV and RAID controller

  • Hello, I am a new member here. I am posting because I will soon install OMV on an HP DL380e Gen8.


    Here are the specs of the server, in case you need them.


    HP DL380e Gen8
    CPU: 2 x Intel Xeon E5-2450L
    RAM: 32 GB DDR3 ECC
    RAID controller: Smart Array P420 with 1 GB cache and battery + onboard B120i (B120i = fake RAID)
    Disks: 12 x 3.5" + 2 x 3.5"
    Network: 4 x 1 Gbit
    Power supply: 750 W



    I'm coming to you for advice: so far I have only used software RAID via mdadm under Debian Stretch, never hardware RAID, but I wanted more than 8 disk slots in the same server, which is complicated without a RAID card.
    I already have the server, but I am still waiting for drive caddies before I can start testing with OMV (probably version 4). The purpose of this installation is to build 2 RAID 5 arrays of 6 x 4 TB disks each; at first I would start from a base of 3 disks, which is the minimum for a RAID 5.


    Not really knowing hardware RAID, I wanted to ask for tips on avoiding mistakes. I know the CPU and RAM are oversized for the project, but this configuration was cheaper than the turnkey solutions, which I do not want anyway.


    So are there important things to know about installing and configuring OMV with a RAID card?
    If I have forgotten anything important, do not hesitate to tell me; I will answer as best as I can.
    I am self-taught in IT, and this is a personal installation, not a professional one.


    P.S.: Sorry, I do not speak English, so please excuse the Google translation you are reading.

  • For such a big array I would not use classic mdadm RAID, because in case of a disk failure you have a very long rebuild time. I would also not use hardware RAID, because the OS has no insight into what is going on behind the RAID controller.


    Your server supports ECC RAM, so it is predestined for ZFS. In your case I would try to activate the HBA mode of the Smart Array P420 controller. The tool to do this is called "hpssacli", which is part of the HP Service Pack for ProLiant. In HBA mode the controller is transparent to the OS, and RAID is then managed by the OS.
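    The commands look roughly like this (only a sketch: I am assuming the P420 sits in slot 0 and that its firmware is recent enough to offer HBA mode, so check the output of the first command and adjust the slot number):

        # list all controllers and their current configuration
        hpssacli ctrl all show config

        # switch the P420 to HBA mode (slot 0 is an assumption, adjust it)
        hpssacli ctrl slot=0 modify hbamode=on

    After a reboot the disks should then show up as plain /dev/sd* devices.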


    After doing that you have a perfect machine for OMV with ZFS.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Thank you for all this clarification :).


    I read some docs about it; if I understood correctly, it offers among other things RAIDZ1 and RAIDZ2, which are equivalent to RAID 5 or 6, is that it?



    Because I thought I had read that it was not possible to add disks directly to the pool? I know the principle and the advantages/disadvantages of RAID 5 and 6 well enough, but RAIDZ, which you suggest, I do not know at all.
    The idea is to gradually add hard drives to the array. For the moment I was going to start from a RAID 5 base, because it would take me too long to save up to start from a RAID 6 base.


    While waiting for answers, I will read up on ZFS.


    Edit: I just saw that disks can be added to a ZFS pool; either I misread or my sources were unreliable.


    Edit 2: At the moment I cannot run tests on the server yet. I will come back to you when it is up and running.

  • I read some docs about it; if I understood correctly, it offers among other things RAIDZ1 and RAIDZ2, which are equivalent to RAID 5 or 6, is that it?

    Yes, that is very similar.
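    Just to illustrate it with a command (only a sketch: the pool name 'tank' and the device names are made up, and in practice /dev/disk/by-id paths are preferred over sdX names):

        # RAIDZ1: one disk per vdev may fail, comparable to RAID 5
        zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

        # or RAIDZ2: two disks per vdev may fail, comparable to RAID 6
        # zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd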

    Because I thought I had read that it was not possible to add disks directly to the pool?

    It's different from classic RAID. It is possible to increase the capacity of a pool by adding a so-called 'vdev'.


    A vdev can be anything from a single disk up to a complex RAIDZ3 structure. E.g. if you have a pool with one RAIDZ2 vdev and you want to double the capacity, you can add a second RAIDZ2 vdev to the pool.
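    On the command line that looks roughly like this (again only a sketch with a hypothetical pool named 'tank' and made-up device names):

        # 'tank' already consists of one RAIDZ2 vdev of 6 disks;
        # adding a second RAIDZ2 vdev of 6 disks doubles the usable capacity
        zpool add tank raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

        # show the pool layout with both vdevs
        zpool status tank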


    The idea is to gradually add hard drives to the array

    As far as I know, it is currently not possible to increase the capacity of an already created vdev by simply adding more disks.
    Therefore the use of ZFS requires some planning for how to handle data growth.


    Edited once, last by cabrio_leo ()

  • Thank you for all the information and documentation. For my part, I found the information about adding disks in this tutorial (sorry, it is in French, but the commands stay the same if you ever want to look at it): tutoriel-tuto-zfs-linux (the article was last updated in June 2017). The question now is whether it describes the same functionality, or whether the tutorial I mentioned simply uses the term "enlarge a zpool" instead of "vdev"?


    Edit:
    I am reading the documentation you gave me and it is fascinating; thanks again.


    I just have a question regarding RAIDZ, if you or someone else can get back to me on this. Here is an answer from a French-speaking user that I found on the internet. What do you think?




    Quote:

    Basically there is no fsck for ZFS; your FS is supposed to never have any flaws.




    Now, it does happen, especially when you have filled your disks too much, so that little space is left for ZFS's automatic "hot maintenance/repair" operations (auto-repair: basically ZFS, for example in RAIDZ, is able to detect an anomaly on a member of the array (a faulty block, etc.) and use another member's data to fix the problem by copying blocks, etc. If no more blocks are available => fail, and there is no fsck tool to repair it).

    For my part, I have the feeling that this is not really accurate, or that the information is wrong.


    Because if I refer to the documentation you sent me, this can be avoided thanks to "Copy-On-Write", "Transaction Groups", "Snapshots", "Checksums" and "Copies". I have not finished reading everything yet, but it seems contrary to what I quoted above.



    There are several mechanisms (not necessarily all active by default, but they exist) that make up for the lack of a repair tool. Am I wrong?



    Sorry for all these questions, but since I do not know how this works, I prefer to ask rather than start off on the wrong foot or rely on bad information.

    Edited once, last by ducksama (), reason: adding information

  • I am not sure if I understand your last post completely. AFAIK a dedicated repair tool is not available, and it is also not necessary. Nevertheless a so-called 'scrub' should be done periodically. This checks the integrity of the ZFS file system.
    Quote Aaron Toponce: "With ZFS on Linux, detecting and correcting silent data errors is done through scrubbing the disks."
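    In practice that is a single command (the pool name 'tank' is just an example); many people run it periodically from a cron job:

        # start a scrub of the whole pool (it runs in the background)
        zpool scrub tank

        # check the progress and any checksum errors that were found and repaired
        zpool status tank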


    And yes, it is recommended to not exceed 80% of the pool size:
    Quote Aaron Toponce: "Keep pool capacity under 80% for best performance. Due to the copy-on-write nature of ZFS, the filesystem gets heavily fragmented."


    Aaron Toponce: Zpool and ZFS administration
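    The pool usage is easy to keep an eye on (again assuming a pool named 'tank'):

        # the CAP column shows how full the pool is
        zpool list tank

        # per-dataset view of used and available space
        zfs list -r tank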


    Edited twice, last by cabrio_leo ()

  • Okay, your message answers the questions I was asking myself. I am still waiting for the delivery of my caddies to start the tests; I will give you my feedback, which I hope will be useful in turn, as yours has been for me :thumbup:

  • Hello, I finally received my caddies.

    For such a big array I would not use classic mdadm RAID, because in case of a disk failure you have a very long rebuild time. I would also not use hardware RAID, because the OS has no insight into what is going on behind the RAID controller.


    Your server supports ECC RAM, so it is predestined for ZFS. In your case I would try to activate the HBA mode of the Smart Array P420 controller. The tool to do this is called "hpssacli", which is part of the HP Service Pack for ProLiant. In HBA mode the controller is transparent to the OS, and RAID is then managed by the OS.


    After doing that you have a perfect machine for OMV with ZFS.

    I am having trouble during the boot phase. I tried installing both OMV 4 and Debian 9; the installation goes smoothly, but on the reboot after the installation finishes, no disk is detected as bootable.


    With OMV I wondered whether I had hit the GRUB problem that can occur (mentioned at the bottom of the installation documentation), but with Debian 9 I chose the hard disk where GRUB should be installed and no problem was reported.
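    If it really is a GRUB problem, this is roughly what I plan to try from a Debian live/rescue shell (just a sketch: I am assuming a legacy BIOS install with the system disk on /dev/sda and the root partition on /dev/sda1, which I still need to verify on the machine):

        # mount the installed system and the pseudo filesystems
        mount /dev/sda1 /mnt
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys /mnt/sys

        # reinstall GRUB to the disk's MBR and rebuild its configuration
        chroot /mnt grub-install /dev/sda
        chroot /mnt update-grub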


    Do you have any leads, or have you already run into this problem? What surprises me the most is having this problem with Debian 9 too, which I have installed many times on other machines without any trouble.


    EDIT: Tomorrow I will try the "Erase Utility" (Reset all settings: clears all drives, NVRAM and RBSU); I read that leftovers from a previous installation may not have been purged. I'll let you know.

  • If you still have problems with your installation after trying the utility, I would recommend opening a new thread in the forum, since it is no longer RAID- or ZFS-related.


    I wish you success :)

