HP P420i, JBOD, Software raid vs hardware raid, etc

  • Hi,


    I'm currently in the process of migrating my data store, which is located on an EOL (and too small) Synology device and on an HP/CentOS server running a few NFS shares. The end result will serve mixed SMB and home use.


    The Synology device works well as far as I am concerned, but its small size (4 drives) makes flexibility a problem, especially since I am planning to go RAID 10 all the way (uptime is so important that whatever extra cost this implies is irrelevant in my context). My Linux server also works, BUT everything must be done manually, and I wanted the ease of use of a UI to manage NFS/iSCSI/etc. (I know iSCSI and OMV 4 don't mix.)


    So, at this point, I am thinking of an HP DL380 G8 with a P420 RAID card and a 25-bay SFF cage. The 25 bays will be overkill, but it gives me space to add a few drives, migrate data to the new drives and repurpose the old drives (many times over), and just add drives as needed - I tend to overspec my stuff, but it has always worked out for me.


    My questions are:


    - Should I go for hardware RAID 10, or put the card in JBOD mode and go software RAID 10, given that the P420 RAID card is already there?


    - One aspect I like about my older HP server/RAID (I am using the hardware RAID right now) is that when a drive fails, an LED lights up on the drive tray and I know which one to change. No questions need to be asked; I just do a backup and swap the very clearly identified drive. Will this still be the case if I use software RAID? I imagine the hardware RAID won't show up as degraded in OMV, but will only show up as degraded in the HP iLO UI if a drive fails.


    - I know FreeNAS absolutely recommends going with JBOD and software RAID, but it was also obvious from forum posts that a drive failure wasn't going to be as easy to fix as I mentioned above, since you had to jump through hoops to figure out which drive was broken. Is it the same with OMV, or will the UI give some indication of which tray needs to be changed?


    - Finally, if I use one drive for the OS only and the other 24 drives (in various RAID configurations) for data, would I (for example) be able to wipe the OMV drive, install CentOS (or another OS), and still mount the data drives easily? How much do the data drives rely on OMV being there? (Not at all, or a lot?)


    My own little virtual machine test of OMV is positive - I like the UI - but I can't simulate the P420 card and tray LEDs in it, so I wanted to know what all this implies if I did buy the DL380 G8. Also, if hardware RAID turns out to be the answer to my needs, I can downgrade to a G7 (the RAID cards on the G7 don't do JBOD, so software RAID would only be possible via single-drive RAID 0 volumes, which I believe is awkward UI-wise).


    Thanks a lot for whatever input you can provide.

    • Official post

    If uptime is that important, is this a business server?


    RE your questions:


    - I'd go with software RAID for a single, but significant, reason. Hardware RAID permanently ties the array to a specific model of card. With software RAID, mdadm writes RAID metadata (a superblock) onto each member block device, which makes the array portable: a software RAID data array can be removed from one hardware platform and inserted into another. Also, hardware RAID may have quirks that users may not know about (drive size limits, incompatible drives, expansion limits, etc.). You can see that on-disk metadata for yourself, as sketched below.
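    A quick way to illustrate the portability point (a minimal sketch; /dev/sdb is a placeholder for one of your actual array members):

        # Show the RAID superblock mdadm wrote onto a member disk (array UUID, RAID level, member role)
        mdadm --examine /dev/sdb

        # Summarize the arrays the kernel has assembled from that on-disk metadata
        mdadm --detail --scan
        cat /proc/mdstat

    Because the array's identity lives on the disks themselves, any Linux box with mdadm can reassemble it.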


    - With software (mdadm) RAID, it stands to reason that software has to be used to query the array to find out which drive has gone (or is going) south. However, in OMV there is a Detail button in RAID Management that provides information on the member drives of an array. If you want to see how this works, you can set up an array in your OMV VM: just create a sufficient number of additional small virtual drives (I use 5GB per drive) and set them up in a RAID 10 array. (You could even populate them with data and set up SMB shares.) If you create a few extra virtual drives, you can fail (delete or remove) one of the working drives from the VM array and see what's involved in replacing it. Even if it's necessary to get on the command line, the mdadm commands are pretty straightforward - see the sketch below.
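    For reference, the command-line version of that drill looks roughly like this (a sketch only; /dev/md0, /dev/sdf and /dev/sdg are placeholder names for your array and drives):

        # See which member is marked faulty or missing
        cat /proc/mdstat
        mdadm --detail /dev/md0

        # Confirm the physical identity of the suspect drive by its serial number
        smartctl -i /dev/sdf

        # Mark it failed (if it isn't already), pull it from the array, and add the replacement
        mdadm --manage /dev/md0 --fail /dev/sdf
        mdadm --manage /dev/md0 --remove /dev/sdf
        mdadm --manage /dev/md0 --add /dev/sdg

        # Watch the rebuild progress
        watch cat /proc/mdstat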


    - If you prefer rebuilds on the hardware RAID card, obviously, it's your call.


    - OMV runs on Debian and, by default, requires the exclusive use of the boot drive. If using software RAID, as noted above, I believe rebuilding the boot drive with a Debian-based server would provide the best chance of recognizing and integrating a preexisting RAID array created by a Debian server. However, this is something that can be tested in a VM as well. I use VirtualBox, which has an easy-to-use VM cloning feature: clone the VM, with the RAID array, then rebuild the boot drive with CentOS and see if it will integrate the existing array. The sketch below shows roughly what that looks like on the command line.
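    On the fresh install, re-adopting the array generally comes down to something like this (a rough sketch, assuming mdadm is installed and the assembled array comes up as /dev/md0; adjust names and mount points to suit):

        # Scan all block devices for mdadm superblocks and assemble any arrays found
        mdadm --assemble --scan

        # Record the array in the new OS's config so it assembles at boot
        mdadm --detail --scan >> /etc/mdadm.conf      # /etc/mdadm/mdadm.conf on Debian

        # Mount the filesystem that lives on the array
        mkdir -p /mnt/data
        mount /dev/md0 /mnt/data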


    - I doubt that the LEDs are going to work, even in JBOD mode (unless the card's LED handling works under Debian). There is a possible workaround, sketched below.
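    One possible workaround, assuming the backplane exposes standard SES/SGPIO signalling (an assumption - I haven't tried it behind a P420 in HBA mode): the ledmon package can blink the locate LED of a specific drive tray from Linux.

        # Install the ledmon package (Debian)
        apt-get install ledmon

        # Blink the locate LED on the tray holding /dev/sdf, then turn it off again
        ledctl locate=/dev/sdf
        ledctl locate_off=/dev/sdf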


    - It might be possible to load OMV onto the Synology box. Using Google (or another search engine), search on the terms OMV, install, Synology, etc. You'll probably get more hits on this forum that way than by using the forum's own search function.


    _______________________________________________________________________________________


    If it were me, I'd use ZFS with mirror vdevs to populate a pool, especially if this is a business server with a decent amount of RAM. You get it all in one file system: RAID drive aggregation, LVM functions, bitrot protection, and multi-tiered, versioned local backup via snapshots, with nearly no additional space required (for the backups). But if you haven't used ZFS before, there's a significant learning curve involved. A rough idea of what the setup looks like is sketched below.
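    To give a feel for it (a minimal sketch; the pool name "tank" and the sdX device names are placeholders, and on real hardware you'd want /dev/disk/by-id paths instead):

        # Create a pool from two mirror vdevs (the ZFS equivalent of RAID 10)
        zpool create tank mirror sdb sdc mirror sdd sde

        # Carve out a filesystem and take a snapshot as a cheap, versioned local backup point
        zfs create tank/shares
        zfs snapshot tank/shares@before-migration

        # Growing the pool later is just a matter of adding another mirror vdev
        zpool add tank mirror sdf sdg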
