RAID progression

  • I am in the process of setting up a media server based on OMV and intend to use RAID 5 for the data storage. Ideally I want to start with a single 8 TB drive (which will be big enough for a while) and then add more as and when finances allow. But can I do this without having to back up, format and copy it all back again?


    Ideally I'd like to start with a single HD on its own. Then add another identical HD for security (not additional storage), then be able to add another to grow the size of the array (and then another etc.). Would this be possible?


    I am fully aware of what RAID means and that it is not a backup; that is not the issue. It's just a question of how OMV's RAID implementation (md, I believe) can cope with growing from a single independent HD to an array of 5 drives. So I hope someone can advise.

    • Official Post

    But can I do this without having to back up, format and copy it all back again?

    No

    Then add another identical HD for security

    There's security in mdadm, must have missed that :)

    then be able to add another to grow the size of the array (and then another etc.). Would this be possible?

    No


    8TB drives in an mdadm raid take a lllllloooooooonnnnnggggg time to sync. RAID 1 allows for one drive failure, so let's say you have to replace a failing drive within that mirror: during that sync the rebuild stops because the remaining good drive has failed. Bye bye data. The same goes for RAID 5: it allows for 1 drive failure within the array, and if 1 of the 2 good drives dies whilst the raid is rebuilding, bye bye data.
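
    If you do go down this road, at least know how to watch a sync from the shell. A minimal sketch (the array name /dev/md0 is an assumption, not something OMV guarantees):

        # Overall state of all md arrays, including rebuild/resync progress
        cat /proc/mdstat

        # Refresh every few seconds while a sync runs
        watch -n 5 cat /proc/mdstat

        # Detailed per-array state, including failed and spare members
        mdadm --detail /dev/md0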

    Raid is not a backup! Would you go skydiving without a parachute?


    OMV 7x amd64 running on an HP N54L Microserver

  • You can set up a degraded RAID1 (i.e. a single disk from a mirror) using the command line mdadm tools - but this is not supported in the OMV GUI and I assume you would get incessantly spammed re the array being degraded?
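
    For illustration only, a minimal sketch of that approach: mdadm accepts the keyword missing in place of a device, which creates the mirror in a degraded state from day one (the device names /dev/sdb and /dev/sdc are assumptions):

        # Two-device RAID1 with only one member present ("degraded")
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb missing

        # Later, when the second disk arrives, add it and let the mirror sync
        mdadm --manage /dev/md0 --add /dev/sdc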


    Craig

    • Official Post

    Yeah I know - but no worse than having no RAID as he intends to do - and it gives him a path to start with and get to RAID in the future.

    Have you ever built a raid with 8TB drives? Paint dries faster ^^ I don't have an issue with users setting up a raid, provided:


    1) They understand the potential pitfalls

    2) They have some understanding of mdadm command line tools

    3) They can troubleshoot errors themselves and know how to resolve them (the usual starting points are sketched after this list)

    4) They have a backup in place should the raid go down the sh!tter
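
    For point 3, the usual starting points are the kernel's md status file and mdadm's own query options. A rough sketch, assuming an array called /dev/md0 with a member /dev/sdb (both hypothetical names):

        cat /proc/mdstat               # state of all md arrays at a glance
        mdadm --detail /dev/md0        # per-array detail: failed/spare members, sync status
        mdadm --examine /dev/sdb       # md superblock info on one member disk
        journalctl -k | grep -i raid   # kernel log entries from the md driver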

  • Yep - I am right in the middle of doing a RAID with 4 x 6TB and 2 x 8TB drives right now!


    Yep, rebuilding one from a failed member is not a fun time!


    That's why I did not tell him how to do it - if he wants to go and research he can, and will hopefully learn enough along the way!


    Craig :)

    • Official Post

    Yep - I am right in the middle of doing a RAID with 4 x 6TB and 2 x 8TB drives right now

    As one single array - and you'll lose the extra space on the 8TB drives, but you know that anyway :)


    8| You must have a lot of painting to do :D:D I've gone back to using ZFS

  • Yep, I know that - I previously tried it from the command line and OMV did not appear happy, as I use partitions rather than raw disks.


    The 8TB drives are new to the system and I wanted to make sure that doing it the "OMV" way would get them set up OK before I go back, blow it all away and do it with partitions so I can see how OMV handles it.


    Craig

  • …8TB drives in an mdadm raid take a lllllloooooooonnnnnggggg time to sync. RAID 1 allows for one drive failure, so let's say you have to replace a failing drive within that mirror: during that sync the rebuild stops because the remaining good drive has failed. Bye bye data. The same goes for RAID 5: it allows for 1 drive failure within the array, and if 1 of the 2 good drives dies whilst the raid is rebuilding, bye bye data.

    Why such a big downer on RAID? As I said, I am well aware of what RAID means. Of course level 5 was specifically developed to cope with the hardware failure of a single drive, and if a second fails before the first has been replaced and the array fully resynced, then obviously data will be lost. But what's the alternative? No RAID means a single failure and data IS lost - are you trying to suggest that is a better solution?


    As I said, this is for a media server, and if the data is lost, that's just some missing TV time. It's not gonna cause WW3 to break out, and it doesn't warrant the significant cost of duplicating the entire storage space for a full backup, which is also not required for individual file reversion reasons: the files will be stored and then not changed. So I only need to prevent total data loss from HDD failure, and the chances of 2 drives failing at the same time are remote. Not impossible, I grant you, but highly unlikely. Certainly a far smaller chance than that of a single drive failure.


    So, I do wish to take advantage of what RAID 5 offers, i.e. a good chance to avoid loss of data due to the failure of a single drive. I was using RAID 5 very effectively 30 years ago and I cannot believe it is less effective now than it was then.


    I've similarly been using the command line in *nix for 30 years, so do not presume I am ignorant or incapable of using it now. However, for speed and simplicity, I would prefer to administer this server through the web GUI as much as possible; but the first priority is to achieve the functionality I require, and if that requires using the CLI, then so be it.


    Having said that, the server is not yet complete and I have no prior experience of OMV, nor mdadm on any flavour of *nix, hence my questions here.


    Why is RAID 5 based on 8 TB drives such a bad idea? Does array recovery take more than twice as long as for 4 TB drives? Or is there some capacity threshold above which recovery time increases exponentially? I am just trying to understand the reasons for your criticism of a large RAID 5 array in mdadm.


    Since there's no need to duplicate the entire data store as a full backup (and I certainly want to avoid the significant expense of that), but you opine that RAID 5 is such a bad idea, what is your suggestion?

    • Official Post

    but you opine that RAID 5 is such a bad idea, what is your suggestion

    I answered the questions from your first post; if that is sufficient for you, then I fail to understand the relevance of your post above. As far as I am concerned I have no problem if a user wants to run a raid system - it's all down to personal choice.

  • Yeah, OMV uses the full block device. TBH I never tried partitions, but if you create the raid on the CLI it should still display in OMV's raid management - you are then committed to using the CLI for raid management, though.

    Yeah, it's how I have always done it on my home systems anyway - it gives me the flexibility to rehome older drives from online production to nearline backup and get a few more years of work out of them.


    Will report back (in about 50 years when it finishes!!)



    • Official Post

    it gives me the flexibility to rehome older drives from online production to nearline backup and get a few more years of work out of them

    Interesting. I have about eight 3.5" drives I have decommissioned, but they are all various sizes. I have a habit of using these as individual archive drives, so they get connected if and when needed.


  • The issue with RAID 5 is a fairly straightforward one as the drives get bigger.


    1) Usually people will buy at least a couple of drives at the same time - so they have about the same amount of wear and tear on them.

    2) Let's say one fails: you are diligent, diagnose the issue the same day, get a replacement drive the same day and install it.

    3) There is a sequence of commands to go through to remove a failed drive and replace it with a new one - easy to stuff up and get wrong on the way through - but ignore that as a problem and assume you get it right (a rough sketch of the sequence follows this list)

    4) There is an incredible amount of stress on all of the remaining drives (some of which are older than or the same age as the failed drive) whilst rebuilding an array with a replacement drive: every block of every drive has to be read, the parity calculated and the result written back. If a single error is thrown during this process by any of those reads, calculations or writes, another drive is marked as offline and your whole raid goes off the air. The more drives you have (and the larger they are), the greater the chance that this happens (see the rough numbers after this list)

    5) The theoretical limit/percentage for this happening passed for 8TB drives in about 2019, I believe, for RAID 6 - RAID 5, I believe, was never recommended at these sizes for the same reasons.

    6) Although I do not use it, I would recommend reading up on SnapRAID and MergerFS/UnionFS as maybe smarter ways to build a media environment.
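
    For point 3, the replacement sequence is typically along these lines - a hedged sketch, assuming the failed member is /dev/sdc and the array is /dev/md0 (both hypothetical names):

        mdadm --manage /dev/md0 --fail /dev/sdc     # mark the dying member failed (if md hasn't already)
        mdadm --manage /dev/md0 --remove /dev/sdc   # detach it from the array
        # ...physically swap the drive, then add the replacement...
        mdadm --manage /dev/md0 --add /dev/sdc      # the rebuild starts automatically
        cat /proc/mdstat                            # watch the rebuild progress

    And the rough numbers behind point 4: consumer drives are commonly rated at one unrecoverable read error per 10^14 bits, i.e. about one per 12.5 TB read. Rebuilding a five-member RAID 5 of 8 TB drives means reading roughly 32 TB from the four survivors, so under that naive rating you would expect to hit at least one read error before the rebuild completes. The rating is pessimistic in practice, but it is the arithmetic behind the advice against large RAID 5 arrays.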


    Craig

  • As one single array - and you'll lose the extra space on the 8TB drives, but you know that anyway :)


    8| You must have a lot of painting to do :D:D I've gone back to using ZFS

    My understanding is that ZFS has no way to expand a drive pool/array?


    In the past my procedure (prior to OMV) was as follows (a rough command sketch for steps 6 to 8 appears after the list):


    1) RAID6

    2) Get low on space

    3) Purchase a new economical drive at least as large as the largest in my array - usually a couple of TB bigger

    4) Shut down the box and add the additional drive (after stress testing it offline first)

    5) Whilst the box was running, partition the new drive to the appropriate size for the smallest member in the array

    6) Add it to the array and expand - WAIT A LONG TIME!

    7) Expand the LVM PV/VG/LV for the extra extents

    8) Expand the FS

    9) Add additional partition(s) in the spare space on the new drive - at least a mirror with another partition, if not RAID10 or RAID6 depending on how many other drives and partitions are available - and repeat steps 6, 7 and 8 above; this lets me get maximum space out of new drives and rotate out older ones
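
    For steps 6 to 8, the commands are roughly the following - a sketch assuming the array is /dev/md0, the new partition /dev/sde1, the volume group vg0 and the logical volume lv0 (all hypothetical names), with ext4 on top:

        mdadm --manage /dev/md0 --add /dev/sde1     # step 6: add the new partition
        mdadm --grow /dev/md0 --raid-devices=7      # step 6: grow the array - the long wait
        pvresize /dev/md0                           # step 7: grow the LVM physical volume
        lvextend -l +100%FREE /dev/vg0/lv0          # step 7: hand the new extents to the LV
        resize2fs /dev/vg0/lv0                      # step 8: grow the filesystem online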


    I have followed this process on my media server since 1TB drives were current and have slowly worked my way through 2TB, 3TB, 4TB, 6TB and now 8TB drives - it takes a bit of work, but once each of the steps is documented it is pretty much a no-brainer, and it gives me no more than one reboot of downtime to add the physical drive and a second reboot at some point to retire older drives in the reverse process.


    These older drives then get moved across into my backup server, which is powered on once a day to back up all changed media files and any other critical machines (VMware cluster) and then shut down again.


    The one thing that stopped me looking at FreeNAS in the past was that it only did ZFS and that there is still no way to expand a pool (plus the memory requirements are crazy). As we are talking predominantly media files here, I am not concerned about bit rot; anything that is critical is backed up to remote storage on an hourly/daily basis using rsync. I can see that ultimately I will end up moving to BTRFS - but it will be a while until I am happy its RAID6 implementation is right.


    Craig

  • Here you go - the RAID5 array creation, done the standard OMV way through the web GUI:


    1) Wipe each device

    2) Go into RAID arrays

    3) Choose to create a new RAID 5 array and select the 6 drives (4 x 6TB and 2 x 8TB) - knowing I would be losing the 2TB at the end of each 8TB drive


    This completed after about 18 hours and appeared fine - survived a reboot etc.


    Then blew it all away through the web interface


    Went to the command line - made sure there were no leftover superblocks on the drives and no raid definitions in the mdadm config file (a sketch of that cleanup follows).
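
    For reference, that cleanup usually amounts to something like this - a sketch, assuming the old array was /dev/md0 and the members are /dev/sdb to /dev/sdg (hypothetical names):

        mdadm --stop /dev/md0                    # stop the old array if it is still assembled
        mdadm --zero-superblock /dev/sd[b-g]     # wipe the md superblock on each member
        # then remove any stale ARRAY lines from /etc/mdadm/mdadm.conf
        # and rebuild the initramfs so boot-time assembly forgets the old array
        update-initramfs -u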


    Created the same size partitions on each of the 6 drives using gdisk (I always leave about 100000 spare blocks at the end, as I was once burnt when I tried to mix and match vendor drives and found that 4TB does not mean the same thing to each vendor!).
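
    For stamping identical partitions onto several drives, gdisk's scriptable cousin sgdisk is handy. A sketch where the partition size, the Linux RAID type code and the device names are illustrative only:

        # One data partition per drive, sized to the smallest member minus a safety margin
        for d in /dev/sd[b-e]; do
            sgdisk --new=1:0:+5585G --typecode=1:fd00 "$d"
        done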


    Created the array from the command line using mdadm --create etc. etc.
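
    Spelled out, that create step would look roughly like this (the device names are assumptions, and the extra lines just persist the array across reboots on a Debian-based system like OMV):

        mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]1

        # record the array so it assembles at boot
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u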


    And this then shows up in the GUI - it will be interesting to see what it shows at the end, especially when I add the 2nd partitions on the 8TB drives and put them into a RAID1 array!
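
    That last step would presumably be a plain mirror over the two leftover partitions, something like this (partition names hypothetical):

        # Mirror the ~2TB left at the end of each 8TB drive
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdf2 /dev/sdg2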

