RAID question

    • OMV 2.x
    • Resolved
    • RAID question

      Hello

      I already have a RAID5 array of 6 x 2TB drives.
      I want to add 4 x 4TB drives (and swap the 2TB ones for 4TB in the future).

      What's the best RAID choice?

      If I grow my existing RAID5, single-drive fault tolerance across 10 drives doesn't seem safe to me.
      English isn't my native language, so, sorry if I made any mistake ^^

      OMV 4.1.6 | 64 bit | openmediavault-omvextrasorg 4.1.6
    • If you add 4 TB drives to an array of 2 TB drives, you will only use 2 TB on each of the 4 TB drives: lots of wasted space. I ran RAID5 on an 8-drive array for years. As for safety, RAID isn't backup...
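The wasted-space point is easy to sanity-check: Linux md sizes every RAID5 member to the smallest disk, so usable capacity is (n - 1) x smallest member. A quick sketch with the disk mix discussed in this thread (the 10-disk layout is an assumption):

```shell
# RAID5 usable capacity is (n - 1) * smallest member, so in a mixed
# 2TB/4TB array every 4TB disk contributes only 2TB.
disks_tb="2 2 2 2 2 2 4 4 4 4"   # hypothetical 10-disk mix from this thread
n=0
min=999
for d in $disks_tb; do
    n=$((n + 1))
    if [ "$d" -lt "$min" ]; then min=$d; fi
done
usable=$(( (n - 1) * min ))   # (10 - 1) * 2 = 18 TB
wasted=$(( 4 * (4 - min) ))   # 2 TB unused on each of the four 4TB disks
echo "usable=${usable}TB wasted=${wasted}TB"
```

So the mixed array would expose 18 TB while leaving 8 TB of the new drives unused.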
      omv 4.1.13 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • It is possible with the following steps (generic; you need a lot of RAID knowledge to do it correctly).
      1. Convert your RAID from RAID5 to RAID6. You need one additional disk for it. ewams.net/?date=2013/05/02&vie…g_RAID5_to_RAID6_in_mdadm
      2. Upgrade all disks in your RAID from 2TB to 4TB:
        1. Fail one active disk in your RAID. The array is now degraded.
        2. Replace the failed disk with the bigger one.
        3. Reintegrate the disk into the array and start the rebuild.
        4. After the rebuild has finished, start over from step 1 until all members are 4TB disks.
      3. Up to this point your RAID has not grown; it still uses only 2TB on each disk. Now grow the array so that it uses the full capacity of all disks (another rebuild).
      4. After that you have the new RAID6 and can add further 4TB disks to it. Each added disk triggers another reshape, which will take quite some time.
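The steps above can be sketched with mdadm. This is a hedged outline, not a tested procedure: the array name (/dev/md0), the member devices (/dev/sdb, /dev/sdh, /dev/sdi) and the member count are all assumptions, and since the real commands are destructive the sketch only echoes what would be run:

```shell
run() { echo "$@"; }   # dry-run: print each command instead of executing it

# 1. RAID5 -> RAID6: add the extra disk, then reshape (6 + 1 = 7 devices)
run mdadm /dev/md0 --add /dev/sdh
run mdadm --grow /dev/md0 --level=6 --raid-devices=7 --backup-file=/root/md0-reshape.bak

# 2. Replace each 2TB member with a 4TB disk, one at a time
run mdadm /dev/md0 --fail /dev/sdb
run mdadm /dev/md0 --remove /dev/sdb
run mdadm /dev/md0 --add /dev/sdi   # new 4TB disk; wait for the rebuild to finish
# ...repeat for every remaining 2TB member...

# 3. Once all members are 4TB, grow the array to use the full disks
run mdadm --grow /dev/md0 --size=max
```

Drop the `run` wrapper only once you have verified each device name against `mdadm --detail` and `lsblk`, and have a backup.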

      So that is not the fastest way. You should still make a backup before doing any of this, and you should really know what you are doing; otherwise you risk, and will most likely lose, all your data.

      So the other method (start from scratch with a RAID6 and use backup/restore) should be much safer and also much quicker in total runtime of the whole operation.
      Everything is possible, sometimes it requires Google to find out how.
    • Thanks for the method

      But I think I will copy from the old RAID5 to a new RAID6 instead; that's faster and safer.

      The IBM M1015 is already at home and flashed for pass-through.
      I'm waiting for the 9 HDDs.

      If someone in the EU zone wants 2TB HDDs, I have 6 to sell ^^
    • SerErris wrote:

      It is possible with the following steps (generic; you need a lot of RAID knowledge to do it correctly).
      1. Convert your RAID from RAID5 to RAID6. You need one additional disk for it. ewams.net/?date=2013/05/02&vie…g_RAID5_to_RAID6_in_mdadm
      2. Upgrade all disks in your RAID from 2TB to 4TB:
        1. Fail one active disk in your RAID. The array is now degraded.
        2. Replace the failed disk with the bigger one.
        3. Reintegrate the disk into the array and start the rebuild.
        4. After the rebuild has finished, start over from step 1 until all members are 4TB disks.
      3. Up to this point your RAID has not grown; it still uses only 2TB on each disk. Now grow the array so that it uses the full capacity of all disks (another rebuild).
      4. After that you have the new RAID6 and can add further 4TB disks to it. Each added disk triggers another reshape, which will take quite some time.

      So that is not the fastest way. You should still make a backup before doing any of this, and you should really know what you are doing; otherwise you risk, and will most likely lose, all your data.

      So the other method (start from scratch with a RAID6 and use backup/restore) should be much safer and also much quicker in total runtime of the whole operation.
      Thanks for this info. I was getting ready to ask the same question as the OP. I currently have a RAID6 array with 7 x 1TB desktop drives (one failing every few months; I got them for free, so I can't complain) and am looking to replace the disks with 4TB NAS disks. I also use LVM to split up the RAID array, and my case only supports 8 hot-swap drives. I don't quite have the money to buy all the disks in one shot, so I was thinking of buying them one at a time and slowly replacing the disks in my current RAID6 with the new ones. From the looks of it, though, that would mean I'd need exactly as many replacement disks as I currently have, since the 4TB disks would only use 1TB each. Everything is backed up to a cloud service, but obviously I don't want to restore EVERYTHING from there.

      The other option I was considering: once I have all the 4TB drives for the new RAID6 array, use one of them as temporary storage (I only have about 1.5-2TB of data on the current RAID6 array), rsync the data to it locally as a backup, blow away the current RAID6 array, create the new one, rsync the data back, and then add the lone 4TB drive into the array.

      I did see something about using tune2fs to change the new UUID(s) to match the old UUID(s). Not sure how I would go about doing that, but I'm sure with enough Google-fu I'll eventually find it.
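On the tune2fs point: the usual approach is to read the old filesystem's UUID with blkid and stamp it onto the new filesystem, so /etc/fstab and OMV shared-folder references keep resolving. A dry-run sketch; the device names and the UUID value are made up, and both commands need root:

```shell
run() { echo "$@"; }   # dry-run: print instead of execute

# In practice: OLD_UUID=$(blkid -s UUID -o value /dev/md0)
OLD_UUID="0f0f0f0f-1111-2222-3333-444444444444"   # placeholder value

# ext2/3/4 only; the target filesystem must be unmounted, and the old
# array must be gone first, or you end up with duplicate UUIDs.
run tune2fs -U "$OLD_UUID" /dev/md127
```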


    • Why not use RAID10?

      Sent from my phone
      omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
      SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
      PSU: Silencer 760 Watt ATX Power Supply
      IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realtek + 1 Intel Pro dual-port PCI-e card
      OS on 2×120 SSD in RAID-1 |
      DATA: 3x3T| 4x2T | 2x1T
    • vl1969 wrote:

      Why not use raid10?

      Sent from my phone
      Well, I was mainly going to have 5 drives, and RAID10 requires an even number of disks. I guess I could always build it with 4 and keep the 5th around as a spare. The whole reason for using 5 disks is that RAID6 requires at least 4: I was going to build the array with 4 disks, use the fifth as a temporary data/backup disk, then add it in afterwards. I'd still have two-disk failure tolerance. I haven't really used RAID10 much, so I'm not totally aware of why I would choose it over RAID6, apart from better read/write performance. I'm not really concerned about that, since the data is mainly audio/video and at most 2 devices will be hitting it at a time.
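The capacity trade-off for the 5 x 4TB plan can be put in numbers. RAID6 yields (n - 2) disks of capacity and survives any two failures; RAID10 yields n/2 and survives two failures only when they hit different mirror pairs. A sketch (the 4-disk RAID10 + spare layout is the assumption from this post):

```shell
disk_tb=4   # hypothetical 4TB drives

# RAID6 over all 5 disks: (n - 2) * size, survives ANY two failures
raid6_tb=$(( (5 - 2) * disk_tb ))

# RAID10 needs an even count, so 4 disks + 1 spare: (n / 2) * size,
# survives two failures only if they land in different mirror pairs
raid10_tb=$(( (4 / 2) * disk_tb ))

echo "raid6=${raid6_tb}TB raid10=${raid10_tb}TB"
```

So for this workload RAID6 gives 12 TB against RAID10's 8 TB, with the stronger two-disk guarantee.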

      Honestly, I was also debating using SnapRAID, since this seems like the perfect time to change my storage. Currently I'm using LVM and don't think I really need it. I've never used SnapRAID before, and right now it's completely foreign to me.

      Mainly I'm concerned about resilience (being able to lose drives) rather than read/write performance.
    • Well, SnapRAID is a bit of a different animal altogether.
      The biggest difference is that it is not real-time. It works for me, but your needs might be different.
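For anyone weighing it up: a minimal hypothetical snapraid.conf, assuming per-disk filesystems mounted under /srv (all paths here are made up). Parity is only as current as the last `snapraid sync` run, which is the "not real-time" caveat above:

```
# One dedicated parity disk (add a 2-parity line for dual parity,
# roughly comparable to RAID6)
parity /srv/parity1/snapraid.parity

# Content files (array metadata); keep copies on several disks
content /var/snapraid.content
content /srv/disk1/snapraid.content

# Data disks, each with its own independent filesystem
data d1 /srv/disk1/
data d2 /srv/disk2/
```

Because every data disk is a plain filesystem, losing more disks than you have parity only loses the files on the failed disks, not the whole pool.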

      Sent from my phone