Need advice: grow RAID or LVM?

  • Hi, I really need some advice on how to reorganize my disk space.
    I have a fairly old OMV installation, always kept updated.
    I had a 4*3TB RAID5.
    Then I added a 2*8TB RAID1.
    Having two different RAID arrays is a PITA, though, because one of them is filled up while the other has plenty of space, so I thought about switching to LVM and creating an LVM volume that spans both RAID arrays. I moved some data around, freed up my RAID1, and I'm ready to start.
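    For reference, the LVM layout I had in mind is roughly the following (md0/md1 stand for the two arrays; the volume group and logical volume names are just examples):

        # both arrays become physical volumes in one volume group
        pvcreate /dev/md0 /dev/md1
        vgcreate vg_data /dev/md0 /dev/md1
        # one big logical volume spanning both arrays
        lvcreate -l 100%FREE -n lv_data vg_data
        mkfs.ext4 /dev/vg_data/lv_data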
    Reading the forum I saw some of you discouraging the use of LVM.
    So what do you think is the better way to get a single data space?
    Create only one RAID1 device with the 8TB drives and then add the 3TB drives two at a time?
    Is it possible to have such a configuration, or do all drives need to be the same?


    I would really like to know what you think is the best solution.
    Thanks.

  • Create only one RAID1 device with the 8TB drives and then add the 3TB drives two at a time?
    Is it possible to have such a configuration, or do all drives need to be the same?

    What type of RAID do you want to create out of these disks? Yes, it is possible to use disks of different sizes for RAID, but it is not recommended. It depends on the RAID level. E.g. for a RAID 5, the smallest disk / partition limits the space that can be used on the bigger drives.
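    To put numbers on that: a RAID 5 built from the 2*8TB and 4*3TB drives together would treat every member as a 3TB device, so the usable space would be roughly (6 - 1) * 3TB = 15TB, and about 5TB of each 8TB drive would stay unused.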
    You could have a look at the OMV-unionfilesystem plugin. But I am not sure whether this plugin works only with single drives or also with several RAID arrays. Maybe SnapRAID is another possibility.
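    Outside of the plugin, a mergerfs pool over two already-mounted filesystems (e.g. the mount points of the two arrays) is basically a single fstab entry along these lines - the paths and options are only an example:

        # /etc/fstab -- pool two mounted filesystems into one mount point
        /srv/raid5:/srv/raid1  /srv/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0 0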

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Thanks for the answer.
    Maybe it's better to switch to ZFS? I used it on another file server and AFAIR it is simpler to add pairs of different disks (mirrors) to grow the available space. Am I right?
    I have a doubt about the RAM requirements of ZFS: the available disk space will be more or less 13TB and I only have 8GB of RAM (not expandable).
    Is ZFS a no-go with so little RAM and so much disk space?
    And is ZFS reasonably stable in OMV?


    Thanks again

  • I used it on another file server and AFAIR it is simpler to add pairs of different disks (mirrors) to grow the available space. Am I right?

    The situation is quite similar to classic RAID. It is possible to grow the pool by replacing the disks one by one (resilvering) with bigger ones. But right now it is not possible to add new drives to an existing vdev, at least in ZFS on Linux. What is possible is to create another vdev out of the new disks and add it to the pool alongside the existing vdev(s). New data is then striped across the old and new vdevs. Basically, it is possible to mix different types of vdevs.
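    A rough sketch of the two growth paths from the shell (pool and device names are only placeholders):

        # (a) grow an existing mirror by swapping in bigger disks one at a time
        zpool set autoexpand=on tank            # let the pool pick up the extra capacity later
        zpool replace tank /dev/sdb /dev/sdd    # resilvers onto the bigger disk; repeat for the other member
        # (b) add a second mirror vdev; new writes are striped across both vdevs
        zpool add tank mirror /dev/sde /dev/sdf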


    I have a doubt about the RAM requirements of ZFS: the available disk space will be more or less 13TB and I only have 8GB of RAM (not expandable).
    Is ZFS a no-go with so little RAM and so much disk space?

    If you do not use ZFS deduplication, this amount of RAM should be sufficient in OMV. There are other NAS distributions that rely heavily on ZFS where more than 8GB of RAM is necessary.
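    If you want to stay on the safe side with 8GB, the ARC (ZFS's read cache) can also be capped with a module option, e.g. at 4GB (value in bytes, adjust to taste):

        # /etc/modprobe.d/zfs.conf
        options zfs zfs_arc_max=4294967296
        # takes effect after reloading the zfs module or rebooting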


    And is ZFS reasonably stable in OMV?

    I have been using ZFS for about 1.5 years without major issues. I let my zpool disks spin down after 20 minutes without any problems, although it is not recommended. The only problem I have encountered is with exporting the pool after creating shared folders, Samba shares, etc.: all references got lost and all shares had to be created anew. Maybe this has been fixed in the meantime in OMV 4.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

    • Official post

    Maybe it's better to switch to ZFS? I used it on another file server and AFAIR it is simpler to add pairs of different disks (mirrors) to grow the available space. Am I right?

    Yes, you can do this. (Using the ZFS plugin)
    I've done what you've outlined above before, but I also just tested it in a VM to confirm the process.


    1. Create a pool, using 2 disks in a mirror which will make up the first vdev. (The 8TB pair?)
    2. At this point, you could copy your data. Once it's moved to the ZFS mirror, wipe the original 3TB drives.
    (Then)
    3. Click on the pool, then the "Expand" button.
    4. The drive dialog comes up. Add another 2-drive mirror, which becomes the second vdev. (Repeat for the second pair of 3TB drives, etc.)
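    If you want to double-check the result of step 4 from the shell, the new layout and the added capacity are visible with (pool name is just an example):

        zpool status tank    # should now list two mirror vdevs (mirror-0 and mirror-1)
        zpool list tank      # SIZE and FREE include both vdevs
        zfs list tank        # usable space as the filesystem sees it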
    ___________________________________________________


    On the other hand, have you thought about using the older drives for backup? If your data store is this large, it would be painful to lose it. ZFS is good, but it's not backup.
    __________________________________________________



    I have a doubt about the RAM requirements of ZFS: the available disk space will be more or less 13TB and I only have 8GB of RAM (not expandable)

    On a backup server, I'm running a 4TB mirror with 4GB of RAM. There have been no issues. Without deduplication, you should be fine.

  • On the other hand, have you thought about using the older drives for backup? If your data store is this large, it would be painful to lose it. ZFS is good, but it's not backup.

    Yes. I have a local and an offsite backup of the important data, but I need all the space of the 6 drives...

    1. Create a pool, using 2 disks in a mirror which will make up the first vdev. (The 8TB pair?)
    2. At this point, you could copy your data. Once it's moved to the ZFS mirror, wipe the original 3TB drives.
    (Then)
    3. Click on the pool, then the "Expand" button.
    4. The drive dialog comes up. Add another 2-drive mirror, which becomes the second vdev. (Repeat for the second pair of 3TB drives, etc.)

    That was my plan too, and I'm now moving data to the ZFS pool.
    I've noticed an increased I/O wait value since I created the ZFS pool. Is that normal?

    • Official post

    I've noticed an increased I/O wait value since I created the ZFS pool. Is that normal?

    Given what it does, ZFS has a bit of overhead and, in some measures, it's not going to be as fast as EXT4. Where are you getting the I/O wait stat?


    Also of note is that, using a single mirror (unstriped), you're not going to get the parallel read / write throughput equivalent of RAID5. A single mirror has the equivalent throughput of a single disk. Throughput may improve as you add mirrors, as they will be striped into the pool.
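    If you want to see how the reads and writes are spread across the vdevs while the copy runs, per-vdev throughput can be watched with something like (pool name is an example):

        zpool iostat -v tank 5    # per-vdev operations and bandwidth, refreshed every 5 seconds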

  • Given what it does, ZFS has a bit of overhead and, in some measures, it's not going to be as fast as EXT4. Where are you getting the I/O wait stat?

    I get it from top, iotop, and the graphs in the OMV web interface.
    I had similar I/O wait stats back when I had CrashPlan running, but since I got rid of it everything has been smoother.
    Right now I'm syncing a lot of data from the RAID5 to the new ZFS pool.
    Once my NAS storage is stable I will focus on I/O performance.

    • Official post

    On the same box:


    There's going to be a lot of caching involved in that scenario - reading from the mdadm RAID5 (parallel I/O) and writing to the equivalent of a single disk (the ZFS mirror). There's more than a bit of I/O disparity involved. I suspect it will smooth out when the copy is finished.
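    If you want to watch that disparity outside of the OMV graphs, generic tools are enough - nothing ZFS-specific:

        vmstat 5       # the "wa" column is the i/o wait the graphs show
        iostat -x 5    # per-device utilization and wait: the md members vs. the two mirror disks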
