RAID progression

    • Official post

    My understanding is that ZFS has no way to expand a drive pool/array

    You create another vdev within the pool, I think. I have a RAID-Z1 and I can expand that, but only by adding a vdev. TBH I'm getting back into this after using XigmaNAS (NAS4Free) for many years with no real issues.
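As a sketch of what that looks like in practice (pool and device names are hypothetical): a ZFS pool grows by adding a whole new vdev alongside the existing one, not by widening the existing RAID-Z vdev.

```shell
# Existing pool 'tank' built from one three-disk RAID-Z1 vdev.
# You can't add a single disk into that vdev (at least before the
# raidz-expansion feature in newer OpenZFS); instead you add a second
# vdev and ZFS stripes data across both:
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf

# Verify the pool now contains two raidz1 vdevs:
zpool status tank
```

The catch is that the new vdev should match the redundancy of the old one, and once added it cannot be removed in older ZFS versions.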


    But I agree with what you're saying about expandability with RAID. You mention BTRFS; well, there were two RAID issues on here recently, one BTRFS and the other XFS, both on top of mdadm. Each needed different command-line checking and recovery tools, and while those worked, mdadm knows nothing about the filesystem layered on top of it, so simply running fsck /dev/mdX didn't work.
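To make that distinction concrete (device names hypothetical): mdadm only assembles the block device, while checking the filesystem on top requires that filesystem's own tooling.

```shell
# fsck has no real backend for btrfs or xfs repair, so this
# achieves nothing useful on those filesystems:
fsck /dev/md0

# BTRFS on top of md: use the btrfs tools (read-only check shown):
btrfs check --readonly /dev/md0

# XFS on top of md: xfs_repair (-n = dry run, report problems only):
xfs_repair -n /dev/md0

# The array itself is checked separately, via md's own scrub mechanism:
echo check > /sys/block/md0/md/sync_action
```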


    It reminded me of some research I did in respect of Synology and/or QNAP: they use mdadm to create the array, then create an LVM on top, then format that LVM with BTRFS.
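That Synology/QNAP-style stack can be sketched like this (device and volume-group names are made up for illustration; this is the general layering, not their exact recipe):

```shell
# 1. mdadm array from the raw disks:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# 2. LVM on top of the array:
pvcreate /dev/md0
vgcreate vg_storage /dev/md0
lvcreate -l 100%FREE -n lv_data vg_storage

# 3. BTRFS on top of the logical volume:
mkfs.btrfs /dev/vg_storage/lv_data
```

The LVM layer is what lets them resize and carve up volumes without touching the mdadm array underneath.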


    But for media storage, a union filesystem plus SnapRAID is the suggested way to go, as the data is not being changed/accessed regularly, and they can be expanded and drives replaced easily.
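As a rough sketch of that approach (all paths hypothetical): mergerfs pools the data drives into one mount point, and a minimal snapraid.conf assigns a parity drive that is synced on a schedule rather than in real time.

```shell
# Pool two data drives into a single mount with mergerfs:
mergerfs -o defaults,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool

# /etc/snapraid.conf (minimal example):
#   parity  /mnt/parity/snapraid.parity
#   content /mnt/disk1/snapraid.content
#   data d1 /mnt/disk1
#   data d2 /mnt/disk2

# Then periodically:
snapraid sync    # update parity after adding media
snapraid scrub   # verify existing data against parity
```

Because each drive carries a plain filesystem, adding a disk is just another `data` line, and a dead drive loses only its own files until `snapraid fix` restores them.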


    Actually, I've been thinking about UKenGB's questions and your partitioning idea. Could you start with a single drive, add a second drive, format it using the same filesystem as the first, then use mdadm --create from the CLI to build a RAID 1? I'm not sure if that would work or whether OMV would recognise it. And could you do the same and 'convert' the RAID 1 to RAID 5 via the CLI, after first unmounting the RAID 1?
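In principle mdadm supports exactly that migration path. A sketch, with hypothetical device names, and with the caveat that mdadm --create builds a fresh array, so the existing data has to be copied onto it rather than converted in place:

```shell
# Build a degraded RAID1 from the new (second) drive plus a placeholder:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb missing
mkfs.ext4 /dev/md0
# ...copy the data from the original single drive onto /dev/md0...

# Then add the original drive; it syncs in as the mirror half:
mdadm --add /dev/md0 /dev/sda

# Later: reshape the two-disk RAID1 to RAID5, then grow onto a third disk:
mdadm --grow /dev/md0 --level=5
mdadm --add /dev/md0 /dev/sdc
mdadm --grow /dev/md0 --raid-devices=3

# Finally, grow the filesystem to fill the larger array:
resize2fs /dev/md0
```

Whether OMV's GUI picks up an array built this way is a separate question, but at the md layer the level conversion is supported.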

    • Official post

    And this then shows up in the GUI - will be interesting to see what it shows at the end - especially when I add the 2nd partitions to the 8TB drives and put them into a RAID1 array

    :thumbup::thumbup: Nifty idea, but reading through your post, this should only be done if you know what you're doing. This is why OMV works on KISS and uses the block-device approach, as users new to the world of NAS believe they should be using RAID.

  • I answered the questions from your first post; if that is sufficient for you, then I fail to understand the relevance of your post above. As far as I am concerned, I have no problem if a user wants to run a RAID system; it's all down to personal choice.

    We're not getting on here, are we? Likewise, I fail to understand the relevance of your post, since your answer to my first post was simply "RAID is no good". Now that I've asked what your suggested alternative is, you don't answer that; you merely state that you already answered my initial questions and sneeringly denigrate my follow-up request for some additional information.


    I'm sure, geaves, that you are very knowledgeable, but so am I; just not in the same field, obviously. I came here to ask reasonable questions and hoped to receive reasonable and helpful replies, and I have to say, neither has been forthcoming.


    I will look further into unionfs and SnapRAID, which I had previously considered before asking here about actual RAID. Ultimately I will make my own decision, as you say, but I will obviously need to conduct my research elsewhere.


    Good day to you all.

  • You did see my answers and some of the pitfalls, right?


    If you are not knowledgeable (and comfortable) with CLI work on RAID and Linux disks in general, then what you are proposing is a bad idea in the short term. But I did give you an approach to try if you wish.


    Remember, everyone on here (mods included) is a volunteer; we probably see the same questions over and over again and are just trying to save you from potential pain further down the path.


    Craig

  • Here you go - screenshot of the RAID screen after all up and running


    6 x Drives in RAID5 = 4 x 6TB and 2 x 8TB


    Manually (via the CLI) set up the arrays after manually partitioning the drives with gdisk and sgdisk.


    Created a single RAID array with 6 x 6TB partitions (the first partition on each drive, so /dev/sda1, /dev/sdb1, etc.).


    Then, after waiting for that to finish building and adding a filesystem, went into the CLI again and created a RAID1 with the remaining 2 x 2TB partitions on the 8TB drives.
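The steps above roughly correspond to the following (device names follow the post's /dev/sda1, /dev/sdb1 pattern; exact sizes and which letters are the 8TB drives are assumptions):

```shell
# On each 8TB drive: a 6TB partition plus the ~2TB remainder,
# both typed as Linux RAID (fd00):
sgdisk -n 1:0:+6T -t 1:fd00 /dev/sde
sgdisk -n 2:0:0   -t 2:fd00 /dev/sde
# (each 6TB drive just gets a single full-size fd00 partition)

# RAID5 across the six 6TB partitions:
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mkfs.ext4 /dev/md0

# After the RAID5 finishes its initial build:
# RAID1 from the leftover 2TB partitions on the two 8TB drives:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde2 /dev/sdf2
mkfs.ext4 /dev/md1
```

Building arrays from partitions rather than whole disks is what makes this mixed-capacity layout possible; the cost is that a failed 8TB drive degrades both arrays at once.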


    • Official post

    Here you go - screenshot of the RAID screen after all up and running

    :thumbup: I always assumed it was possible and that OMV would recognise it; I'd just never seen a post about it on the forum.


    A user mentioned hybrid RAID very briefly, but this seems to be in the realms of Synology and is designed to help non-technical users protect their data from hardware failure.


    I was researching Synology and QNAP and their implementation of RAID, as Synology use BTRFS: they create an mdadm RAID, then an LVM, then BTRFS on top of the LVM.


    We've had two users with similar issues with their RAID setups, except one was using BTRFS and one was using XFS; in both cases you had to use the filesystem's own tools from the CLI.
