Solved? OMV and software raid 5

    • Official Post

    Might surprise you, but this here is a forum. I'm not into conversations, but if it's necessary to correct outdated information (or feelings/assumptions based on it that encourage other users to do bad things), I will simply do so.

    So that's what this is... a forum. I was wondering about that... As for the rest, I'm sure it will be entertaining or, at least, mildly amusing. :)

  • Hi guys!
    Well, I have rebuilt my server with my 4 x 4 TB drives and decided to go for mirror pool combinations...
    The reason is mostly safety and performance, and it is easy to add more drives to the pool at a later date... I have room for 4 more drives and can connect them via a controller card as discussed before.


    I have saved my old raid5 array as is at the moment so that I can add it to an OMV box later....
    I have not removed the info from fstab but merely commented out the information for the time being.


    After installing ZFS on my work server and adding my drives to the system, I did the following to test the theory of adding extra drives to a mirror at a later date... I first created one mirror with two drives using disk by-id.


    To obtain that info I ran:

    Code
    ls -l /dev/disk/by-id/*

    The important info I obtained is below. I have added the position of the drives in my actual server for ease of reference at a later date...

    Code
    lrwxrwxrwx 1 root root  9 nov  9 11:06 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1LC0P4D -> ../../sdf (Disk3 in server)
    lrwxrwxrwx 1 root root  9 nov  9 11:06 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2ZE85Z5 -> ../../sde (Disk2 in server)
    lrwxrwxrwx 1 root root  9 nov  9 11:06 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6SZF1AT -> ../../sda (Disk4 in server)
    lrwxrwxrwx 1 root root  9 nov  9 11:06 /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7JZSK7Z -> ../../sdd (Disk1 in server)

    Yes, I have a drive as sda that really should have been my system disk... I just forgot to remove my old array while installing Debian 9 and it came up on installation as the system on sdc... I couldn't be bothered to restart the process, so that is why you see the above.


    To create my first mirror with 2 drives I did the following... (note: you don't have to create GPT tables on the drives first... this is done automatically)


    Code
    root@DN-Server:/home/martyn# zpool create -f DN_ServerPool -o ashift=12 mirror /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7JZSK7Z  ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2ZE85Z5

    To check the status and see it is online:
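    (The screenshot of that output isn't reproduced here, but the command is simply:)

    Code
    zpool status DN_ServerPool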


    Now I wanted to add another mirror to my pool by-id as before:


    Code
    root@DN-Server:/home/martyn# zpool add -f DN_ServerPool -o ashift=12 mirror /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1LC0P4D  ata-WDC_WD40EFRX-68N32N0_WD-WCC7K6SZF1AT

    This added my other two drives to my pool and now the status for that is:
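    (Again the status screenshot isn't reproduced here; to see the resulting layout with both mirror vdevs and the pool capacity, something like this would do:)

    Code
    zpool list -v DN_ServerPool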



    That is the state of play at the moment.


    bookie56

  • There isn't much to gain from compression if most of your media is already heavily compressed, which is usually the case for media files, movies, audio, photos...


    For me, it's a level of complexity for very little (if any) gain.

  • There isn't much to gain from compression if most of your media is already heavily compressed, which is usually the case for media files, movies, audio, photos...

    That's why @cabrio_leo pointed out that lz4 should be used, since the compression module then checks whether further space savings are possible. BTW: when using the compress setting with btrfs it's already like this (an active check for already compressed contents, skipping compression in that case based on the first few blocks of data); one would have to use compress-force to override this and get 'always compress even if useless' behaviour.


    For me, it's a level of complexity for very little (if any) gain.


    I would agree if we were talking about applying compression externally to data. But with filesystems that were designed in this century and not the last (talking about ZFS and btrfs on Linux here, in contrast to ext4, XFS and so on) there is not a single reason to skip compression, since this feature is an integral part of the filesystem (same with checksums or dedup capabilities -- these filesystems have been designed with such features in mind).


    Using compression=lz4 with ZFS and compress=lzo with btrfs are the best defaults possible.
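    (For reference, applying and checking those defaults looks roughly like this -- just a sketch; the pool/dataset, device and mount path names are placeholders:)

    Code
    # ZFS: enable lz4 on a dataset, then see how much it actually saved
    zfs set compression=lz4 tank/data
    zfs get compression,compressratio tank/data
    # btrfs: lzo via mount option; compress-force=lzo would compress unconditionally
    mount -o compress=lzo /dev/sdX1 /mnt/data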

  • Hi guys!
    I wrote in my last post that I had created two mirrors with 4 x 4 TB drives....
    On this work server, as I have pointed out before, I use Clonezilla Server Edition to clone my computers and customer computers...
    Steven Shiau, the creator of Clonezilla, has set it up to use /home/partimag as the image directory...


    Now since removing my old raid array I have an empty directory where I had my Clonezilla Images before...


    To rectify that I created a file system in my pool as follows:

    Code
    # zfs create DN_ServerPool/partimag

    I then mounted it in the old /home/partimag directory:


    Code
    # zfs set mountpoint=/home/partimag DN_ServerPool/partimag


    After running zpool status:

    And df -h:




    I see from the above that I now have a filesystem called partimag and it is mounted in /home/partimag just like my old raid5.
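    (A quick way to double-check the same thing from the CLI -- output omitted here:)

    Code
    zfs list -o name,used,avail,mountpoint DN_ServerPool/partimag
    df -h /home/partimag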


    I have just added a couple of changes to make sure things are as I want them:

    Code
    # zfs set atime=off DN_ServerPool

    The above disables access time recording and should increase disk performance.
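    (Since the property was set on the pool's root dataset, child datasets such as partimag inherit it; a quick check, output omitted:)

    Code
    zfs get -r atime DN_ServerPool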

    Code
    # zfs set dedup=off DN_ServerPool

    Wanted to make sure that was disabled....



    Finally I activated compression... now everything going into and out of my "partimag" is compressed:

    Code
    # zfs set compression=lz4 DN_ServerPool/partimag

    Now to run Clonezilla with the compression on and then off on an image from a customer computer.


    I will post the findings here...


    bookie56

    The reason for me to use ZFS was the ability to detect bit rot or a defective sector through checksumming and to correct them by "self healing". Other things are the possibility of snapshots and so on. There are other filesystems available with similar possibilities, but my personal decision was to use ZFS.


    But after a number of months of usage I have to conclude that the integration in OMV is only rudimentary. A lot of things must be done via the CLI, especially all the more sophisticated tasks. The ZFS plugin supports only some basic ZFS tasks; e.g. to create snapshots automatically, some third-party tools must be used.
    Therefore it is necessary to familiarise oneself with ZFS. ZFS in OMV is nothing for beginners.
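    (As an illustration of the kind of thing that has to be scripted by hand -- a minimal cron-driven snapshot sketch; "pool/dataset" is only a placeholder, and third-party tools such as zfs-auto-snapshot or sanoid handle scheduling and pruning far more robustly:)

    Code
    # e.g. dropped into /etc/cron.daily/
    zfs snapshot pool/dataset@daily-$(date +%Y%m%d)
    # list the snapshots that exist
    zfs list -t snapshot -r pool/dataset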


    Anyway, after a while things were performing well for me and I had no problems at all with the limited features of the ZFS plugin.


    But there are some major drawbacks that can be critical if one doesn't know them:

    • Drives which are already used as a "zfs member" are still shown in several selection lists in OMV and in the ZFS plugin itself. E.g. a ZFS disk is shown in the "create filesystem" dialog in OMV.
    • But the biggest drawback is that exporting a pool must be avoided at all costs once a shared folder has been created on the ZFS pool. If this is done, the device entry goes to "n/a" and doesn't change anymore, even if the pool is imported again later. Everything related to this shared folder then has to be created again (SMB/CIFS share, USBbackup jobs and so on).


      There are several threads here asking how this can be fixed, all without a solution.


      (This situation is aggravated by the fact that there is no way at all to save the OMV configuration. Every setting must be documented manually. But this is another story.)

    Conclusion: The usage of ZFS can be recommended, together with the statement: Do not export your pool unless you want to transfer it to another system!
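    (For reference, the export/import operations in question are just the plain ZFS commands below; the pool name is a placeholder:)

    Code
    zpool export mypool
    zpool import mypool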

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


    • Official Post

    Hi guys!
    Well, I have rebuilt my server with my 4 x 4 TB drives and decided to go for mirror pool combinations...
    The reason is mostly safety and performance, and it is easy to add more drives to the pool at a later date... I have room for 4 more drives and can connect them via a controller card as discussed before.


    Before I adopted ZFS, I looked into some potential failure modes:
    My understanding of multiple vdevs (2 or more) in a zpool is that the pool stripes member vdevs in a manner similar to RAID 0. That would mean the loss of a single vdev (1 mirror) means losing the entire pool. Would that be rare? I imagine it would be but, if a vdev is lost, the chance of recovering the pool or any of its data is nearly non-existent. By extension, adding additional vdevs to the pool increases risk.
    _____________________________


    If performance is a driving consideration, there may be another factor to consider. Unless you're doing client image builds directly at your storage server (hot-swap bay / USB3 drive dock), your performance limitation might be your network link, which is likely to be 1 Gb. So, if you're imaging or rebuilding over your network, storage performance is bottlenecked. RAID 0 (like) performance is fast locally but, without trunked network paths, 1 Gbps (about 125 MB/s in theory) is the upper limit of your net connection. (Actual throughput is likely to be less.)


    Assuming you are imaging/rebuilding client packs over your network from a storage server, I might consider using unionfs on top of separate mirrors. In that case, if you lost one mirror, the others survive with their data intact. Further, as a convenience feature, unionfs provides a common mount point and allows for future expansion. It will add partitions, disks, arrays and pools individually, even while they use different file formats. (While I've done some tests of unionfs, it was nothing extensive. If you decide to try it, test it in your scenario.)
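    (A sketch of the idea only: in OMV this is normally handled by the union filesystems plugin, which typically uses mergerfs underneath. A hand-written fstab line along these lines -- with made-up mount paths -- shows the shape of it:)

    Code
    # /etc/fstab entry pooling two mirror mountpoints under one path (mergerfs)
    /srv/mirror1:/srv/mirror2  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0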


    However, to balance the picture, using unionfs would be the equivalent of inserting an LVM layer above ZFS. That adds complexity and a bit of admin overhead in monitoring unionfs and the effects of its current storage policy. As it is with most things, it's a matter of trade-offs.
    ____________________________


    This is just offered as thoughts on the matter that you may already be aware of. In any case, with a solid backup that you trust, the pool risk is no big deal.

  • Hi flmaxey :)
    I thank you for your comments and will look into unionfs...


    As far as my two mirrors instead of raidz2 go... most of the info points to mirrors being the better solution...


    Like I said...no expert at this as it has been pointed out but have read a lot on the subject and think I can accept the mirror solution....PLUS my backups....;)


    I was not impressed with compression in the situation with Clonezilla because the files are already compressed as a matter of course... so expecting even more compression is not that viable... not, I point out, with the settings I have had...


    I created two images one with compression and one without and could not see any difference...


    bookie56

    • Official Post

    With a ZFS mirror running for a few months, I hardly qualify as an expert. (The NOOB designation applies... :) )


    Where ZFS details are concerned, I've been looking at tips from this forum and ZFS pages, then running down the details. If one person or organization makes a claim, I note it. If several persons and organizations make a claim, I tend to believe it. (Subject to common sense, of course. "Group think" is out of control these days.)



    A ZFS mirror seems to be among the safest storage techniques available at our level. While duplicating disk real estate can be painful (cost), it was an excellent choice. And while losing a vdev and then the entire pool would be a disaster, if faced with your scenario, I might have gone the same route you laid out. (Disasters are what backup is for.)



    unionfs was just a note on an alternative, "if" you're building drives over a network. It was just food for thought.
    In general, unionfs strikes me as a "patch" for poor planning, or something in the toolbox for an admin who takes over a server farm from another admin who was careless. If building from scratch, I can't envision a scenario where I'd use it.




    The only thing I'd keep closely in mind is the last post from cabrio_leo, above, about exporting a referenced ZFS pool. Some of those OMV/ZFS caveats could produce ugly situations.



    Oh, and I turned on lz4. I don't expect too much (if anything) from it but, again, the cumulative errata suggests it won't hurt anything and there won't be a noticeable performance hit.

  • I was not impressed with compression in the situation with Clonezilla because the files are already compressed as a matter of course...

    How you could benefit from compression with your very own use case: Which Sata Card? (compression at the filesystem level + snapshots if the use case implies that you image the same machine more than once. Then space savings are massive).


    My understanding of multiple vdevs (2 or more) in a zpool is that the pool stripes member vdevs in a manner similar to RAID 0. That would mean the loss of a single vdev (1 mirror) means losing the entire pool. Would that be rare? I imagine it would be but, if a vdev is lost, the chance of recovering the pool or any of its data is nearly non-existent. By extension, adding additional vdevs to the pool increases risk.

    No. It's explained here: Mirrored vdevs in one large zpool


    Now before you start to complain that I interfere with your personal conversation with someone else (and the 'oversized ego' BS, personal insults and your usual stuff)... this is a forum: the best way to use a forum is not to ask the same questions again and again but to learn from already answered questions. That implies that something like peer review happens in a forum: if something's misleading or even wrong, a quick note by someone else might be a good reaction. That just happened. If you want to avoid this and keep your conversations private, then please simply do that (PMs exist here).

    • Official Post

    No. It's explained here: Mirrored vdevs in one large zpool


    I read that link in its entirety, and others like it, and I believe my understanding is correct.


    Lose a vdev, lose the pool. It doesn't matter if the single vdev members are single disks, individual 2-disk mirrors, or raidZ1/2/3 arrays. Admittedly, it would take a lax admin to let all disks in an array vdev die before acting, but a mirror or even an array is functionally a single disk - not unlike a pool of vdevs that are single disks. I might be wrong about this, but it's something I can check in a VM. (Add 3 individual mirrors to a pool, and kill a mirror.)
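    (For what it's worth, that check doesn't strictly need a VM -- file-backed vdevs are enough. A throwaway sketch using made-up files under /tmp, purely for testing:)

    Code
    # four small sparse files standing in for disks
    truncate -s 128M /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
    zpool create testpool mirror /tmp/d1 /tmp/d2 mirror /tmp/d3 /tmp/d4
    zpool status testpool
    # now "kill" one entire mirror vdev
    zpool export testpool
    rm /tmp/d3 /tmp/d4
    zpool import -d /tmp testpool   # fails: a whole top-level vdev is gone, and the pool with it
    rm /tmp/d1 /tmp/d2              # clean up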


    After taking another look, here's a direct quote from the link:
    Be careful here. Keep in mind that if any single vdev fails, the entire pool fails with it. There is no fault tolerance at the pool level, only at the individual vdev level! So if you create a pool with single disk vdevs, any failure will bring the whole pool down.
    ____________________________________________________________



    Notice how the above was done factually and without the slightest hint of condescension?
    The response was completely devoid of:
    Sorry, can't have bad information, habits, encouraging users to do bad things, on the forum.
    Maybe we can learn something here....



    On the forum:
    This might come as a shock to you but, brace yourself, I'm not the only one you've managed to rub the wrong way with abrasive rhetoric. Trust me on that. And I'm actually saying this in a friendly manner when I invite you to check out this link -> Here. I know you won't, but you need to, and it will probably take you another 20+ years to realize it.
    (But, please, don't believe me. Ask your forum peers, and ask them for an honest answer.)


    In tech subjects, you could be right as rain, but few actually hear you because the delivery is often with a sledgehammer and 60-grit sandpaper. Along those lines, I'll give you a couple of examples where you're notorious for "your help" or for correcting "bad information".


    When someone's mdadm RAID array dies, the loss of their data is painful enough without your giving them a lecture about how "stupid it is to use anachronistic RAID in modern times", with the thinly veiled implication that the user is stupid by extension. This forum is replete with examples of this. What's wrong with making at least a halfhearted attempt to help them with a link and an attempt to "guide" them (not pound them) toward a better solution?


    (*Hint* If you decide to go that route, on your way toward acquiring people skills, lose the word "stupid". Generally, those who favor the word have a lot of shortcomings of their own.)


    And then there's your stance on R-PIs. "I get it." It's not a hot-rod platform for OMV and, in the world of SBCs, the R-PI is a Yugo among Ferraris. The crux of the matter is, so what? The average Joe is not an SBC expert. In the beginning, most simply blunder into the world of SBCs. After they buy an R-PI, they want to do something with it - not be told about what a "stupid" decision they made. If they need something better, that realization will come on its own in the fullness of time.
    In any case, the R-PI is the MOST popular SBC, bar none, and there's nothing you can do, as an individual on this forum or anywhere else, that will change that. The R-PI marketing campaign is way bigger than you are, with an unbelievable number of R-PIs already in public hands. On the other hand, attracting R-PI owners to OMV will increase the user base, help spread the word about OMV, and increase donations in the process, all of which helps to strengthen the project. Ergo, OMV's future is brighter with R-PI support. You saw the numbers on SourceForge. Roughly 1/3 of OMV users are R-PI owners. Dump the R-PI, and a LOT of users go elsewhere. It's that simple. If you don't care about that, you don't care about the project.


    And while you may not believe it, you're not always right. From your perspective, it may seem that way, but you're choosing to look at things from a single perspective: your own. For a change, you might want to try to step out of your own shoes and look at things from the perspective of others. The view will be astonishingly different.


    A forum of this type is about OMV tech assistance, primarily, with tech education as an obvious secondary role. It's about assisting long-time OMV users with tech issues and attracting new users - not having an abrasive moderator drive them away. Do you have tech skills? Yes, I'll grant you that. From what I can glean, you have excellent open-source dev skills, extensive knowledge of SBC hardware (down to the minute details associated with various models) and strong knowledge of electronics in general. You are, without a single doubt, a great multi-talented asset. But people skills, which are every bit as important? Well, in this instance, I'll be kind.


    In simple terms: help someone on the forum with a virtual smile and you'll get the appropriate thanks and, hopefully, OMV gets a donation. Act like a pompous a~~ and it doesn't matter how good you are, pushback is inevitable.

  • Hi guys!
    Once again one of my threads has become an exercise in right and wrong behaviour on a forum... I am not going to say more than...


    All help, however it is intended, is only as good as the delivery... I can moan about having time to do things and have been an ass stating that...


    Those that know me know that when I am doing things I will give concrete examples using code to show how things work in the terminal, BECAUSE at the end of the day that is what many want but don't want to appear s***** by saying so on a forum...


    Showing practical examples gives people something to work with... of course there are a number of things that could go wrong with my practical examples on someone else's computer, depending on what is installed etc.


    I am trying to build up knowledge from lots of threads where many never give practical working examples to show their progress... and that is so disconcerting!!!



    I made references to some videos about exporting a pool and importing it again because the guy that did the video did it that way... YES, I understand the ways that can be useful, as pointed out, but in this case it was all I had to work with until I started seriously reading the help for ZFS, and then I could show by example how to create a pool with disk IDs without needing to export it and import it to do just that...


    I will continue testing zfs but I thank tkaiser for showing me that it has its worth.....


    Please let us leave out the comments about each other's way of doing things; it makes for a very cluttered thread and harder for anyone to follow... YES, I take responsibility for starting this in the first place, but now I want it to stop... PLEASE!


    bookie56

  • I made references to some videos about exporting a pool and importing it again because the guy that did the video did it that way...

    @bookie56 Could you please post that link to the videos, if it is still available? Thanks.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • There is no fault tolerance at the pool level, only at the individual vdev level! So if you create a pool with single disk vdevs, any failure will bring the whole pool down.

    Yes, in my opinion this is right. In my ZFS setup I have decided to go for a striped solution with 2 RAIDZ1 vdevs, each of 3 disks. Each vdev tolerates the failure of 1 disk. If one vdev has 2 failed disks, the whole pool will fail. Therefore a backup is essential. :)
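    (For anyone wanting to replicate that layout, the creation command looks roughly like this -- a sketch; the pool name and the by-id device names are placeholders:)

    Code
    zpool create -o ashift=12 tank \
      raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
      raidz1 /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6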


    Btw: In the past I have extended my pool from 4 to 6 disks and restructured it from a mirrored to the striped configuration. The 50% storage efficiency of the zmirror out of 6 disks was not sufficient for me. Beforehand I had already tested the complete restore process with a single test disk, to be sure not to get any nasty surprises later. Sometimes I get the impression, also in my personal environment, that people rely on their backup procedure without ever trying the necessary steps to restore their data.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

    • Official Post

    Yes, in my opinion this is right. In my ZFS setup I have decided to go for a striped solution with 2 RAIDZ1 vdevs, each of 3 disks. Each vdev tolerates the failure of 1 disk. If one vdev has 2 failed disks, the whole pool will fail. Therefore a backup is essential. :)
    Btw: In the past I have extended my pool from 4 to 6 disks and restructured it from a mirrored to the striped configuration. The 50% storage efficiency of the zmirror out of 6 disks was not sufficient for me. Beforehand I had already tested the complete restore process with a single test disk, to be sure not to get any nasty surprises later. Sometimes I get the impression, also in my personal environment, that people rely on their backup procedure without ever trying the necessary steps to restore their data.

    That particular quote in black text above (excerpt: no fault tolerance at the pool level) is not my own. It came, word for word, from the ZFS site referenced above.


    Really, it makes sense. At the vdev level, ZFS file system functions are in control. At the pool level, it appears that the ZFS equivalent of LVM is in control. It seems to be a logical division.
    __________________________________________


    When it comes to performing backups, I believe the vast majority of users (even some admins) don't bother. Generally, the "need for backup" lesson is learned once and it tends to be quite painful. On occasion, as you noted, a second even more painful lesson comes along when the restoration of a backup fails. Testing is key.
