Solved? OMV and software raid 5


    • flmaxey:
      Sorry...please read your quote...seemed to have added my info there....;)

      flmaxey wrote:

      Regarding: "OMV on 7 computers being used for storage": (Are these repurposed consumer PC's?)

      This is the crux of the matter. I imagine that these boxes are the source/destination of all your client clone copies. Based on an earlier post, I'm guessing that the clone copies are organized alphabetically..??

      Here are a few questions for you:

      1. What is your total storage requirement, presently?
      2. What is the data to be stored, primarily? (Factory-default drive clones? Rough size of each?)
      3. How many storage drives are in each OMV server?
      4. What file system are you using?
      5. How much memory is in the OMV boxes?
      (While I don't know the parameters, or the reasons behind it, ZFS pros suggest 1GB of RAM per TB of storage, but that's in a file server / server farm application, which equates to a LOT of traffic with concurrent users. Without "file deduplication", the ZFS on Linux project recommends a minimum of 2GB. (See Hardware.) In your scenario, with OMV, I think you'll be fine with 4GB of RAM. OMV will work with 1GB (on Raspberry Pis and other small boards), so in your scenario that would leave the remaining 3GB for ZFS.)

      Finally, how critical do you see this data to be? Meaning, if you lost some or all of it, what would be the consequence? (I imagine there may be "classes" involved. ?TB is critical, where ?TB is important, but not critical.)
      ______________________________________________________________
      Hi again!
      Wow you really have a two penneth worth of info here.....


      After reading everything and what others have said I probably don't need a raid scenario on my Work Server. I just need a way to pool the drives so that what I save on them is saved alphabetically...ie I don't need to move them around when one letter has more images than another....


      BTW I do clone all my computers....and have backups of those clones....;)


      Because I am a one man band as it goes....;) I utilise everything I can in the way of drives and storage space...If a customer decides to go from a work station to a laptop and doesn't want to do anything with the old computer I then use it (if it is worth having ) as a storage vessel..


      I will add that if I reuse such a computer I check the motherboard for components that look like they are swelling....;) I redo the thermal paste, which I usually replace on all computers after a couple of years... I even replace the processor cooling system if I have something available...


      I am good at what I do building computers and repairing them...but my real expertise area is problem solving....


      I am always honest with my customers and say that if I don't have the particular solution to their problem I will find it....(if at all possible...and I'm already aware of how to solve most of the usual problems)


      No one person can be good at everything....


      I hate seeing people put budget-built computers up for sale and have the nerve to call them games computers....just had a new guy on the block claiming he sells games computers who then proceeds to use a Corsair CX450 as a power source.....I even see compromises from known brands...Not condemning Corsair....but never use a budget PSU in a games computer....


      Those that know me know that a games computer is something worth building well and not something one should compromise on....if you have a small budget...then no games computer....;)
      Your questions:
      1. I have 8TB in a 4 x 2TB software RAID 5 and that space is at its limit....I was hoping to do a big jump in size so that I can concentrate (barring disk failures) on other things...
      I have just bought 5 x 4 TB WD Red (not all from the same place) for my work server....I recently upgraded my VMWare from 1TB to 2TB so that is OK at the moment...
      The computer that I back my clonezilla images on is a purpose built computer by me.... I have a tendency to move computers when building new ones..the computer I am sitting at at the moment is my company one and is my latest build with good components....my clonezilla image backup computer is an old company one....;)


      2. When I buy in a new computer...most often nowadays a laptop that my customer has chosen....I start it up, set it up for them and install their software, Microsoft Office...if they have one....set up a web mail client... antivirus protection etc and then create a purpose image of that computer...Now it depends on the computer..if it has an SSD or standard drive around 256 GB - then I do a complete image with recovery partitions (if available)...but on larger drives I just save parts so that I can restore their computers....the images vary in size up to about 37GB...so ZFS compression would save me space...I will add that my customers sign an agreement to the effect that I tell them what I have saved.... I also store sysprepped WIMs etc on the clonezilla backup server...
      I always have up to date versions of an OS so that if a computer comes in with just so much bluff software in use (customers have downloaded by mistake) and I can see that there are several different problems then I usually recommend a restore....I used to restore to factory defaults but more often than not it is just a waste of time so I restore to their OS and activate their license....


      3. The amount of drives in each OMV server varies on what I have.... but most have at least 3 to 4 TB and are sometimes a backup of a backup.....The clonezilla server backup computer I am upgrading with extra 3TB drives....I have 3x3TB at the moment and have two more available...


      4. All my computers apart from my Windows ones are using ext4..


      5. The memory in the OMV computers is all standard..sorry some have memory with heatsinks...I never build even a standard computer with memory without heatsinks....All have at least 4GB but several are using DDR2 6400. I would not be using dedup....


      If I lose a customer clone it means I have to start from scratch...but like I said I usually have updated WIMs of all OS from XP to Windows 10...so the clones save me time....but it's not an end-of-the-world crisis....


      At the moment I am using the web user interface for my OMV computers and have everything saved as favourites in Firefox, Chrome, IE....I also use PuTTY. PuTTY is set up for all my computers and I can quickly access them via my company computer...

      flmaxey wrote:

      Really, as it seems from the thread, your considerations may be more about data organization than anything else. Of course getting the right storage structure would help, and go a long way toward preventing a potential disaster.
      Data organisation is easier if you have drives pooled as one otherwise you have to allocate one drive for a - f another for g - k etc but the risk is you will always have more of one letter than another.....so pooling drives saves this problem....Another thing is maybe a - f has 20GB left but the image you want to save is 35GB....;)

      tkaiser has opened my eyes to the need to restructure, and ext4 doesn't give the advantages of ZFS....


      As you can imagine this undertaking is going to take time and I don't want to have to redo things for a while....I always have a new backup of my system drives on all computers if I have upgraded the OS....so I can reset in case of problems.....

      I am better at maintaining my hardware than my software and files, and need to improve on that.....BUT that doesn't mean I haven't got backups of all important files....

      bookie56
    • I'm out the door again, for a few days. I'll get back to you, on return.
      Until then:
      ________________________________________________________

      Repurposing older PC hardware - it's what I'm doing these days to get around buying new hardware on a regular basis. (I see nothing wrong with re-furb's. :D ) While I enjoy a PC game, from time to time, nearly any PC built in the last 10 years will run the older games. (C&C and others.)

      On a side note regarding "gaming computers";
      I'm stunned at both the realism in today's games and the beef in the hardware required to run them. GPU's are rivaling CPU's for processing power... It's simply amazing, and so are the costs. ($400+ GPU's .......)

      If I were you, I'd do the same thing for building file servers. Hardware laying around? Install OMV and use it. Outside of a data center (where ECC is a real requirement), consumer PC's make fine file servers.

      On the fix and repair thing - I've found in both commercial and consumer electronics, the majority of failures are power supplies or, otherwise, are power related. (I'm sure you're more than aware of that.) Beyond power issues, in PC's where hardware/software interaction can cause problems, things can get real interesting.
      (While I should drop weird issues and move on, I've spent many hours puzzling out "what happened".)
      ______________________________________________________

      Some thoughts on using any pooling technique:
      The convenience associated with using pooling techniques can be far offset by the agony of trying to recover from a failure. If trying to recover a single disk, in most non-CoW disk formats, there are plenty of utilities out there. When trying to recover a disk that was part of a pool, well, you "might" find something or you might not. In this respect mergerFS shines, because pooled disks are independent with their file systems intact, below the pool. In essence, you can bust a mergerFS pool and everything is still there. (But it might be a job to sort it all out, in finding where mergerFS stashed folders and files.)
      So, is it better to reorganize your data, or become dependent on a pooling technique, or create some combination of the two? I believe that's the question.
      ((With the above noted, with solid regularly scheduled backup, many potential recovery nightmares simply fade away. With solid backup, you can pool to your hearts content.))
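      For what it's worth, a mergerFS pool is usually just one fstab line. The branch paths and options below are illustrative only, so check the mergerfs documentation for the version OMV's plugin installs:

```
# Pool two independent ext4 data disks into /srv/pool.
# "category.create=mfs" writes each new file to the branch with the most free space.
/srv/dev-disk-by-label-DISK1:/srv/dev-disk-by-label-DISK2  /srv/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0  0
```

      If the pool is ever "busted", each branch is still a plain ext4 disk that mounts and reads on its own.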

      As noted in the last post, I believe you're in a use case that fits a (ZFS) RAID scenario. From there, it would be a matter of the vdev sizes and disk sizes you're comfortable with. Personally, until they've been vetted over more time, I wouldn't use 8TB drives, period. I see 6TB as the absolute upper limit, with 4TB down to 2TB being preferred. Why? As drives get larger, failure is more likely. Trusting a huge drive with that much data becomes a risk / benefit trade-off that I don't think is worth taking. (But that's simply my opinion.)

      On the organization issue:
      Sorting images alphabetically is one way to organize. Have you given thought to organizing images by year, with monthly sub-dirs? (You could search on customer names, if you set the customer name as part of the image file name.) Such an approach would also give you an indication of when you could purge old data. That scheme could be further divided by image type (Win8, Win7, Vista, etc.). In any case, if you can come up with a better way to divide up the data store, that's logical and intuitive to you, the benefits are obvious. Just some thoughts.

      I'll look closer at your answers and numbers when I get back.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • bookie56 wrote:

      1. I have 8TB in a 4 x 2TB software RAID 5 and that space is at its limit....I was hoping to do a big jump in size so that I can concentrate (barring disk failures) on other things...I have just bought 5 x 4 TB WD Red (not all from the same place) for my work server....I recently upgraded my VMWare from 1TB to 2TB so that is OK at the moment...
      The computer that I back my clonezilla images on is a purpose built computer by me.... I have a tendency to move computers when building new ones..the computer I am sitting at at the moment is my company one and is my latest build with good components....my clonezilla image backup computer is an old company one....;)
      2. When I buy in a new computer...most often nowadays a laptop that my customer has chosen....I start it up, set it up for them and install their software, Microsoft Office...if they have one....set up a web mail client... antivirus protection etc and then create a purpose image of that computer...Now it depends on the computer..if it has an SSD or standard drive around 256 GB - then I do a complete image with recovery partitions (if available)...but on larger drives I just save parts so that I can restore their computers....the images vary in size up to about 37GB...so ZFS compression would save me space...I will add that my customers sign an agreement to the effect that I tell them what I have saved.... I also store sysprepped WIMs etc on the clonezilla backup server...
      I always have up to date versions of an OS so that if a computer comes in with just so much bluff software in use (customers have downloaded by mistake) and I can see that there are several different problems then I usually recommend a restore....I used to restore to factory defaults but more often than not it is just a waste of time so I restore to their OS and activate their license....
      3. The amount of drives in each OMV server varies on what I have.... but most have at least 3 to 4 TB and are sometimes a backup of a backup.....The clonezilla server backup computer I am upgrading with extra 3TB drives....I have 3x3TB at the moment and have two more available...
      4. All my computers apart from my Windows ones are using ext4..
      5. The memory in the OMV computers is all standard..sorry some have memory with heatsinks...I never build even a standard computer with memory without heatsinks....All have at least 4GB but several are using DDR2 6400. I would not be using dedup....
      On a side note:
      Resetting to factory defaults, in the Windows world, is a "PITA". Generally it means hours of re-updating the OS, maybe adding a service pack back in, removing all the badger-ware, other software "offers", and similar junk. Then some sort of decent firewall and virus scanner must be loaded. And all of that is necessary just to get a PC into a usable state where app's can be loaded. Still more time is involved in that. Again, a royal PITA.
      So, what you're doing in cloning prebuilt app populated images, for customers, makes sense. I'm sure they appreciate it and it's good for business.

      _______________________________________________________________

      On the storage requirement:
      Looking at your total array size (4x2TB = 8TB of disks = a 6TB array), you must have something between 5 and 6TB of data.

      BTW: I see 75% available space filled, on an individual drive, drive pool, or array, as "full". Any number of things can happen where the remaining 25% can be filled quickly and cause trouble. ZFS, or any "copy on write" file system, does exactly that - copies on a write - so a reasonable chunk of free space is required. To my way of thinking, 25% free space defines "reasonable".
      I provision for storage starting at 25% fill, and start looking at expansion when the fill percentage exceeds 55 to 60%. Again, the 25 - 75% start and end points are just my opinion.

      And I agree with your point that you may not need ZFS on your server. It's just a matter of shuffling or allocating your images in a way that makes sense to you, uses your available drive space well and, the most important part, backing it up. But for the sake of discussion....

      Based on a 4x4TB drive array (what I consider to be the safe limit for RAIDZ1, or the RAID5 equivalent), you'd have a 12TB array. Even if you stretched the array to 5x4TB disks (I believe that's risky), it would be 16TB. 6TB disks in the array might get you into a size that you might like, but in terms of a single drive failure that can cascade into an array failure, the risk really starts to climb. (So does the cost.) Regardless, ZFS will allow you to pool vdevs and achieve truly enormous pools; however, on any one server, I (personally) would be reluctant to put that many "data eggs" in one basket. It's a matter of risk trade-offs and what you're comfortable with.
      _____________________________

      In practical terms:
      Since you have hardware available, I'd seriously consider dividing up your data store between two servers. How? If the store is divided based on image file dates, the system you have in place (alphabetical by customer name) could remain unchanged. I'd consider creating a second server as an "archive server" running OMV. Again, everything (directory names, structures, etc.) would be a duplicate of your current data store, but anything over 2 years old (maybe 3 years old?) would be moved to the "IMAGE-ARCHIVE" server. With customer archived image folders shared to the network, in the event that an old customer needs a full restore, pulling the image onto the working server wouldn't take too long. (I'd make sure the Ethernet path between the servers is 1Gb/s, minimum. 100Mb/s would be too slow.)

      [Side note - finding cloned images with a specific date/time stamp can be done in the CLI with the "find" command. An easier way might be to use the find function in WinSCP and specify a before or after date. As an example, the mask * <2014-01-01 generates a list of files and folders stamped older than Jan 1st, 2014.]
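      As a sketch of the CLI route with GNU find: "! -newermt" matches entries NOT newer than the given timestamp. The paths and customer folder names below are made up for the demo; point it at your real image store instead.

```shell
# Two stand-in customer image folders with old and new modification times.
rm -rf /tmp/img-demo
mkdir -p /tmp/img-demo/OLDCO /tmp/img-demo/NEWCO
touch -d "2013-06-15" /tmp/img-demo/OLDCO
touch -d "2016-03-01" /tmp/img-demo/NEWCO

# List top-level folders stamped older than Jan 1st, 2014.
find /tmp/img-demo -mindepth 1 -maxdepth 1 -type d ! -newermt "2014-01-01"
# → /tmp/img-demo/OLDCO
```

      The same command against an archive candidate list could feed a move script for the "IMAGE-ARCHIVE" server.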

      On the organization end of it; symlinks could help with what you already have. You can, literally, drop in a symlink (a redirect to a folder on another drive or the root of the drive itself), and fill the second drive to near 100% capacity with the contents appearing to be in the first drive.

      drive 1 dir
      |------->(Client_Images)
      ....................|--------------> A - D (local dir on drive 1)
      ....................|--------------> E - G (symlink to drive 2. All files and sub-dirs of the folder E - G actually reside on drive 2.)

      (In the above, the Link/shortcut would be E - G and it would point to, for example, /srv/dev-disk-by-label-2TB (or the Debian drive name/path equivalent if the server is not OMV.)
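      In shell terms, that layout can be sketched as follows. The /tmp paths stand in for real mount points like /srv/dev-disk-by-label-2TB, and the customer file name is invented:

```shell
# Drive 1 holds the visible tree; the "E - G" span actually lives on drive 2.
rm -rf /tmp/drive1 /tmp/drive2
mkdir -p "/tmp/drive1/Client_Images/A - D"
mkdir -p "/tmp/drive2/E - G"
ln -s "/tmp/drive2/E - G" "/tmp/drive1/Client_Images/E - G"

# Anything saved through the link lands on drive 2:
touch "/tmp/drive1/Client_Images/E - G/ERIKSSON.img"
ls "/tmp/drive2/E - G"
# → ERIKSSON.img
```

      To clients browsing the share, everything appears to live under Client_Images on drive 1.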

      The only limitation to symlinks, for allocating data, is the imagination. The down side is, without coming up with a symlink scheme that's easy for you to remember and understand (or, otherwise, document), several symlinks can become unwieldy. Lastly, you'd need to keep an eye on each drive's fill percentage, or automate a report that notifies you of the fill percentage from time to time.
      OMV has a feature for setting up E-mail notifications, with the results generated from a command line. That's useful for a LOT of things. BTW: WinSCP will set symlinks on most Linux boxes, to include your main server.
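      As a sketch of such a fill report, a small script like this could be hooked to an OMV scheduled job with E-mail notification turned on; the mount point and the 75% threshold are just examples:

```shell
#!/bin/sh
# E-mail-friendly fill report: warn when a mount point exceeds the limit.
# MOUNT and LIMIT are examples - point at your data drives and pick your own cutoff.
MOUNT="/"
LIMIT=75

# df -P gives one portable line per filesystem; field 5 is "Use%".
PCT=$(df -P "$MOUNT" | awk 'NR==2 { gsub(/%/, ""); print $5 }')

if [ "$PCT" -ge "$LIMIT" ]; then
    echo "WARNING: $MOUNT is ${PCT}% full (over ${LIMIT}%)"
else
    echo "OK: $MOUNT is ${PCT}% full"
fi
```

      Run one job per drive (or loop over mount points) and the notification mail becomes a simple capacity dashboard.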

      _______________________________________________

      So, bottom line, it seems as if you're warehousing a good-sized chunk of data. Further, you have PC's (many of them consumer PC's) that will function as file servers but will not accommodate a lot of drives.
      (Your work server(s) may be exceptions.)

      I'm going on the assumption that your storage scenario is constrained by the following:
      - reasonable sized drives / drive costs
      - a reasonable number of disks in an array. (Also constrained by the number of drives a consumer PC can house.)

      Subject to my own biases, notions of data storage, safety, etc.:
      I'd recommend that you consider creating a second server, strictly for the storage of archived images. (Preferably those images that are old enough to where they're not likely to be used again.) This server could be one of your OMV machines, that will accommodate 3 or 4 drives. Other than shifting images to your archive server from time to time, and the occasional backup, it could even be powered off until you need it. If the box is off most of the time, but exercised once every few months, drive life can be quite long. (Note: don't use an SSD in a server that's powered off for extended periods.)

      While ZFS RAID is great for data preservation and self-healing, if you have solid backup, whether you use ZFS RAID on one or both of these servers, for compression, would be your call. As you take ZFS into consideration, remember, the cost of ZFS compression is at least one parity drive. Further, outside of intentional file duplication (copies=2), there's no point in using ZFS as the file system for a single drive. For a single-drive file system, there's nothing wrong with EXT4. If you wanted file checksum scrubbing on a single drive, which would make bit errors noticeable, I'd use BTRFS.

      So what do you think?

      MergerFS and/or symlinks are still not out of the question. They can give you what you want, "pooling" with little to no downside. Give it some thought, as I'm out the door tomorrow. I'll be back next weekend.

    • A last note, since you have VMWare running:

      While I mentioned this before, you could build an OMV server in a virtual machine to do test operations. I have a virtual OMV build that I've used for testing ZFS / mdadm RAID / mergerFS, and for limited RAID failure scenarios, that has 7 virtual disks of 5GB each. (While small, lots of disks can prove a number of RAID and pooling concepts.)

      Similarly, you could set up a 4x12GB disk ZFS RAIDZ1 array, resulting in 36GB of usable space, to test ZFS compression. Then copy a single 30GB drive image onto it and see what you get. If ZFS compression doesn't give back the cost of the parity drive, 12GB or more, using ZFS solely for compression would not be very compelling.
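      If the virtual machine has the ZFS packages installed, a throwaway file-backed pool makes this test cheap. The paths, pool name, and sizes below are illustrative, and the commands need root plus a working ZFS module:

```shell
# Four sparse 12GB files stand in for disks; build a RAIDZ1 test pool from them.
for i in 1 2 3 4; do truncate -s 12G /var/tmp/vdev$i.img; done
zpool create testpool raidz1 /var/tmp/vdev1.img /var/tmp/vdev2.img /var/tmp/vdev3.img /var/tmp/vdev4.img
zfs set compression=lz4 testpool

# Copy a sample image in, then see how well it compressed.
cp /path/to/sample-image.img /testpool/
zfs get compressratio testpool    # e.g. 1.30x would mean roughly 23% space saved

zpool destroy testpool            # clean up; the .img files can then be deleted
```

      Because the backing files are sparse, the pool costs almost no real disk space until data is written.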
    • flmaxey wrote:

      On a side note: Resetting to factory defaults, in the Windows world, is a "PITA". Generally it means hours of re-updating the OS, maybe adding a service pack back in, removing all the badger-ware, other software "offers", and similar junk. Then some sort of decent firewall and virus scanner must be loaded. And all of that is necessary just to get a PC into a usable state where app's can be loaded. Still more time is involved in that. Again, a royal PITA.
      So, what you're doing in cloning prebuilt app populated images, for customers, makes sense. I'm sure they appreciate it and it's good for business.

      _______________________________________________________________
      Updates are always a pain even on Clonezilla images if they haven't been needed for a while...BUT customers are hopeless at keeping their software available...most of my customers can't remember what they have done with their version of Microsoft Office etc....so I like saving the pain of fixing those problems...

      I like your idea of archiving older images to another computer...I can fix that....

      I will test the compression and get back to you on that one....



      flmaxey wrote:

      Based on a 4x4TB drive array (what I consider to be the safe limit for RAIDZ1, or the RAID5 equivalent), you'd have a 12TB array. Even if you stretched the array to 5x4TB disks (I believe that's risky), it would be 16TB. 6TB disks in the array might get you into a size that you might like, but in terms of a single drive failure that can cascade into an array failure, the risk really starts to climb. (So does the cost.) Regardless, ZFS will allow you to pool vdevs and achieve truly enormous pools; however, on any one server, I (personally) would be reluctant to put that many "data eggs" in one basket. It's a matter of risk trade-offs and what you're comfortable with.

      _____________________________
      I first bought 4x4TB drives and then, because I wasn't sure what I would be using, bought a fifth one....but if 4x4TB is the way to go...then I have a spare and I am sure I can find a use for that....lol



      flmaxey wrote:

      The only limitation to symlinks, for allocating data, is the imagination. The down side is, without coming up with a symlink scheme that's easy for you to remember and understand (or, otherwise, document), several symlinks can become unwieldy. Lastly, you'd need to keep an eye on each drive's fill percentage, or automate a report that notifies you of the fill percentage from time to time.

      OMV has a feature for setting up E-mail notifications, with the results generated from a command line. That's useful for a LOT of things. BTW: WinSCP will set symlinks on most Linux boxes, to include your main server.

      _______________________________________________
      As I said before I have been using PuTTY in Windows for access but have used WinSCP before and will refresh my memory on that...

      I don't mind the idea of symlinks and will follow your examples..



      flmaxey wrote:

      While ZFS RAID is great for data preservation and self-healing, if you have solid backup, whether you use ZFS RAID on one or both of these servers, for compression, would be your call. As you take ZFS into consideration, remember, the cost of ZFS compression is at least one parity drive. Further, outside of intentional file duplication (copies=2), there's no point in using ZFS as the file system for a single drive. For a single-drive file system, there's nothing wrong with EXT4. If you wanted file checksum scrubbing on a single drive, which would make bit errors noticeable, I'd use BTRFS.


      So what do you think?

      MergerFS and/or symlinks are still not out of the question. They can give you what you want, "pooling" with little to no downside. Give it some thought, as I'm out the door tomorrow. I'll be back next weekend.
      Yes, I hear what you are saying, but it is the data integrity checksum in ZFS that tkaiser pointed out as a big plus for storing files....but if you know a better way with ext4 I am listening....and I have read that it is still early days for BTRFS....BUT if you think there is a scenario with BTRFS that will work for me....I am listening....

      If I understand what tkaiser said regarding ZFS data integrity checksum....it does this as you save the original files and when you do a back up that would show up any discrepancies between the two versions of the files...that is of course a big plus....Of course the checksum created is only as good as the original files data....;)


      bookie56
      I'm not trying to discourage the use of ZFS. I'm using it myself, but for different reasons. In my case (I've mentioned this before), I have old data and it's completely beyond replacement. If lost, it's gone forever. Also, since retention is long term, I'm worried about "bitrot". How much of a threat is "bitrot"? It depends but, in most cases, it's not a huge threat. (A flipped bit that changes the color of a pixel, in a picture, might not even be noticed.) In your case, if a bit is flipped in a stored system file, well, that's another matter.

      Initially, I didn't want to deal with ZFS because it represents added complexity. The complexity I'm referencing, after initial setup, is not in daily use. The complexity I'm worried about is what may be involved in trying to recover from a disaster. Regardless, given its data preservation features, I decided to go with ZFS for one simple reason - with good backup on hand, I wouldn't even try to recover from a serious ZFS issue. Other than the barest of "try this" attempts, I'd simply rebuild and copy data back from backup. That's the "magic" behind solid, tested backup. (I can't stress it enough.) With backup, zero drama is involved.
      _______________________________________________________________________

      After this discussion:
      - I imagine that you're going to dedicate a box to being a storage server and (potentially) another one as an archive server, with folder organization that's the same as the primary server, for housing 2 or 3+ year old images.
      - And, it seems, you're going with a RAIDZ1 array. [The ZFS equivalent of RAID5, for others who may be reading this...]
      _______________________________________________________________________

      Another note or two on organization:
      Smaller sized chunks are easier to deal with when compared to one or a few out-sized folders. Accordingly, on the alphabetical break down, give some thought to folders that are a single letter. (26 folders total.) Admittedly, "Z" and similar letters will have few users, if any, but in the commonly used letter spans, customer images will be split up into chunks that are easier to deal with. Smaller is better, it allows room to grow, and it still fits in your organization scheme. Breaking things up is easier to do now, beginning with a fresh build.
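      If it helps, the 26 single-letter folders are a one-liner with bash brace expansion. The path is just an example; use your pool's shared folder instead:

```shell
# Create the 26 single-letter customer folders in one shot (bash brace expansion).
rm -rf /tmp/Client_Images
mkdir -p /tmp/Client_Images/{A..Z}

# Confirm all 26 exist.
ls /tmp/Client_Images | wc -l
# → 26
```

      Done on the fresh build, every customer image then has an obvious, pre-existing home.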

      As was discussed before, USB 3.0 drives work fine for boot drives and they save a SATA port. Also, as noted, you can have (literally) as many clones as you like at relatively low cost. If you want to give it a try, I can give you a few pointers. Otherwise, do a standard build from a CD. (I'd go with the latest 3.X OMV version. The last I heard, ZFS hasn't been ported to 4.0 yet.)
      ______________________________________________________________________

      Setting up ZFS on OMV is pretty straightforward.

      - I'd start with a clean build.
      - Load the OMV-Extras plugin.
      - Then, install the ZFS plugin.
      (If you need help with the above, advise. I'll add more detail.)

      To get started, with your 4X4TB disks installed:
      - Under Storage, Physical Disks:
      Click on the drives to be included in the array (/dev/sdb, etc.) and Wipe each drive, one at a time. Using "Quick" works fine. (While I don't know if it's possible, don't fat-finger it and try to wipe your boot drive!)

      Under Storage, ZFS, click on Add Pool.
      You'll get a dialog box as follows. The Pool Name and Mountpoint can be anything you like, but the / as the start of the Mountpoint is not an option. Note the highlighted entries below and select accordingly.

      [Screenshot: the Add Pool dialog]
      Save it and the pool is created. It will appear in the Overview tab, under ZFS. With 4x4TB, you should have something in the neighborhood of 12TB, or a bit less.
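      For reference, the pool the plugin builds corresponds roughly to a single CLI command. The pool name, mountpoint, and device names below are assumptions; using /dev/disk/by-id/ paths instead of sdX names is generally safer against drives being re-lettered:

```shell
# RAIDZ1 pool named "ZFS", mounted at /ZFS, built from four 4TB drives (run as root).
zpool create -m /ZFS ZFS raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Confirm: with 4x4TB in RAIDZ1, roughly 12TB is usable.
zpool list
```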
      ___________________________________

      Note that ZFS was born on Solaris which, when it comes to file and folder permissions, is a bit different from Linux. I'll assume you want Linux permissions (a very good idea, BTW), and that you want to turn compression on, so a few ZFS attributes need to be edited.

      Click on / highlight your newly created ZFS pool in the Overview tab, then click on Edit. The following dialog box appears.
      The entries highlighted, in the following, will need to be changed.

      When you click on a line, an entry line opens up with save and cancel buttons. Type each entry as you see them highlighted in the following and click on save. When all entries are modified and saved, click on the save button at the very bottom.

      ((The following capture is from my active server so the name and mountpoint are /ZFS1, versus just /ZFS in the capture above. The mountpoint was set when you created the pool. It does not need to be edited here.))


      After saving, and after the above dialog box clears, click on your ZFS pool and Edit again, and recheck these entries. If an entry isn't typed exactly right when saved, it simply won't take. Going out and coming back in again will reveal if any errors were made. Fix as/if needed.
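      For what it's worth, those Edit-dialog changes map onto plain zfs set commands. Something along these lines (pool name ZFS and the exact attribute values are assumptions based on a typical Linux-friendly setup, not gospel):

      ```shell
      # Typical "Linux permissions" attribute settings for a pool named ZFS
      zfs set acltype=posixacl ZFS       # POSIX (Linux-style) ACLs
      zfs set xattr=sa ZFS               # store extended attributes efficiently
      zfs set aclinherit=passthrough ZFS # children inherit ACLs unchanged
      zfs set compression=on ZFS         # enable compression

      # Verify what actually took
      zfs get acltype,xattr,aclinherit,compression ZFS
      ```
      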

      That's about it for a RAIDZ1 - ZFS pool, for your scenario.
      _________________________________________________________-

      To take advantage of ZFS error detection, you'll need to activate scrubbing routines on a schedule and, to get a heads up if something is going wrong, you'll need E-mail notifications.

      Under System, Scheduled Jobs, I have the following jobs setup;




      Of note is that ZFS is copy-on-write and, in the process, does a checksummed file-integrity check. Great, but that does nothing for files that haven't changed recently. (I.e., your image files will remain, for the most part, untouched.)

      (A summary of the jobs set above.)
      - A zpool scrub does checksummed file-integrity checks on files that have not been written to recently.
      - zpool status reports what was found during the last scrub.
      - zpool scrub -s stops a scrub, which I believe is important if rebooting. Accordingly, in the unlikely event that a scrub is still underway, hung, etc., this job stops it about 30 minutes before the monthly scheduled reboot.
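      In command form, the three scheduled jobs boil down to the following (the pool name ZFS1 is from my setup; adjust to yours):

      ```shell
      # Monthly: scrub checksums every block in the pool
      zpool scrub ZFS1

      # ~8 hours later: report the result (mail it via the job's Send email option)
      zpool status ZFS1

      # ~30 minutes before the monthly reboot: stop any scrub still running
      zpool scrub -s ZFS1
      ```
      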

      **In my case, with a bit more than 1TB in a mirror, an i3 processor, and 12GB RAM, a scrub takes about 2 hours. I schedule the zpool status command for about 8 hours later, when I know the scrub should be complete, and have the results mailed to me.
      A last note along these lines: with 6TB of data, you'd be looking at roughly 12 hours of steady drive activity in a scrub. Given that amount of drive activity, I think doing a scrub once a month (followed by a status and, later, a reboot) is proactive enough.
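      That 12-hour figure is just data size divided by sustained read speed. A rough estimate, assuming a hypothetical ~150MB/s sequential read rate:

      ```shell
      # Rough scrub duration: data size / sequential read rate
      data_tb=6
      rate_mb_s=150
      seconds=$(( data_tb * 1000000 / rate_mb_s ))
      echo "~$(( seconds / 3600 )) hours"   # ~11 hours
      ```
      
      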
      ____________________________________________

      For reporting, you'd need the following:
      In each job, you'll need to set the Send email option.


      And finally, to enable the server to send you notes through your ISP, you'll need to fill in the following:

      Under System, Notification:



      The sender, recipient, and username (with password) are your e-mail account. The rest depends on your ISP, their settings, etc.
      _______________________________________

      **BTW: If you happen to be in the Web GUI, you can click on ZFS, your pool ZFS, and Details. You'll get a summary of the last scrub and a list of your current settings.**
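      The same information is available from the shell, if you prefer the command line to the Web GUI (pool name ZFS1 assumed):

      ```shell
      # Last scrub results and pool health
      zpool status -v ZFS1

      # Current attribute settings (everything the Edit dialog shows, and more)
      zfs get all ZFS1
      ```
      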

      If my ramblings are unclear, let me know, and let us all know how it went.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119


    • @bookie56


      Since it's in your experience, I have a question for you.

      I have a PC that's getting really old. It's an AMD Athlon, 2-core with 3GB, running Vista. I've been using it for office stuff, technical drawing with older packages, and running older games on it from time to time. A few months ago it started getting slow for no apparent reason. (Also note that I have two of the exact same model PC, bought for my two sons awhile back. The second PC doesn't have this bogged-down speed problem.)

      I virus-tested it off-line and never found anything serious (no rootkits). Also, it came with its own hardware diagnostics for memory, CPU, video, etc. I've tested it on multiple occasions and nothing showed an error. Just to get past the software-bloat problem, I rebuilt it with Win 7 from scratch and replaced the hard drive with a new one. That didn't help, so I went back to a factory reinstall of Vista, from original disks. Still, it's just damned slow for no explainable reason.
      ______________

      From what I know of modern day electronics (most of which is CMOS):
      I know that CMOS has a shelf life, meaning that, unlike older TTL tech, it won't last indefinitely. However, with CMOS, a working life of at least a couple of decades can be expected. As CMOS gets blown gates and otherwise degrades, it produces intermittent issues where it may work, or it may not, and produces bursts of errors, etc. So, while it's pure speculation on my part, I think that's what is going on with this box. I think it's simply degrading, and getting worse over time as more gates degrade.

      Since you have had exposure to multiple models and items, some of which must be fairly old, I have to ask: have you had similar experiences along these lines? (A completely stock, dog-slow computer, with no explanation as to why?)
    • flmaxey wrote:

      Since it's in your experience, I have a question for you.

      I have a PC that's getting really old. It's an AMD Athlon, 2-core with 3GB, running Vista. I've been using it for office stuff, technical drawing with older packages, and running older games on it from time to time. A few months ago it started getting slow for no apparent reason. (Also note that I have two of the exact same model PC, bought for my two sons awhile back. The second PC doesn't have this bogged-down speed problem.)

      I virus-tested it off-line and never found anything serious (no rootkits). Also, it came with its own hardware diagnostics for memory, CPU, video, etc. I've tested it on multiple occasions and nothing showed an error. Just to get past the software-bloat problem, I rebuilt it with Win 7 from scratch and replaced the hard drive with a new one. That didn't help, so I went back to a factory reinstall of Vista, from original disks. Still, it's just damned slow for no explainable reason.
      ______________

      From what I know of modern day electronics (most of which is CMOS):
      I know that CMOS has a shelf life, meaning that, unlike older TTL tech, it won't last indefinitely. However, with CMOS, a working life of at least a couple of decades can be expected. As CMOS gets blown gates and otherwise degrades, it produces intermittent issues where it may work, or it may not, and produces bursts of errors, etc. So, while it's pure speculation on my part, I think that's what is going on with this box. I think it's simply degrading, and getting worse over time as more gates degrade.

      Since you have had exposure to multiple models and items, some of which must be fairly old, I have to ask: have you had similar experiences along these lines? (A completely stock, dog-slow computer, with no explanation as to why?)
      Things going slow are a pain in the bum and you have an interesting one...

      Usually a new hard drive can speed things up but you have tried that....

      Have a look at the services being used....a lot of the time you can turn off the ones you don't use, and that can fix things....

      Look at this thread for speeding up boot times etc...I have made things a lot better by using this...not to be used on SSD drives, though....

      If you have gone back to the original installation....have you removed the bloatware etc.?

      Make sure you have no indexing going on....it can really slow down a computer....

      BTW please give me some time regarding zfs...bit busy at the moment and haven't got much time to work on things.....

      Not saying that you are using Java, but if that has problems you can have many instances of that service running and the computer will stand still...just an outside-of-the-box thought...

      bookie56


    • bookie56 wrote:

      Things going slow are a pain in the bum and you have an interesting one...
      Usually a new hard drive can speed things up but you have tried that....

      Have a look at the services being used....a lot of the time you can turn off the ones you don't use, and that can fix things....

      Look at this thread for speeding up boot times etc...I have made things a lot better by using this...not to be used on SSD drives, though....

      If you have gone back to the original installation....have you removed the bloatware etc.?

      Make sure you have no indexing going on....it can really slow down a computer....

      BTW please give me some time regarding zfs...bit busy at the moment and haven't got much time to work on things.....

      Not saying that you are using Java, but if that has problems you can have many instances of that service running and the computer will stand still...just an outside-of-the-box thought...
      I've reset to factory defaults once before and that pig was never this slow, even with the bloatware still on it. (Which I removed AGAIN. Between the bloatware and Windows updates, what a time-wasting PITA....)

      Anyway, sometimes it freezes for a few seconds or so. That phenomenon, freezing (as if I asked it to solve Pi to infinity), is new. It's been happening over the last 6 months or so and, as time goes on, the frequency is increasing. I may swap out the power supply just to be sure (if the PS is getting dirty, with AC ripple on a voltage rail, that would do weird things). In any case, given its age and the low cost of refurbs that are much more up-to-date, it may be time to pitch it.
      _______________________________________________

      On the ZFS thing, what you're contemplating is a production-environment setup. And depending on what you decide to adopt, it may change the way you've been doing things. Accordingly, it makes sense to take your time and think it through.

      I'm on the forum regularly and, since winter is setting in, I won't be running the road nearly as much.
    • Hi again!
      Up early! Could not sleep...got paperwork to do....help....
      Another thing you could do is try Driver Verifier to see what gives...follow this thread for that, he is brilliant. When reinstalling an OS like Vista, XP, or Windows 7, I almost never use the recovery software....
      I use an updated version of the software and then activate the license...

      I often use this software to update older versions of an OS...the problem with Microsoft is they don't think we have any rights....when we purchase a computer with their OS, that should be good enough for Microsoft.....not a chance....they have made it harder to update older versions of Windows by introducing security updates that make your life impossible...

      I now have Windows update turned off on my Windows 7 machines and I keep a check on the updates that I install...to avoid telemetry updates....

      Microsoft won't get it into their heads that a lot of us don't want Windows ******* 10, but will do everything to make us have it....

      As a company they are **** and I lost respect for them a long time back...

      Run extensive hardware tests to see what gives....

      You seem to be like me and like answers to a riddle....

      I am going to try and have time for ZFS testing this weekend....

      bookie56
    • Hi guys :)
      I have been doing some testing with zfs and having a few problems....
      Have a standard install of Debian 9 in VMware and have installed zfs and it is working...

      I have been following a "How To" seen here and things have gone OK up to a point...

      Here is the state of the pool I have created at the moment:

      Source Code

      root@debian:/home/martyn# zpool import e35pool
      root@debian:/home/martyn# zpool status
        pool: e35pool
       state: ONLINE
        scan: none requested
      config:

              NAME        STATE     READ WRITE CKSUM
              e35pool     ONLINE       0     0     0
                mirror-0  ONLINE       0     0     0
                  sdb     ONLINE       0     0     0
                  sdc     ONLINE       0     0     0
                mirror-1  ONLINE       0     0     0
                  sdd     ONLINE       0     0     0
                  sde     ONLINE       0     0     0
                mirror-2  ONLINE       0     0     0
                  sdf     ONLINE       0     0     0
                  sdg     ONLINE       0     0     0
                mirror-3  ONLINE       0     0     0
                  sdh     ONLINE       0     0     0
                  sdi     ONLINE       0     0     0
                mirror-4  ONLINE       0     0     0
                  sdj     ONLINE       0     0     0
                  sdk     ONLINE       0     0     0

      errors: No known data errors
      root@debian:/home/martyn#



      I thought I would try exporting my pool and then importing it again by-id:

      Source Code

      root@debian:/home/martyn# zpool export e35pool
      root@debian:/home/martyn# zpool status
      no pools available
      root@debian:/home/martyn#
      Then I try to import it again by-id:

      Source Code

      root@debian:/home/martyn# zpool import -d /dev/disk/by-id e35pool
      cannot import 'e35pool': no such pool available
      root@debian:/home/martyn#
      But if I run zpool import it is there:

      Source Code

      root@debian:/home/martyn# zpool import
         pool: e35pool
           id: 13017184783498490140
        state: ONLINE
       action: The pool can be imported using its name or numeric identifier.
       config:

              e35pool     ONLINE
                mirror-0  ONLINE
                  sdb     ONLINE
                  sdc     ONLINE
                mirror-1  ONLINE
                  sdd     ONLINE
                  sde     ONLINE
                mirror-2  ONLINE
                  sdf     ONLINE
                  sdg     ONLINE
                mirror-3  ONLINE
                  sdh     ONLINE
                  sdi     ONLINE
                mirror-4  ONLINE
                  sdj     ONLINE
                  sdk     ONLINE
      root@debian:/home/martyn#

      Is it just a case of things changing with the development of zfs?

      Does anyone know what the problem is and how I can fix it?

      bookie56
    • So the question seems to be:
      zpool import -d /dev/disk/by-id e35pool doesn't work

      where

      zpool import
      works (and we'll set aside the possibility of a hidden or special character, that can't be displayed)
      __________________________________

      In looking for an answer to a low-level nuts-and-bolts issue like that, you might be on the wrong forum. It might be best to post the above on a "ZFS on Linux" forum. Perhaps there's something related in their user discussion archives.

      Things to think about:
      - When running a VM on any hypervisor, in most cases, you're using software to fool software into believing it's running on hardware. As a consequence, unintended interactions may crop up. I'll use VM's for a proof of concept but, when digging down into the nuts and bolts of virtualized hardware, bizarre things are found. (And, really, it doesn't matter how good the virtualization package is.)
      - ZFS is an external package, not native to Linux, which means there may be version issues. ((This might be one of the reasons for the behavior observed. A minor bug in an older version?))
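      On that version point: it's worth recording which ZFS release is actually running before chasing behavior differences between tutorials and your box. On Debian, something along these lines should do it (package names may vary depending on how ZFS was installed):

      ```shell
      # Version of the ZFS kernel module actually loaded
      modinfo zfs | grep ^version

      # Installed ZFS-related packages
      dpkg -l | grep zfs
      ```
      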

      While I realize you were following a tutorial:
      In a small business production environment, why would you want to export/import a pool? The only reason I can think of is, physically "migrating" a pool to another machine. While this may be a common occurrence in data centers, in a SOHO environment, it should only be needed after a failure. If an pool import doesn't work, on a new platform, that's what backup is for. :)

      On the other hand, perhaps there's a ZFS expert on the forum. To get their attention, re-post the above to Storage, General or Storage, RAID.

      (However, I have to hand it to you. You took the "testing" suggestion seriously. :D )
    • Hi flmaxey!
      Yes, I hear what you are saying but, by the same token, I want to test every aspect of ZFS and will definitely go on their forum and see if I can get a better understanding of the problem....

      I don't want this in a production environment until I am happy with how the nuts and bolts work...

      Thanks for your comments!

      Always appreciated....;)

      bookie56
    • bookie56 wrote:

      Hi flmaxey!
      Yes, I hear what you are saying but, by the same token, I want to test every aspect of ZFS and will definitely go on their forum and see if I can get a better understanding of the problem....

      I don't want this in a production environment until I am happy with how the nuts and bolts work...

      Thanks for your comments!

      Always appreciated....;)

      bookie56
      I'm glad you didn't take my post wrong. After I posted it and re-read it... jeez. I wasn't passing judgement.
      ______________________________________

      I would say this, however: since it's an independent package, if/when you set up a ZFS array, I wouldn't upgrade the ZFS package thereafter. As I remember, you mentioned this before in the thread: "If it works, don't fix it." (Or something along those lines.) That's a good policy.
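      If you want to enforce that "don't upgrade ZFS" policy rather than just remember it, Debian's apt can pin the packages in place. A sketch (the exact package names depend on how ZFS was installed on your system):

      ```shell
      # Prevent apt from upgrading the ZFS packages during routine updates
      apt-mark hold zfs-dkms zfsutils-linux

      # Later, when you deliberately decide to upgrade:
      apt-mark unhold zfs-dkms zfsutils-linux
      ```
      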

      At this point, even when I upgrade OMV and Debian packages, I make sure I have a clone of the old boot disk. (In my case, a USB 3.0 drive.) If needed, I want to be able to punt.

      When it comes to my ZFS mirror, the ZFS version is going to stay the same until 1 of 2 things happen:
      1. The array dies or is recreated.
      2. I have high confidence that an upgrade will be completely transparent.

      (I suspect it will be "1" - probably a rebuild a few years down the road.)
      ______________________________________

      On bizarre things in a VM:
      In a VirtualBox VM, I booted into it using GParted and looked at the virtual disks. GParted acknowledged the existing partitions and a Linux RAID array, but it didn't like the superblock and a couple of other details, as I remember. Bottom line, a VM software simulation of hardware is far from perfect.

      In any case, here's hoping you have a good outcome.
      (Since you're crossing your t's and dotting your i's, I expect that it will go well.)
    • Hi flmaxey!
      No chance...I appreciate your time, as I do everyone here, even tkaiser, who seemed to think I was ungrateful....

      I will look at the restrictions of testing ZFS in a virtual machine....and even remove the pool and add it back directly with disk IDs...

      Just want to get a basic feel for the commands and the information portrayed in the terminal...

      I will even check the ZFS package I have installed at the moment to keep tabs on that....

      I am always on top of having backups of my system disks whether it is in Windows or Linux...and I have taken several at different stages while I set up Debian 9 on my work server....

      I have noticed that this time installing Clonezilla, which I usually use, went without a hitch, and I am not sure why yet....but it works....lol

      As soon as I get this set up in production I will capture my system drive again....so that I have a quick reset....

      bookie56
    • Hi again!
      Well, the problem with adding my drives in VMware by-id was a simple one, as it turned out...

      I needed to edit the VMware .vmx file and add the following to enable adding drives in ZFS by-id:


      Source Code

      disk.EnableUUID = "TRUE"

      Then I had no problem running the following:

      Source Code

      root@debian9:/home/martyn# zpool import -d /dev/disk/by-id e35pool
      root@debian9:/home/martyn# zpool status
        pool: e35pool
       state: ONLINE
        scan: none requested
      config:

              NAME                                        STATE     READ WRITE CKSUM
              e35pool                                     ONLINE       0     0     0
                mirror-0                                  ONLINE       0     0     0
                  scsi-36000c29ed34070cd8c9a15eaaa336c08  ONLINE       0     0     0
                  scsi-36000c291ff225f9a09c88ec36e2326f7  ONLINE       0     0     0
                mirror-1                                  ONLINE       0     0     0
                  scsi-36000c29f2e34da942e891a5fcbf8d7e3  ONLINE       0     0     0
                  scsi-36000c2972b2409ec5da5bc20a44405be  ONLINE       0     0     0
                mirror-2                                  ONLINE       0     0     0
                  scsi-36000c2963f8711d61e9997fd25f36891  ONLINE       0     0     0
                  scsi-36000c29681ae7a852b3318af44e7f495  ONLINE       0     0     0

      errors: No known data errors
      root@debian9:/home/martyn#

      bookie56
    • flmaxey wrote:

      The entries highlighted, in the following, will need to be changed.

      When you click on a line, an entry line opens up with save and cancel buttons. Type each entry as you see them highlighted in the following and click on save. When all entries are modified and saved, click on the save button at the very bottom.

      ((The following capture is from my active server so the name and mountpoint are /ZFS1, versus just /ZFS in the capture above. The mountpoint was set when you created the pool. It does not need to be edited here.))


      After saved, and after the above dialog box clears, click on your ZFS pool and Edit again, and recheck these entries. If they're not perfect when typed in and saved, an error entry will not take. Going out and coming back in again will reveal if any errors were made. Fix as/if needed.
      @flmaxey: Very detailed instructions! But one addition: these days it is recommended to set compression not to "on" but to "lz4". The advantage is that it checks whether a file is already compressed; in that case compression is aborted at an early stage. Therefore it doesn't hurt to use lz4 compression even for already-compressed content.
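      For anyone following along, switching an existing pool over is a one-liner; it only affects newly written data, so existing files stay as they are (pool name ZFS assumed from the instructions above):

      ```shell
      # Use lz4 instead of the default compression algorithm
      zfs set compression=lz4 ZFS

      # See how much space compression is actually saving
      zfs get compressratio ZFS
      ```
      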
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • flmaxey wrote:

      In a small business production environment, why would you want to export/import a pool? The only reason I can think of is, physically "migrating" a pool to another machine.
      I have a different opinion. There are other reasons conceivable for wanting to export a pool, in situations where someone wants to be absolutely sure not to threaten the data in the pool (mounting the pool read-only, doing some "housekeeping" on the filesystem where the pool is normally mounted, and so on).
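      To illustrate the point: an export followed by a read-only import gives you a "hands off the data" mode for that kind of housekeeping. Roughly (using bookie56's pool name as the example):

      ```shell
      # Detach the pool cleanly
      zpool export e35pool

      # Bring it back read-only: nothing can modify the data while you work
      zpool import -o readonly=on e35pool

      # When done, export and re-import normally for read-write operation
      zpool export e35pool
      zpool import e35pool
      ```
      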
    • bookie56 wrote:

      Hi again!
      Well, the problem with me adding my drives in VMware by-id was a simple one as it turned out...

      I needed to edit the VMware .vmx file and add the following to enable adding drives in ZFS by-id:
      It was a VM issue!!.... The old saying applies here: it's better to be lucky than smart. :D


      Frankly, I'm amazed that you ferreted out such a detail.