Solved? OMV and software RAID 5

  • flmaxey:
    Sorry...please read your quote...seemed to have added my info there....;)

    Really, as it seems from the thread, your considerations may be more about data organization than anything else. Of course getting the right storage structure would help, and go a long way toward preventing a potential disaster.

    Data organisation is easier if you have drives pooled as one; otherwise you have to allocate one drive for a - f, another for g - k, etc., but the risk is you will always have more of one letter than another.....so pooling drives solves this problem....Another thing is maybe a - f has 20GB left but the image you want to save is 35GB....;)


    tkaiser has opened my eyes to the need to restructure, and ext4 doesn't give the advantages of ZFS....



    As you can imagine this undertaking is going to take time and I don't want to have to redo things for a while....I always have a new backup of my system drives on all computers if I have upgraded the OS....so I can reset in case of problems.....


    I am better at maintaining my hardware than my software and files, and need to improve on that.....BUT that doesn't mean I haven't got backups of all important files....


    bookie56

    • Official Post

    I'm out the door again, for a few days. I'll get back to you, on return.
    Until then:
    ________________________________________________________


    Repurposing older PC hardware - it's what I'm doing these days to get around buying new hardware on a regular basis. (I see nothing wrong with re-furb's. :D ) While I enjoy a PC game, from time to time, nearly any PC built in the last 10 years will run the older games. (C&C and others.)


    On a side note regarding "gaming computers":
    I'm stunned at both the realism in today's games and the beef in the hardware required to run them. GPU's are rivaling CPU's for processing power... It's simply amazing, and so are the costs. ($400+ GPU's .......)


    If I were you, I'd do the same thing for building file servers. Hardware laying around? Install OMV and use it. Outside of a data center (where ECC is a real requirement), consumer PC's make fine file servers.


    On the fix and repair thing - I've found in both commercial and consumer electronics, the majority of failures are power supplies or, otherwise, are power related. (I'm sure you're more than aware of that.) Beyond power issues, in PC's where hardware/software interaction can cause problems, things can get real interesting.
    (While I should drop weird issues and move on, I've spent many hours puzzling out "what happened".)
    ______________________________________________________


    Some thoughts on using any pooling technique:
    The convenience associated with using pooling techniques can be far offset by the agony of trying to recover from a failure. If trying to recover a single disk, in most non-CoW disk formats, there are plenty of utilities out there. When trying to recover a disk that was part of a pool, well, you "might" find something or you might not. In this respect mergerFS shines, because pooled disks are independent with their file systems intact, below the pool. In essence, you can bust a mergerFS pool and everything is still there. (But it might be a job to sort it all out, in finding where mergerFS stashed folders and files.)
    So, is it better to reorganize your data, or become dependent on a pooling technique, or create some combination of the two? I believe that's the question.
    ((With the above noted, with solid regularly scheduled backup, many potential recovery nightmares simply fade away. With solid backup, you can pool to your heart's content.))


    As noted in the last post, I believe you're in a use case that fits a (ZFS) RAID scenario. From there, it would be a matter of the size of vdev's and disk sizes you're comfortable with. Personally, until they've been vetted over more time, I wouldn't use 8TB drives, period. I see 6TB as the absolute upper limit, with 4TB to 2TB being preferred. Why? As drives get larger, failure is more likely. Trusting a huge drive with that much data becomes a risk / benefit trade off that I don't think is worth taking. (But that's simply my opinion.)


    On the organization issue:
    Sorting images alphabetically is one way to organize. Have you given thought to organizing images by year, with monthly sub-dirs? (You could search on customer names, if you set the customer name as part of the image file name.) Such an approach would also give you an indication of when you could purge old data. That scheme could be further divided by image type (Win8, Win7, Vista, etc.). In any case, if you can come up with a better way to divide up the data store, that's logical and intuitive to you, the benefits are obvious. Just some thoughts.


    I'll look closer at your answers and numbers when I get back.

    • Official Post

    On a side note:
    Resetting to factory defaults, in the Windows world, is a "PITA". Generally it means hours of re-updating the OS, maybe adding a svc pack back in, removing all the badger-ware, other software "offers", and similar junk. Then some sort of decent firewall and virus scanner must be loaded. And all of that is necessary just to get a PC into a usable state where app's can be loaded. Still more time is involved in that. Again, a royal PITA.
    So, what you're doing in cloning prebuilt app populated images, for customers, makes sense. I'm sure they appreciate it and it's good for business.


    _______________________________________________________________


    On the storage requirement:
    After looking at your total array size (4x2TB = 8TB of disks = a 6TB array), you must have something between 5 and 6TB of data.


    BTW: I see 75% available space filled, on an individual drive, drive pools, or array's, as "full". Any number of things can happen to where the remaining 25% can be filled quickly and cause trouble. ZFS, or any "copy on write" file system, does exactly that - copies on a write - so a reasonable chunk of free space is required. To my way of thinking 25% free space defines "reasonable".
    I provision for storage starting at 25% fill, and start looking at expansion when the fill percentage exceeds 55 to 60%. Again, the 25 - 75% start and end points are just my opinion.


    And I agree with your point that you may not need ZFS on your server. It's just a matter of shuffling or allocating your images in a way that makes sense to you, uses your available drive space well and, the most important part, backing it up. But for the sake of discussion....


    Based on a 4x4TB drive array, what I consider to be the safe limits for RAIDZ1 (or RAID5 equivalent), you'd have a 12TB array. Even if you stretched the array to 5x4TB disks (I believe that's risky), it would be 16TB. 6TB disks in the array might get you into a size that you might like but, in terms of a single drive failure that can cascade into an array failure, the risk really starts to climb. (So does the cost.) Regardless, ZFS will allow you to pool vdev's and achieve truly enormous pools; however, on any one server, I (personally) would be reluctant to put that many "data eggs" in one basket. It's a matter of risk trade off's and what you're comfortable with.
    _____________________________


    In practical terms:
    Since you have hardware available, I'd seriously consider dividing up your data store between two servers. How? If the store is divided based on image file dates, the system you have in place (alphabetical by customer name) could remain unchanged. I'd consider creating a second server as an "archive server" running OMV. Again, everything (directory names, structures, etc.) would be a duplicate of your current data store, but anything over 2 years old (maybe 3 years old?) would be moved to the "IMAGE-ARCHIVE" server. With customer archived image folders shared to the network, in the event that an old customer needs a full restore, pulling the image onto the working server wouldn't take too long. (I'd make sure the Ethernet path between the servers is 1Gb/s, minimum. 100Mb/s would be too slow.)


    [Side note - finding cloned images with a specific date time stamp can be done in the CLI with the "find" command. An easier way might be to use the find function in WinSCP and specify a before or after date. As an example, the mask * <2014-01-01 generates a list of files and folders stamped older than Jan 1st, 2014.]
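    For a rough CLI sketch of the same idea (the path here is only an example; adjust it to your share):

    Code
    # list image files last modified before Jan 1st, 2014
    find /srv/images -type f ! -newermt "2014-01-01" -ls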


    On the organization end of it; symlinks could help with what you already have. You can, literally, drop in a symlink (a redirect to a folder on another drive or the root of the drive itself), and fill the second drive to near 100% capacity with the contents appearing to be in the first drive.


    drive 1 dir
    |-------> (Client_Images)
              |--------------> A - D (local dir on drive 1)
              |--------------> E - G (symlink to drive 2; all files and sub-dir's of the folder E - G actually reside on drive 2)


    (In the above, the link/shortcut would be E - G and it would point to, for example, /srv/dev-disk-by-label-2TB (or the Debian drive name/path equivalent if the server is not OMV).)
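    A minimal sketch of setting that up from the CLI (drive labels and paths are just examples):

    Code
    # create the folder that will actually hold the data on drive 2
    mkdir -p /srv/dev-disk-by-label-2TB/E-G
    # drop a symlink into Client_Images on drive 1, pointing at drive 2
    ln -s /srv/dev-disk-by-label-2TB/E-G "/srv/dev-disk-by-label-1TB/Client_Images/E - G"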


    The only limitation to symlinks, for allocating data, is the imagination. The down side is, without coming up with a symlink scheme that's easy for you to remember and understand (or, otherwise, document), several symlinks can become unwieldy. Lastly, you'd need to keep an eye on each drive's fill percentage, or automate a report that notifies you of the fill percentage from time to time.
    OMV has a feature for setting up E-mail notifications, with the results generated from a command line. That's useful for a LOT of things. BTW: WinSCP will set symlinks on most Linux boxes, including your main server.


    _______________________________________________


    So, in the bottom line, it seems as if you're warehousing a good sized chunk of data. Further, you have PC's (many of them consumer PC's) that will function as file servers but will not accommodate a lot of drives.
    (Your work server(s) may be exceptions.)


    I'm going on the assumption that your storage scenario is constrained by the following:
    - reasonable sized drives / drive costs
    - a reasonable number of disks in an array. (Also constrained by the number of drives a consumer PC can house.)


    Subject to my own biases, notions of data storage, safety, etc.:
    I'd recommend that you consider creating a second server, strictly for the storage of archived images. (Preferably those images that are old enough to where they're not likely to be used again.) This server could be one of your OMV machines, that will accommodate 3 or 4 drives. Other than shifting images to your archive server from time to time, and the occasional backup, it could even be powered off until you need it. If the box is off most of the time, but exercised once every few months, drive life can be quite long. (Note: don't use an SSD in a server that's powered off for extended periods.)


    While ZFS RAID is great for data preservation and self healing if you have solid backup, whether you use ZFS RAID on one or both of these servers (for compression) would be your call. As you take ZFS into consideration, remember, the cost of ZFS compression is at least one parity drive. Further, outside of intentional file duplication (copies=2), there's no point in using ZFS as the file system for a single drive. For a single drive file system, there's nothing wrong with EXT4. If you wanted file checksum scrubbing on a single drive, which would make bit errors noticeable, I'd use BTRFS.


    So what do you think?


    MergerFS and/or symlinks are still not out of the question. They can give you what you want, "pooling" with little to no downside. Give it some thought, as I'm out the door tomorrow. I'll be back next weekend.

    • Official Post

    A last note, since you have VMWare running:


    While I mentioned this before, you could build an OMV server in a virtual machine to do test operations. I have a virtual OMV build that I've used for testing ZFS / mdadm RAID / mergerFS, and for limited RAID failure scenarios, that has seven 5GB virtual disks. (While small, lots of disks can prove a number of RAID and pooling concepts.)


    Similarly, you could set up a 4x12GB disk ZFS RAIDZ1 array, resulting in 36GB of usable space, to test ZFS compression. Then copy a single 30GB drive image onto it and see what you get. If ZFS compression doesn't give back the cost of the parity drive, 12GB or more, using ZFS solely for compression would not be very compelling.
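    If you'd rather script that test than click through a GUI, a rough sketch might look like this (pool and device names are assumptions for a VM):

    Code
    # build a small RAIDZ1 pool from four ~12GB virtual disks
    zpool create testpool raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    zfs set compression=on testpool
    # ...copy a 30GB drive image onto /testpool, then see what compression bought you
    zfs get compressratio testpool
    zfs list testpool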

  • On a side note: Resetting to factory defaults, in the Windows world, is a "PITA". Generally it means hours of re-updating the OS, maybe adding a svc pack back in, removing all the badger-ware, other software "offers", and similar junk. Then some sort of decent firewall and virus scanner must be loaded. And all of that is necessary just to get a PC into a usable state where app's can be loaded. Still more time is involved in that. Again, a royal PITA.
    So, what you're doing in cloning prebuilt app populated images, for customers, makes sense. I'm sure they appreciate it and it's good for business.


    _______________________________________________________________

    Updates are always a pain, even on Clonezilla images, if they haven't been needed for a while...BUT customers are hopeless at keeping their software available...most of my customers can't remember what they have done with their version of Microsoft Office etc....so I like saving them the pain of fixing those problems...


    I like your idea of archiving older images to another computer...I can fix that....


    I will test the compression and get back to you on that one....




    Based on a 4x4TB drive array, what I consider to be the safe limits for RAIDZ1 (or RAID5 equivalent), you'd have a 12TB array. Even if you stretched the array to 5x4TB disks (I believe that's risky), it would be 16TB. 6TB disks in the array might get you into a size that you might like but, in terms of a single drive failure that can cascade into an array failure, the risk really starts to climb. (So does the cost.) Regardless, ZFS will allow you to pool vdev's and achieve truly enormous pools; however, on any one server, I (personally) would be reluctant to put that many "data eggs" in one basket. It's a matter of risk trade off's and what you're comfortable with.


    _____________________________

    I first bought 4x4TB drives and then, because I wasn't sure what I would be using, bought a fifth one....but if 4x4TB is the way to go...then I have a spare and I am sure I can find a use for that....lol




    The only limitation to symlinks, for allocating data, is the imagination. The down side is, without coming up with a symlink scheme that's easy for you to remember and understand (or, otherwise, document), several symlinks can become unwieldy. Lastly, you'd need to keep an eye on each drive's fill percentage, or automate a report that notifies you of the fill percentage from time to time.


    OMV has a feature for setting up E-mail notifications, with the results generated from a command line. That's useful for a LOT of things. BTW: WinSCP will set symlinks on most Linux boxes, including your main server.


    _______________________________________________

    As I said before I have been using PuTTY in Windows for access, but have used WinSCP before and will refresh my memory on that...


    I don't mind the idea of symlinks and will follow your examples..




    Yes, I hear what you are saying but it is the data integrity checksum in ZFS that tkaiser pointed out as a big plus for storing files....but if you know a better way with ext4 I am listening....and I have read that it is still early days for BTRFS....BUT if you think there is a scenario with BTRFS that will work for me....I am listening....


    If I understand what tkaiser said regarding the ZFS data integrity checksum....it does this as you save the original files, and when you do a backup that would show up any discrepancies between the two versions of the files...that is of course a big plus....Of course the checksum created is only as good as the original file's data....;)



    bookie56

    • Official Post

    I'm not trying to discourage the use of ZFS. I'm using it myself, but for different reasons. In my case, (I've mentioned this before) I have old data and it's completely beyond replacement. If lost, it's gone forever. Also, since retention is long term, I'm worried about "bitrot". How much of a threat is "bitrot"? It depends but, in most cases, it's not a huge threat. (A flipped bit that changes the color of a pixel, in a picture, might not even be noticed.) In your case, if a bit is flipped in a stored system file, well, that's another matter.


    Initially, I didn't want to deal with ZFS because it represents added complexity. The complexity I'm referencing, after initial setup, is not in daily use. The complexity I'm worried about is what may be involved in trying to recover from a disaster. Regardless, given its data preservation features, I decided to go with ZFS for one simple reason - with good backup on hand, I wouldn't even try to recover from a serious ZFS issue. Other than the barest of "try this" attempts, I'd simply rebuild and copy data back from backup. That's the "magic" behind solid, tested, backup. (I can't stress it enough.) With backup, zero drama is involved.
    _______________________________________________________________________


    After this discussion:
    - I imagine that you're going to dedicate a box to being a storage server and (potentially) another one as an archive server, with folder organization that's the same as the primary server, for housing 2 or 3+ year old images.
    - And, it seems, you're going with a RAIDZ1 array. [The ZFS equivalent of RAID5, for others who may be reading this...]
    _______________________________________________________________________


    Another note or two on organization:
    Smaller sized chunks are easier to deal with when compared to one or a few out-sized folders. Accordingly, on the alphabetical break down, give some thought to folders that are a single letter. (26 folders total.) Admittedly, "Z" and similar letters will have few users, if any, but in the commonly used letter spans, customer images will be split up into chunks that are easier to deal with. Smaller is better, it allows room to grow, and it still fits in your organization scheme. Breaking things up is easier to do now, beginning with a fresh build.


    As was discussed before, USB 3.0 drives work fine for boot drives and they save a SATA port. Also, as noted, you can have (literally) as many clones as you like at relatively low cost. If you want to give it a try, I can give you a few pointers. Otherwise, do a standard build from a CD. (I'd go with the latest 3.X OMV version. The last I heard, ZFS hasn't been ported to 4.0 yet.)
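    If you do go the USB boot drive route, cloning a stick is nothing exotic; a rough sketch (device names are placeholders; confirm them with lsblk before writing anything):

    Code
    # clone the OMV boot stick /dev/sdX onto a spare stick /dev/sdY
    dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync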
    ______________________________________________________________________


    Setting up ZFS on OMV is pretty straightforward.


    - I'd start with a clean build.
    - Load the OMVextras plugin.
    - Then, install the ZFS plugin.
    (If you need help with the above, advise. I'll add more detail.)


    To get started, with your 4X4TB disks installed:
    - Under, Storage, Physical Disks;
    Click on the drives to be included in the array (/dev/sdb, etc) and Wipe each drive, one at a time. Using "Quick" works fine. (While I don't know if it's possible, don't fat finger it and try to wipe your boot drive!)


    Under Storage, ZFS, click on Add Pool.
    You'll get a dialog box as follows. The Pool Name and Mountpoint can be anything you like but the / as the start of the Mountpoint is not an option. Note the highlighted entries below and select accordingly.




    Save it and the pool is created. It will appear in the Overview tab, under ZFS. With 4x4TB, you should have something in the neighborhood of 12TB or a bit less.
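    For reference only, creating a comparable RAIDZ1 pool from the CLI would look roughly like this (a sketch; pool name, mountpoint, and device names are assumptions, and the plugin may well use /dev/disk/by-id paths instead):

    Code
    zpool create -m /ZFS ZFS raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    zpool status ZFS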
    ___________________________________


    Note that ZFS was born on Solaris which, when it comes to file and folder permissions, is a bit different from Linux. I'll assume you want Linux permissions (a very good idea, BTW), and that you want to turn compression on, so a few ZFS attributes need to be edited.


    Click on / highlight your newly created ZFS pool in the Overview tab, then click on Edit. The following dialog box appears.
    The entries highlighted, in the following, will need to be changed.


    When you click on a line, an entry line opens up with save and cancel buttons. Type each entry as you see them highlighted in the following and click on save. When all entries are modified and saved, click on the save button at the very bottom.


    ((The following capture is from my active server so the name and mountpoint are /ZFS1, versus just /ZFS in the capture above. The mountpoint was set when you created the pool. It does not need to be edited here.))


    After saving, and after the above dialog box clears, click on your ZFS pool and Edit again, and recheck these entries. If an entry isn't perfect when typed in and saved, it will not take. Going out and coming back in again will reveal if any errors were made. Fix as/if needed.
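    For reference, the CLI equivalent of the usual edits is roughly this (a sketch; the exact property list in your dialog may differ):

    Code
    zfs set acltype=posixacl ZFS   # Linux POSIX ACLs instead of the Solaris-style default
    zfs set xattr=sa ZFS           # store extended attributes efficiently
    zfs set compression=on ZFS     # turn compression on for the pool
    zfs get acltype,xattr,compression ZFS   # confirm the changes took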


    That's about it for a RAIDZ1 - ZFS pool, for your scenario.
    _________________________________________________________


    To take advantage of ZFS error detection, you'll need to activate scrubbing routines on a schedule and, to get a heads up if something is going wrong, you'll need E-mail notifications.


    Under System, Scheduled Jobs, I have the following jobs setup;




    Of note is that ZFS copies on a write and, in the process, does a checksummed file integrity check. Great, but that does nothing for files that haven't changed recently. (I.E., your image files will remain, for the most part, untouched.)


    (A summary of the jobs set above.)
    - A zpool scrub does checksum file integrity checks on files that have not been written to.
    - The zpool status reports what was found during the last scrub.
    - A zpool scrub -s command stops a scrub, which I believe is important if rebooting. Accordingly, in an unlikely event, this job stops any scrub operation that may still be underway, hung, etc., about 30 minutes before the monthly scheduled reboot.
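    In plain command form, the three jobs amount to something like this (ZFS1 is my pool name; substitute your own):

    Code
    zpool scrub ZFS1      # monthly: start a full checksum scrub of the pool
    zpool status ZFS1     # roughly 8 hours later: report the result of the last scrub
    zpool scrub -s ZFS1   # ~30 minutes before the scheduled reboot: stop any scrub still running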


    **In my case, with a bit more than 1TB in a mirror, with an i3 processor and 12GB RAM, a scrub takes about 2 hours. I schedule the zpool status command for about 8 hours later, when I know the scrub should be complete, and have the results mailed to me.
    A last note along these lines is that, with 6TB of data, you'd be looking at 12 hours of steady drive activity in a scrub. Given that amount of drive activity, I think doing a scrub once a month (followed by a status and, later, a reboot) is proactive enough.
    ____________________________________________


    For reporting you'd need the following:
    In your job, you'll need to set the Send email option


    And finally, to enable the server to send you notes through your ISP, you'll need to fill in the following:


    Under System, Notification:



    The sender, recipient, and username (with password) are your e-mail account. The rest depends on your ISP, their settings, etc.
    _______________________________________


    **BTW: If you happen to be in the Web GUI, you can click on ZFS, your pool ZFS, and Details. You'll get a summary of the last scrub and a list of your current settings.**


    If my ramblings are unclear, let me know, and let us all know how it went.

    • Official Post

    @bookie56



    Since it's in your experience, I have a question for you.


    I have a PC that's getting really old. It's an AMD Athlon, 2 core with 3GB, running Vista. I've been using it for office stuff, technical drawing with older packages, and running older games on it from time to time. A few months ago it started getting slow for no apparent reason. (Also note that I have two of the exact same model PC, bought for my two sons awhile back. The second PC doesn't have this bogged down speed problem.)


    I virus tested it off-line and never found anything serious (no root kits). Also, it came with its own hardware diag's for memory, CPU, video, etc., etc. I've tested it on multiple occasions and nothing showed an error. Just to get past the software bloat problem, I rebuilt it with Win 7 from scratch and replaced the hard drive with a new one. That didn't help so I went back to a factory reinstall of Vista, from original disks. Still, it's just damned slow for no explainable reason.
    ______________


    From what I know of modern day electronics (most of which is CMOS):
    I know that CMOS has a shelf life meaning that, unlike older TTL tech, it won't last indefinitely. However, with CMOS, a working life of at least a couple decades can be expected. As CMOS gets blown gates and otherwise degrades, it produces intermittent issues where it may work, or it may not, and produces bursts of errors, etc. So, while it's pure speculation on my part, I think that's what is going on with this box. I think it's simply "degrading" and getting worse over time as more gates degrade.


    Since you have had several exposures to multiple models and items, some of which must be fairly old, I have to ask; have you had similar experiences along these lines? (A completely stock, dog slow computer, with no explanation as to why?)

  • Things going slow are a pain in the bum and you have an interesting one...


    Usually a new hard drive can speed things up but you have tried that....


    Have a look at services being used....a lot of the time you can turn off the ones you don't use and that can fix things....


    Look at this thread for speeding up boot times etc...I have made things a lot better by using this...not to be used on SSD drives though....


    If you have gone back to the original installation....have you removed bloatware etc?


    Make sure you have no indexing going on....can really slow down a computer....


    BTW please give me some time regarding zfs...bit busy at the moment and haven't got much time to work on things.....


    Not saying that you are using java but if that has problems you can have many instances of that service running and the computer will stand still...just an outside of the box thought...


    bookie56

    • Official Post

    I've reset to factory defaults once before and that pig was never this slow, even with the bloatware still on it. (Which I removed AGAIN. Between the bloatware and Windows updates, what a time wasting PITA....)


    Anyway, sometimes it freezes for a few seconds or so. That phenomenon, freezing (as if I asked it to solve Pi to infinity), is new. It's been happening in the last 6 months or so and, as time goes on, the frequency is increasing. I may swap out the PS just to be sure (if the PS is getting dirty, with AC ripple on a voltage, that would do weird things). In any case, given its age and the low cost of refurb's that are much more up-to-date, it may be time to pitch it.
    _______________________________________________


    On the ZFS thing, what you're contemplating is a production environment setup. And depending on what you decide to adopt, it may change the way you've been doing things. Accordingly, it makes sense to take your time, think it through.


    I'm on the forum regularly and, since winter is setting in, I won't be running the road nearly as much.

  • Hi again!
    Up early! Could not sleep...got paperwork to do....help....
    Another thing you could do is try Driver Verifier to see what gives...follow this thread for that, he is brilliant. When reinstalling an OS...like Vista, XP, and Windows 7, I almost never use the recovery software....
    I use an updated version of the software and then activate the license...


    I often use this software to update older versions of an OS...the problem with Microsoft is they don't think we have any rights....when we purchase a computer with their OS, that should be good enough for Microsoft.....not a chance....they have made it harder to update older versions of Windows by introducing security updates that make your life impossible...


    I now have Windows update turned off on my Windows 7 machines and I keep a check on the updates that I install...to avoid telemetry updates....


    Microsoft won't get it into their heads that a lot of us don't want Windows ******* 10 but will do everything to make us have it....


    As a company they are **** and I lost respect for them a long time back...


    Run extensive hardware tests to see what gives....


    You seem to be like me and like answers to a riddle....


    I am going to try and have time for ZFS testing this weekend....


    bookie56

  • Hi guys :)
    I have been doing some testing with zfs and having a few problems....
    I have a standard install of Debian 9 in VMware and have installed ZFS, and it is working...


    I have been following a "How To" seen here and things have gone OK up to a point...


    Here is the state of my pool I have created at the moment:



    I thought I would try exporting my pool and then importing it again by-id:


    Code
    root@debian:/home/martyn# zpool export e35pool
    root@debian:/home/martyn# zpool status
    no pools available
    root@debian:/home/martyn#

    Then I try to import it again by-id:


    Code
    root@debian:/home/martyn# zpool import -d /dev/disk/by-id e35pool
    cannot import 'e35pool': no such pool available
    root@debian:/home/martyn#

    But if I run zpool import it is there:



    Is it just a case of things changing with the development of zfs?


    Does anyone know what the problem is and how I can fix it?


    bookie56

    • Official Post

    So the question seems to be:
    zpool import -d /dev/disk/by-id e35pool doesn't work


    where

    zpool import
    works (and we'll set aside the possibility of a hidden or special character, that can't be displayed)
    __________________________________


    In looking for an answer to a low level nuts and bolts issue like that, you might be on the wrong forum. It might be best to post the above on a "ZFS on Linux" forum. Perhaps there's something related in their user discussion archives.


    Things to think about:
    - When running a VM on any hypervisor, in most cases, you're using software to fool software into believing it's running on hardware. As a consequence, unintended interactions may crop up. I'll use VM's for a proof of concept but, when digging down into the nuts and bolts of virtualized hardware, bizarre things are found. (And, really, it doesn't matter how good the virtualization package is.)
    - ZFS is an external package, not native to Linux, which means there may be version issues. ((This might be one of the reasons for the behavior observed. A minor bug in an older version?))


    While I realize you were following a tutorial:
    In a small business production environment, why would you want to export/import a pool? The only reason I can think of is physically "migrating" a pool to another machine. While this may be a common occurrence in data centers, in a SOHO environment, it should only be needed after a failure. If a pool import doesn't work, on a new platform, that's what backup is for. :)


    On the other hand, perhaps there's a ZFS expert on the forum. To get their attention, re-post the above to Storage, General or Storage, RAID.


    (However, I have to hand it to you. You took the "testing" suggestion seriously. :D )

  • Hi flmaxey!
    Yes, I hear what you are saying but by the same token I want to test every aspect of ZFS and will definitely go on their forum and see if I can get a better understanding of the problem....


    I don't want this in a production environment until I am happy with how the nuts and bolts work...


    Thanks for your comments!


    Always appreciated....;)


    bookie56

    • Official Post

    I'm glad you didn't take my post wrong. After I posted it and re-read it, jeez... I wasn't passing judgement.
    ______________________________________


    I would say this, however. Since it's an independent package, if/when you set up a ZFS array, I wouldn't upgrade the ZFS package thereafter. As I remember, you mentioned this before in the thread: "If it works, don't fix it". (Or something along those lines.) That's a good policy.


    At this point, even when I upgrade OMV and Debian packages, I make sure I have a clone of the old boot disk. (In my case, a USB 3.0 drive.) If needed, I want to be able to punt.


    When it comes to my ZFS mirror, the ZFS version is going to stay the same until 1 of 2 things happen:
    1. The array dies or is recreated.
    2. I have high confidence that an upgrade will be completely transparent.


    (I suspect it will be "1" - probably a rebuild a few years down the road.)
    ______________________________________


    On bizarre things in a VM:
    In a VirtualBox VM, I booted into it using GParted and looked at the virtual disks. GParted acknowledged the existing partitions and a Linux RAID array, but it didn't like the superblock and a couple of other details, as I remember. In the bottom line, a VM software simulation of hardware is far from perfect.


    In any case, here's to hoping you have a good outcome.
    (Since you're crossing your t's and dotting your i's, I expect that it will go well.)

  • Hi flmaxey!
    No chance...I appreciate your time as I do everyone here even tkaiser who seemed to think I was ungrateful....


    I will look at the restrictions of testing zfs in a virtual machine....and even remove the pool and add it directly with disk id...


    Just want to get a basic feel for the commands and the information portrayed in the terminal...


    I will even check the ZFS package I have installed at the moment to keep tabs on that....


    I am always on top of having backups of my system disks whether it is in Windows or Linux...and I have taken several at different stages while I set up Debian 9 on my work server....


    I have noticed that this time the install of Clonezilla, which I usually use, went without a hitch and I am not sure why yet....but it works....lol


    As soon as I get this set up in production I will capture my system drive again....so that I have a quick reset....


    bookie56

  • Hi again!
    Well, the problem with me adding my drives in VMware by-id was a simple one as it turned out...


    I needed to edit the VMware .vmx file and add the following to activate adding drives in ZFS by-id:



    Code
    disk.EnableUUID = "TRUE"


    Then I had no problem running the following:
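    In other words, the by-id import from before now goes through, something like:

    Code
    zpool import -d /dev/disk/by-id e35pool
    zpool status e35pool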



    bookie56

  • @flmaxey: Very detailed instructions! But one addition: meanwhile it is recommended to set compression not to "on" but to "lz4". The advantage is that it checks whether a file is already compressed; in that case compression is stopped at an early stage. Therefore it does no harm to use lz4 compression even on already compressed content.
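    In practice that is a one-line property change (the pool name here is only an example):

    Code
    zfs set compression=lz4 ZFS
    zfs get compression,compressratio ZFS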

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • In a small business production environment, why would you want to export/import a pool? The only reason I can think of is physically "migrating" a pool to another machine.

    I have a different opinion. There are other conceivable reasons to want to export a pool, in situations where someone wants to be absolutely sure not to threaten the data in the pool (mounting the pool read only, doing some "house keeping" on the filesystem where the pool is normally mounted, and so on).


    • Official Post

    Hi again!
    Well, the problem with me adding my drives in VMware by-id was a simple one as it turned out...


    I needed to edit the VMware .vmx file and add the following to activate adding drives in ZFS by-id:

    It was a VM issue!!.... The old saying applies here: it's better to be lucky than to be smart. :D



    Frankly, I'm amazed that you ferreted out such a detail.
