ZFS Degraded Pool


    • I have no idea. This is beyond my experience.
      I've never tried to run two pools but this (an imported + a native pool) might be something that could be recreated in a VM.
      (More on that later.)
      ________________________________________________________

      On the existing pool:
Since you did a new OMV build from scratch, did you import the old pool, or was it simply "there"? Maybe the import process is required for config purposes?
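For reference, importing from the command line looks like this. A minimal sketch; "tank" is just a placeholder pool name:

# List pools that are visible on attached disks but not yet imported
zpool import
# Import one by name (-f forces it if the pool wasn't cleanly exported)
zpool import -f tank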

      As a practical approach, you could simply add a new mirror to your existing pool but, frankly, I would have done it the way you did. An independent mirror would allow moving it to a new PC later.
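If you did want to grow the existing pool instead, it's along these lines (a sketch; "tank" and the by-id paths are placeholders):

# Add a second mirror vdev to an existing pool; ZFS stripes data across both mirrors
zpool add tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
# Or create an independent pool, which can be exported and moved to another PC later
zpool create tank2 mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B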

      Maybe someone else with experience with this will chime in.
      Good backup takes the "drama" out of computing
      ____________________________________
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
      2nd Data Backup: OMV 3.0.99, R-PI 2B, 16GB boot, 4TB WD USB MyPassport - direct connect (no hub)
Hmm, I didn't see your last post until after I hit submit.

      Good, at least all is working.
It depends on what you have configured... Personally, I scrutinize every update. In my case, I chose not to update UrBackup when an update was offered (all was working fine as it was), and it turned out to be the right choice. The UrBackup dev team did something with a crypto library that didn't work with Debian Jessie. (It wasn't available in Jessie, as I recall.) I tend not to worry about run-of-the-mill Linux updates, the various libs, bind, etc. They tend to be transparent.

In any case, do you have a backup of the operating system? :) If you do, with a fall-back position in hand, it's reasonably safe to update. I maintain 3 copies of my USB boot drive. I update one of them and wait a week or two to make sure nothing was broken in the update. Then I clone the updated drive to the second drive. (Which I lay on top of the case, for a quick recovery if needed.)

I have a 3rd drive that stays in the drawer. It only gets updated on the rare occasion when I alter the actual function of the NAS, like adding a hard drive, a data share, or a new plugin. I update the 3rd drive well after I know the changes didn't break anything. The 3rd drive is my "Good grief, what did I do?" recovery drive. (We all can make a mistake now and then. :whistling: )
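For what it's worth, the cloning itself is nothing fancy. A sketch, assuming the updated stick shows up as /dev/sdX and the standby as /dev/sdY (check with lsblk first; dd overwrites without asking):

# Identify the two USB sticks before doing anything
lsblk
# Clone the updated boot drive onto the standby, then flush the writes
dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync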
      ___________________________________________

With a recovery path as described above, if something affects ZFS, fall back. However, outside of a full version upgrade (which I have yet to try), nothing I've done so far has affected ZFS. I've read, on this forum, that others have upgraded to OMV 4.X and the newer version of ZFS imported their pool without an issue. (There is a prompt to upgrade the pool to the newer ZFS version, but it's not mandatory. Your older pool version will work fine as it is.)
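For those who want to check from the command line, the pool reports when newer features are available, and the version bump is a deliberate step. A sketch; "tank" is a placeholder:

# Status will note if the pool can be upgraded, but it keeps working as-is
zpool status tank
# One-way step: only do this once you're sure you won't fall back to the older ZFS
zpool upgrade tank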

In fact, ignoring the prompt to upgrade the ZFS pool, it's possible to try out the OMV 4.X version update with a cloned OS, to see if it works for you. If it doesn't, fall back to OMV 3.X.
      ___________________________________________

      Edit:
      I'm going to take it that you're happy with the new rig. Any problems?
What do you think happened with the old one, with the intermittent errors? (The power supply? The Mobo?)


    • Hi flmaxey :D
      I have had some time to test the old rig and I can't find any problems....
It has four RAM slots and I have tested each and every stick in each slot without a hiccup... I don't see any errors from other hardware... but it is getting on a bit....
Anyway, I thought I would use it as an extra backup of other files....
I have six 1TB drives and obviously I don't want to use ZFS because it just kept on showing errors...
I have an old software RAID with LVM on an old server and the drives are still OK....
I am not looking for performance, just a little bit of fallback if a drive gets iffy...
I keep coming back to this question because I just have so much on... I don't remember everything....
I want to pool the drives but have some redundancy....
What is the most reliable option (not using ZFS) for this scenario, not forgetting it isn't about performance, just a little redundancy and combined access to the drives...
I think an extra safeguard when moving and replacing drives is adding them by device ID.....

      bookie56


    • I'm looking at this from a troubleshooting perspective:

If you're using the same PC (the one that showed errors with ZFS), I'd do an OMV installation to a USB drive, which bypasses the Mobo SATA controller.

A couple of data storage possibilities I'd entertain:

1. Format the disks BTRFS, which works great on single disks. (I'm using BTRFS on a single 4TB backup drive.) If you want a common mount point for a few drives, maybe use the UnionFS plugin. I'd avoid LVM because it restructures disk storage, which could be a real pain if a disk fails. (I know nothing about LVM volume recovery.)
Going this route, redundancy would have to be done with something like SnapRAID. And while I know nothing about recovery with SnapRAID, many of the experienced users on the forum use and endorse it. It seems to be popular.

2. Do a traditional mdadm RAID5 array and format it with BTRFS.
(Assuming one doesn't know the ins and outs of SnapRAID.) From an admin viewpoint, this would seem to be the best of both worlds. There's a bit of disk redundancy, with real bitrot protection, without a learning curve. The issue I'd have going this way would be the potential for the mdadm array to create a couple odd errors here and there. (This machine would need to be on a UPS; the write hole in traditional RAID is real.)
**I ran a quick test in a VM with 6 disks. It worked fine.** (The commands from the test are sketched below.)
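For the record, the VM test amounted to the following. A sketch; /dev/sd[b-g] were the six test disks in the VM, so adjust to your drives:

# Build a 6-disk RAID5 array from the data disks
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
# Watch the initial sync finish before loading it with data
cat /proc/mdstat
# Format the finished array with BTRFS
mkfs.btrfs -L store /dev/md0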
      ____________________________________________________

With either of the above, you can schedule BTRFS scrubs of the disk(s) using System > Scheduled Jobs, and even be notified if you like. I think I'd do scrubs every two weeks until some confidence is restored in the Mobo.
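As a sketch of the Scheduled Job command itself (assuming the array is /dev/md0): running the scrub in the foreground with -B makes the command finish before the job exits, so OMV's "send output" option can mail you the result.

# Foreground scrub; the summary output goes to the scheduled job's email
btrfs scrub start -B /dev/md0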

      In the second scenario (mdadm RAID/BTRFS), if I saw an error or two in a scrub, I wouldn't sweat it. The scrub would fix it. On the other hand, anything like what you were experiencing before with ZFS would be grounds for trashing that Mobo.

Following are the basic commands for BTRFS scrubs. (Assumes mdadm RAID.) And a couple "fix it" links.
# Start a scrub on the array (runs in the background)
btrfs scrub start /dev/md0
# Check progress and per-device error counts
btrfs scrub status -d /dev/md0
# Stop a running scrub
btrfs scrub cancel /dev/md0

      unix.stackexchange.com/questions/32440/how-do-i-fix-btrfs
      marc.merlins.org/perso/btrfs/p…fs-Filesystem-Repair.html
      ____________________________________________________

      I'm real curious about what you decide to go with, and how it goes. Let us know.
    • bookie56 wrote:

      Just as a point of reference....installing to a usb....isn't that frowned upon?
      Not if you install the flashmemory plugin right away.
      omv 4.1.11 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.11
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Look at the plugin after you install it. To finish it, there are a couple manual edits to /etc/fstab.
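If I remember right, the edits amount to adding noatime,nodiratime to the root filesystem line, something like the following (the UUID here is obviously just an example):

# /etc/fstab - root filesystem line, with noatime,nodiratime added
UUID=0afb8b40-521d-44d9-9ec7-6f7cdc448619 / ext4 noatime,nodiratime,errors=remount-ro 0 1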
    • I did a scrub of my backup disk and found 4 "unrecoverable" errors. I'll be looking into what that means and how to correct it.
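In the meantime, the kernel log is usually the place to see which files are behind the errors. A sketch:

# Unrecoverable checksum errors are logged with the affected path, when btrfs can resolve it
dmesg | grep -i 'btrfs'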


    • How does OMV mount my raid array in fstab?
      I can't make head or tail of this...

Source Code

root@DN-Storage2:~# nano /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=0afb8b40-521d-44d9-9ec7-6f7cdc448619 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=0d12f848-7675-45a9-a104-df6755c649b3 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
tmpfs /tmp tmpfs defaults 0 0
# >>> [openmediavault]
/dev/disk/by-id/md-name-DN-Storage2:0 /srv/dev-disk-by-id-md-name-DN-Storage2-0 btrfs defaults,nofail 0 2
# <<< [openmediavault]
Just so I am certain when things go pear-shaped....
Note... I say when... lol

      bookie56
    • Hi flmaxey!
      Please shoot me if I have asked this before....;)
I have redone another server... and would like to transfer files from my ZFS server.... Yes, I can do that via the network.... but it takes too long when it is several TBs.
I was wondering about mounting the ZFS pool from a Debian installation... I have a flash drive with Debian 9 and ZFS installed... just not sure how I could best mount the pool from my Debian installation so that files can be transferred to another drive connected to the server.....
I have mounted mdadm RAID 5 from my Debian USB but never tried ZFS....
      Got any suggestions?
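Something like this is what I was picturing, but untested... (the pool name and the paths are just placeholders):

# From the Debian 9 stick, see which pools are visible on the attached disks
zpool import
# Import by name; -f since the pool wasn't exported from the old box
zpool import -f tank
# Datasets should mount under the pool's mountpoint; then copy to the locally attached drive
rsync -aHv /tank/share/ /mnt/targetdisk/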
      bookie56
    • bookie56 wrote:

      How does OMV mount my raid array in fstab?
      I can't make head or tail of this...

      Just so I am certain when things go pearshaped....
This is what I have for a BTRFS-formatted RAID5 array. As it turns out, there's no reference in fstab to mdadm RAID.
      _____________________________________________________________________
      /-----/
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=b9d591f9-8d94-4484-920d-f5855aee1053 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=fd8acb43-a614-4cbc-b32b-b531b4ac81ab none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      /dev/disk/by-id/md-name-OMV-VM:BTRFSR5 /srv/dev-disk-by-id-md-name-OMV-VM-BTRFSR5 btrfs defaults,nofail 0 2
      # <<< [openmediavault]
      _______________________________________________________________________

The mdadm config is in /etc/mdadm/mdadm.conf.
Even at that, there's not much there: just the create mask and top-level details. I've never looked for member disks or the assembly information; I'm guessing some or all of that may be stored in the superblocks.
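If you want to see them, the assembly details can be pulled from the array and from each member's superblock. A sketch, assuming the array is /dev/md0 and /dev/sdb is one member:

# Top-level view: level, state, and the member disks
mdadm --detail /dev/md0
# Read one member's superblock directly (array UUID, device role, event count)
mdadm --examine /dev/sdb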