Posts by jollyrogr

    "Create a share named 'bay_3_$UUID' to identify a physical disk" - you should probably just use stickers with the UUID on them. Of course, a user-editable label/name is nice for those without proper backplane modules.

    Tags work for me, and I can also use the ledmon tool to flash the LED on /dev/sdX so I can identify which bay the drive is in. My server has 24 bays so I need to have a method to stay organized. Using share names as you've suggested would break mergerfs, so that is not an option for me.
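
    For anyone who wants to try the same thing: the LED control comes from the ledctl tool in the ledmon package. A minimal sketch, assuming a backplane with enclosure management and that the drive you want to find is /dev/sdc (adjust the device to suit):

    Code

    # install the ledmon package (Debian/OMV)
    sudo apt install ledmon

    # blink the locate LED on the bay that holds /dev/sdc
    sudo ledctl locate=/dev/sdc

    # switch the locate LED off again once you've identified the drive
    sudo ledctl locate_off=/dev/sdc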

    Another thing to keep in mind is that SnapRAID does not restore ownership, permissions, or extended attributes. The restored files will be owned by the user that ran the fix command, typically root, and their permissions will be set to mode 0600.

    Good point. I think this can be easily fixed using the resetperms plugin.
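
    If you'd rather not use the plugin, the same result can be had from the shell. A sketch only, assuming the recovered files live under /srv/dev-disk-by-label-disk1/shared and should belong to the (made-up) user myuser and group users:

    Code

    # hand ownership back to the original user and group
    sudo chown -R myuser:users /srv/dev-disk-by-label-disk1/shared

    # directories need the execute bit to be traversable, files do not
    sudo find /srv/dev-disk-by-label-disk1/shared -type d -exec chmod 775 {} +
    sudo find /srv/dev-disk-by-label-disk1/shared -type f -exec chmod 664 {} +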

    I've never tested it, but I believe you're right. SnapRAID restores files only; it doesn't do an "image"-based restoration. So a drive larger than 2 TB, with a good-sized pad, should be sufficient. (I'd go with 3 to 4 TB just to be sure.)

    SnapRAID's online manual seems to indicate that you just need enough space to hold the recovered data.


    DO NOT run a sync until you have repaired the array and the data is restored. You will need a drive to replace the failed one. crashtest, correct me if I'm wrong, but it wouldn't need to be 16 TB, just large enough to hold the data that was there previously?
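
    For reference, the repair itself is just a fix run limited to the replaced disk. A rough sketch, assuming the failed data disk is called d3 in snapraid.conf and the new drive is already formatted and mounted at the old mount point:

    Code

    # see what SnapRAID thinks the array looks like
    sudo snapraid status

    # rebuild everything that lived on the failed disk onto the new one
    sudo snapraid -d d3 -l /root/fix.log fix

    # audit the recovered files against their stored hashes
    sudo snapraid -d d3 -a check

    # only once the fix and check have completed cleanly, sync again
    sudo snapraid sync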


    I would buy another drive to fix your array and then RMA the bad one. When you get the replacement drive, keep it as a spare for the next time you have a failure.

    I use WireGuard. This way my phone is always on my home network, so I can use my Pi-hole DNS 100% of the time and eliminate snooping by my phone provider.


    And in the event that I need to administer something on my home network while away from home, I can SSH to any box in a few seconds.
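
    To give an idea of how little is involved, here is a sketch of a phone-side WireGuard config. The keys, addresses and hostname are placeholders; it assumes the Pi-hole answers DNS on 10.0.0.2 and the WireGuard server is reachable at vpn.example.com:

    Code

    [Interface]
    PrivateKey = <phone-private-key>
    Address = 10.8.0.2/32
    # send every DNS query to the Pi-hole on the home LAN
    DNS = 10.0.0.2

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    # route all traffic through the tunnel so nothing leaks to the carrier
    AllowedIPs = 0.0.0.0/0, ::/0
    PersistentKeepalive = 25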

    At this point I'd probably wait until 7 is final and then do a fresh install of that.


    You could insert a new drive and recreate parity on it. My preference would be to copy the parity to the new drive and then replace the drive in the array. Either way should work but I think copying parity would be faster and result in less wear and tear on the drives.
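
    A sketch of the copy approach, assuming the old parity disk is still readable, the new disk is mounted at /srv/parity-new, and the parity file uses the default snapraid.parity name:

    Code

    # copy the existing parity file over; rsync can resume if interrupted
    sudo rsync -a --progress /srv/parity-old/snapraid.parity /srv/parity-new/

    # then point the parity line in /etc/snapraid.conf at the new disk:
    #   parity /srv/parity-new/snapraid.parity

    # the following sync only has to cover whatever changed since the copy
    sudo snapraid sync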

    It appears you did not understand what has been said. SnapRAID is not a filesystem. I use it on top of an ext4 filesystem and it works very well. I absolutely recommend it for home users.

    jollyrogr We can go round in circles about ZFS & ECC all night if you want to, but it's an old canard and a waste of time. The latest OpenZFS statement is here:


    https://openzfs.github.io/open…to-use-ecc-memory-for-zfs

    Haha, whatever. IMO ZFS itself is a waste of time. Running an enterprise file system on shit hardware - play stupid games, win stupid prizes. It reminds me of the people running a NAS on a Raspberry Pi with USB drives in a RAID 5 looking for help when the array fails. That statement should be revised to say "for home users, ZFS is neither needed nor recommended". But they would never say that, because it would eliminate the bulk of their user base.

    It wasn't really a serious question, but I wonder what would happen if you had this sort of failure mid-sync?

    SnapRAID

    What happens if a disk breaks during a "sync"?

    You are still able to recover data. In the worst case, you will be able to recover as much data as if the disk had broken before the "sync". But if the "sync" process had already run for some time, SnapRAID is able to use the partially synced data to recover more. To improve the recovery you can also use the "autosave" configuration option to save the intermediate content file during the sync process.



    So I interpret that as: you can recover 100% of the previously synced data, plus as much of the new data as had been synced before the disk failed. This is why I copy files to the array and sync before removing the original from the desktop/laptop.
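
    The autosave option mentioned in the FAQ is a single line in /etc/snapraid.conf. A sketch, with 500 (GB of processed data between saves) being nothing more than an example value:

    Code

    # write out the content file every 500 GB processed during a sync,
    # so an interrupted sync loses less progress
    autosave 500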

    Whether or not you need ECC memory is not unique to ZFS. It can be applied to any file system. The only difference is that the ZFS documentation placed more emphasis on this aspect and the others did not.

    Agree with the first two sentences. 3rd one not so much.


    Quote from Joshua Paetzel, one of the FreeNAS developers:

    Quote

    ZFS does something no other filesystem you’ll have available to you does: it checksums your data, and it checksums the metadata used by ZFS, and it checksums the checksums. If your data is corrupted in memory before it is written, ZFS will happily write (and checksum) the corrupted data. Additionally, ZFS has no pre-mount consistency checker or tool that can repair filesystem damage. [...] If a non-ECC memory module goes haywire, it can cause irreparable damage to your ZFS pool that can cause complete loss of the storage.

    So what happens if your parity drive dies along with a data drive? Just kidding, but I've always looked at this as horses for courses. Did anyone ever try to chart the pros and cons of the various methods of handling multiple drives in OMV, alongside criteria for deciding the best choice?

    I recover both. I have two parity drives and eight data drives at the moment, so I can recover from a failure of any two. As my array expands I'll probably look to add a third parity drive.
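
    For context, the second parity is just one extra line in snapraid.conf. A trimmed sketch with made-up mount points:

    Code

    # each parity level lives on its own dedicated disk
    parity   /srv/parity1/snapraid.parity
    2-parity /srv/parity2/snapraid.2-parity

    # content files, kept in more than one place for redundancy
    content /var/snapraid.content
    content /srv/data1/snapraid.content

    # the data disks the parity above protects
    data d1 /srv/data1/
    data d2 /srv/data2/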