Posts by crashtest

    Anyway, does someone has some pointers as to what happened here?

It's hard to say what might have happened. There are a number of unknown factors involved, so any answer you get would be pure speculation.

    And what can I do to prevent something worse to happen, like the drive not working after that.

    This is the better question:

First, you might consider a UPS. A thin-client mini PC and an external drive would work fine on a smaller, less expensive UPS.

Second, you might consider looking at the hard drive's SMART stats for the drive's age, health, and other factors. Drives typically last between 5 and 7 years. (However, they're known to fail sooner or last longer.) SMART stats will provide you with the age of the drive and indicators of developing problems.
Finally, if you really want to keep your data, you need BACKUP. Of the things that fail most frequently in a PC, the most likely is a hard drive. So, if you want to keep your data, you need to duplicate it on another drive at a minimum.

    You might consider reading the Backups and Backup Strategy section in this -> doc . It will explain a few concepts.

    what do you mean by smart stats?

SMART stats, in the GUI, are found under:

    Storage, SMART, Settings

    (The Enable box must be checked)

    Then go to:

    Storage, SMART, Devices

    Select the suspected drive and click on the Edit Icon.

Monitoring Enabled must be checked.

    (You might want to enable monitoring on all drives.)

    Then go to:

    Storage, SMART, Devices

Select the suspected drive and click on the Show Details Icon.

    Click on the Extended Information Tab.

Scroll down. SMART stats are available under:

    SMART Attributes Data Structure, etc., etc.

    Copy the information, beginning with the above, down to the last numbered SMART stat.
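If you prefer the command line, the same attribute table can be read with smartctl from the smartmontools package. (The device name below is a placeholder; substitute your own drive.)

```shell
# Requires smartmontools (apt install smartmontools); run as root.
# /dev/sda is a placeholder -- substitute your own drive.
smartctl -A /dev/sda   # the SMART Attributes Data Structure table only
smartctl -H /dev/sda   # quick overall health self-assessment
smartctl -a /dev/sda   # full report: identity, health, attributes, logs
```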

    I logged out of Windows and logged in again and it still didn't prompt me for credentials.

Here's a "potential" explanation. The SMB/CIFS share is the first "gatekeeper" for over-the-network access to the share. If SMB/CIFS is set, for example, to "Guests Allowed", you won't be prompted for credentials. However, if the file / folder permissions of the underlying shared folder are set, for example, to READ, you won't be able to write to the share.

    Here's what you might consider doing:
    Go to the User Guide and start at -> Creating a Network Share.
Create a new Test share. You can name it anything you like but, in all other details, create it exactly as shown. At the end, as shown in the document, copy and paste some files into the share to verify that it works.

    With a working share, you'll be able to compare settings with your existing shares.

    This feels insecure

ACLs might be what got you into trouble.

    If you want everyone on your local LAN to be able to access and write to shares:

Under Storage, Shared Folders, ACLs, select the shared folder and go into Edit.

    - DO NOT check boxes at the top. If you have checked boxes, uncheck them.

    - At the bottom, make sure Group is set to Read/Write and Others is set to Read/Write.
    **If you have folders layered under the share, it may be necessary to set the permissions noted above, select the Recursive box and Save.**

    - Under Services, SMB/CIFS, select the share (that is layered on top of the shared folder) and go into Edit.

    Public should be set to Guests Allowed and Read Only should be unchecked if you want to write files to the share.

    If you're worried about restricting shares to specific users or groups, as chente has indicated, reading this might be useful -> NAS Permissions.

    Note that if you're using a Windows Client, on occasion, it may be necessary to log off and log on.

If the RAM is already in the box, since it's essentially free, I'd leave it in the box. A commercial server platform is not going to be power efficient in the majority of cases, so removing extra RAM wouldn't help very much.

On the other hand, you might consider disabling swap. With 256GB of RAM, I can't imagine a home scenario where you would need swap.
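If you do go that route, here's a sketch of disabling swap on a stock Debian install. (This assumes swap is mounted from /etc/fstab and not from a zram device.)

```shell
# Run as root. Stops swapping immediately:
swapoff -a
# Keep it off across reboots by commenting out the swap entry in fstab.
# The -i.bak flag leaves a backup copy at /etc/fstab.bak:
sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
```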

    One question, can I install OMV on my Windows 11 computer and run the LG nas 42b1?

If it were possible, you would install OMV (and Debian) on the LG NAS. OMV would then be controlled through a web browser console from your Windows 11 client.

    LG nas 42b1?

    I couldn't find the above model. Do you have an -> LG N4B2N?

    While it seems that LG NAS models are running Linux, I couldn't find anything on the internet about how to install Debian Linux on an LG NAS. (Debian Linux is required to install OMV.)

Well, until 2.1.14 is in the Debian repos, maybe I can do something to prevent the potential for ZFS cloning issues for OMV users. (You know, my little part in trying to help out.) Humm... :/ Maybe I'll try a tactic that I've used before, "documenting it".

    Attention ZFS Users!

Until you've upgraded to ZFS version 2.1.14, DO NOT ATTEMPT to CLONE a ZFS block device / filesystem. Doing so may result in data corruption!


    There... That should do it. :) 
    As we're well aware, users are good about following advice given in documentation. ^^

    ryecoaaron (Since we have you on the horn, so to speak).

If a ZFS pool's or filesystem's properties are changed, those changes do not have an effect on files that are already in the pool. Property changes only apply to files that are copied in after the change or to files that are altered (rewritten). Despite searching for it, I haven't found a way to "update" all existing files and folders to the latest properties.

I've been trying to think of a way around that, of bringing all files in the pool up to date with the latest properties. The only thing I can come up with is the touch command, applied recursively. Do you think that would work? As I understand it, any change to a file (including the timestamp) should result in a "Copy on Write" operation, which should generate a completely new file. (I realize that would result in doubling the size of the pool and filesystems if prior snapshots are not deleted.)
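For reference, the recursive touch could be sketched as follows. A throwaway directory stands in here so the commands are safe to try; in real use you would point find at the dataset's mountpoint (e.g. a hypothetical /tank/data).

```shell
# Throwaway stand-in for a ZFS dataset mountpoint:
demo=$(mktemp -d)
touch -d '2001-01-01' "$demo/file1" "$demo/file2"

# Rewrite the mtime of every file and directory under the path.
# -xdev keeps find from crossing into other mounted filesystems.
find "$demo" -xdev \( -type f -o -type d \) -exec touch {} +

ls -l "$demo"   # both files now carry today's timestamp
```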

    What do you think?

    Just don't run bleeding edge versions...

    With the ZFS version installed by your plugin, I don't believe there's any danger here. :) (My ZFS version is 2.1.11)

    In this particular bug case, it would probably be necessary to build or upgrade to the latest version (2.2.X) on the CLI. That's a task that a very small number of ZFS users would do. Further, I've never used any clone function and I doubt that 95% of ZFS users ever would. Those affected by this bug would most likely be advanced ZFS users.

When all is considered, I believe the number of affected users would be very small and, most likely, would be data center admins.

As you may be aware (unfortunately), any property changes will apply only to new files and those that are altered (rewritten). The only way I know of to get properties applied to "all files / folders" is to copy them in again.

    On the other hand; if you're getting the permissions behavior you're looking for, as things are right now, you should be OK.

In any case, I think you'll enjoy ZFS. Over the years I've been through a drive replacement, pool upgrades, file restorations from snapshots, etc. While I do maintain solid backups (on more than one server), I have yet to have a ZFS event that would cause me to lose faith in it. With decades of development, it's a rock solid filesystem. And ZFS' ability to detect and correct silent corruption gives me peace of mind in knowing that the data I'm backing up is error free at the source, before it's copied. (What's the point in backing up corrupted data?)

    zfs set aclinherit=passthrough (name of pool)
    zfs set acltype=posixacl (name of pool)
    zfs set xattr=sa (name of pool)
    zfs set compression=lz4 (name of pool)

    I would have recommended the above set of commands but it seems you're ahead of the game.

    You were missing the following. You might consider using it.

    zfs set aclmode=passthrough (name of pool)

    It is important to set the above properties, on the CLI, before copying data into the pool's filesystems. Otherwise, you'll have files and folders with mixed attributes.
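A quick way to confirm the properties took effect before loading data ("tank" is a hypothetical pool name; substitute your own):

```shell
# Prints each property's current value and its source (local vs. default):
zfs get aclinherit,aclmode,acltype,xattr,compression tank
```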

As far as setting other ZFS parameters (ashift and others), after looking at the plugin's defaults in a VM, I went with the defaults. If you have a 1Gb network, performance should be fine with the defaults and any vdev configuration you may choose. (Zmirror, RaidZ, Basic, etc.)

    When setting up drives for a pool, I'd go with "By ID".
    Using "By Path" will give you drive labels based on the current path, with names such as /dev/sda, /dev/sdb, etc. The issue with this is that, when a drive fails and is swapped out, Linux (in combination with the BIOS) may change the path names of the remaining drives. "By ID" will retain the names of pool member drives when a drive is physically removed.
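You can see what the "By ID" names look like on any Linux box; each symlink is based on the drive's model and serial number and points at whichever /dev/sdX path the drive currently holds:

```shell
# Stable, serial-number based names that survive path reshuffles:
ls -l /dev/disk/by-id/
```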

    Once you're set up, you might consider looking at -> this doc. It's for automated, rotating, and self purging snapshots. The doc also explains a couple of restoration concepts and how to do file / folder restorations from past snapshots.

zfs-auto-snapshot has no dependencies, meaning there's little to nothing to go wrong. I've used zfs-auto-snapshot with 3 versions of OMV.

    (BTW: If you manually take snapshots before installing zfs-auto-snapshot, you'll have to manually remove them. zfs-auto-snapshot won't purge anything it didn't create.)
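Listing and removing a leftover manual snapshot on the CLI might look like this (the pool and snapshot names are hypothetical):

```shell
# Show every snapshot under the pool "tank":
zfs list -t snapshot -r tank
# Remove one that zfs-auto-snapshot won't purge because it didn't create it:
zfs destroy tank/data@my-manual-snapshot
```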

    One last question, should I use zfs to store docker data

I wouldn't. While it was years ago, ZFS detected Docker's version of overlayfs as a "legacy filesystem". These filesystems were not removable without deleting Docker containers. That issue might have been corrected by the ZOL project in recent years, but I don't know that for a fact.

    The way I approach this is to use a single disk for "utility" purposes. Docker storage, Client backup sets, etc., go on a separate disk. I store data in a ZFS mirror for data integrity purposes.


    First thing; you don't need to add users that you create to the sambashare group. The way OMV is designed, that's not necessary. For your information, in OMV, all created users are added to the group users by default.
    Second; ignore the pi user. (Only use that user for CLI SSH sessions. If you add the user myself to the ssh group, you'll be able to log onto the CLI with the myself account. Thereafter, you wouldn't need the pi account.)
    Third; you're checking ACL boxes. That's a bad idea. Until you understand how OMV does permissions, in the GUI, it's best to use ONLY Owner, Group, Others. The document I referenced "NAS Permissions" explains this.
    Lastly; I'm assuming that you haven't changed data drive permissions, recursively, on the CLI.

Now, as I suggested before, create a NEW share according to the guide. (Name it Test or something like that.) Follow the guide as it's written when creating the Test share. Don't change anything, don't add anything, don't change the path to something other than the default, use the permissions shown, etc. At the end, make sure to copy and paste something into the new share from a Windows client.

    While I don't know about the Dell H710, you might want to search the net to see if the card can be flashed to IT mode.

I flashed a Dell H200 that I'm still using. After the flash, it was a straightforward non-RAID JBOD adapter with transparent SMART pass-through. Here's the -> thread that generally covers the topic. (Note that it's possible to brick the card.)