Posts by crashtest

    Sorry. I know nothing useful about Windows 11 and have nothing to offer on why the Win11 client is behaving as it is.

    Still, unless M$ is completely revamping how access control works, duplicating Windows usernames and passwords on a Linux server should allow access from a Win11 client. At this point in time, Win11 would almost have to be backward compatible.

    Credentials manager was the only way to see the shares under RPI in Windows or map a folder to any of them etc. This was very weird - because I did not have to do this on other PCs in the home - a mixture of Win10 and Win11 devices.

    That is weird. In the vast majority of cases, when permissions are employed, a user can "see" an SMB share under Network, but they can't open it. These are the odd issues that I can't explain regarding Windows, but I believe they may have something to do with "Microsoft Partners" (OEMs) setting values in the registry that affect security and access control. (They're trying to "help" users.)

    When you reinstalled on the client with the access issue:
    Did you use a retail Windows disk, or was it an OEM recovery disk or perhaps a recovery partition on the hard drive?

    Windows 11.... Man... Windows 10 was supposed to be the last one, a rolling release. Windows 11 wasn't even supposed to happen. M$'s constant flux with their flavors of OS is wearing me out.

    First, I'm glad you worked out the connection issue. The short guide you looked at covers the first step: making client-to-server connections possible within a LAN.

    The second step is permissions and access control. For new users, I addressed this topic -> here with Linux permissions that are wide open. That gets new users started but, -> by my own admission (in para 1), wide-open network shares should be tightened up.

    Access control - or, in the Linux world, file and folder permissions - is another topic.

    Using Windows Credentials, along with a preexisting server user account (in your case the pi user), is one way to grant permissions to a restricted server share. It is, however, a bit dangerous to suggest to new users. R-PI installations have a preexisting user account (pi). Other installations do not. For amd64 and Armbian installs, using Windows Credentials for access might encourage the user to use preexisting accounts like admin or root. This would create a considerable server security risk.
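    A safer pattern is a dedicated, unprivileged user and a share restricted to that user. As a sketch only (the share name, path, and username below are made up for illustration), the smb.conf stanza would look something like this:

```ini
[data]
   ; hypothetical path and user -- adjust to your own setup
   path = /srv/dev-disk-by-label-data/shared
   valid users = fred
   read only = no
```

    Then create the matching Linux user on the server and give it a Samba password with sudo smbpasswd -a fred. If that username and password match the Windows login, Windows connects without even prompting for credentials.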

    If you want more granular access control, take a look at this doc -> Getting Started with NAS Permissions in OMV . It will show you how to set up permissions that allow users to gain access to specified server shares, using their Windows user accounts. Try it and tell me what you think.

    It appears that you have a complex backup strategy that may have servers backing up each other. You know, if these VM servers are running on the same hardware, your servers are not truly backed up. It's safer to back up to a separate hardware platform.

    If you're backing up clients, take a look at Urbackup. This is an enterprise-level package that will back up Windows clients in the background. (The client performance impact is very low to unnoticeable.) You'll have a choice between file / folder restorations and full client drive (image) restorations that go back in time.

    Restoration is not a matter of restoring a full backup, then restoring individual incrementals to get to the latest backup. In Urbackup, there's a restoration CD that boots up a client and contacts the server. With the boot utility, an image restoration from any chosen date is a single operation. The icing on the cake is that, for Windows clients, Urbackup does file "de-duplication": it only copies Windows system files once, from among several clients, and it doesn't store duplicates of the same file. This saves a LOT of disk space.

    Urbackup runs in a Docker. That can be a stumbling block for new users, so I would suggest setting up a VM to explore Dockers. The primary reason to get up to speed on Dockers is that, soon, if you want to set up a server add-on, it will most likely be a Docker. (Dockers are the safest way to add packages onto a server, without consequences.)
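    For what it's worth, a minimal compose file for the Urbackup server container might look like the sketch below. The image name and the host paths are assumptions - check Docker Hub and adjust the storage path to your own layout before using it:

```yaml
services:
  urbackup:
    image: uroni/urbackup-server   # assumed image name; verify on Docker Hub
    network_mode: host             # simplest way to let LAN clients find the server
    volumes:
      - /srv/dev-disk-by-label-data/urbackup:/backups   # backup storage
      - /srv/urbackup-db:/var/urbackup                  # server database
    restart: unless-stopped
```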

    Finally, there are other ways of replicating shares across servers and a way to create and maintain a full backup server with data pre-populated, if you're interested in something like that.


    When it comes to file operations, "moving" files and folders in mergerFS works like it would on a single drive, provided the source and destination are on the merged drive.

    You do have a policy (Existing Path, Most Free Space) that might need some explanation.

    If you have two "merged" drives that are empty:

    - When you create a new folder within the collection of two mergerFS drives, the new folder will go to the disk with the "most free space" (because there is no existing path). We'll say that the folder is Videos and that mergerFS created the Videos folder on disk 1.
    - Thereafter, any file copied to the Videos folder will "always" go to disk 1, because there's an "Existing Path". Why? Because "Existing Path" is the first directive.

    - The next newly created folder will go to disk 2: no path exists for it yet, and disk 2 now has the most free space.

    Existing Path, Most Free Space can be problematic where video or other large files are concerned. With a Videos folder on drive 1, all video files copied to that folder will go to disk 1. This can (and often does) completely fill drive 1 while the second drive is relatively empty.

    With the policy Most Free Space, the Videos folder is created on all drives and, as files are copied into the Videos folder, they are distributed according to the single directive, "most free space". This has the net effect of distributing files equally among same-sized drives.

    While there are arguments for or against both policies, most users will tend to get what they want with the policy "most free space".
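    Under the hood, the policy is just a mount option on the mergerFS line. OMV manages that line itself, so the following is only to illustrate where the setting lives (the branch paths and mount point are made up). "Most free space" is category.create=mfs, while "Existing Path, Most Free Space" is category.create=epmfs:

```
# illustrative /etc/fstab entry -- OMV generates its own version of this
/srv/dev-disk-by-label-disk* /srv/mergerfs/pool fuse.mergerfs defaults,allow_other,category.create=mfs,minfreespace=10G 0 0
```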

    I can't sort out how you're using Acronis to back up. Notionally, are you backing up full disks? Are we talking about LAN Client disks?

    There are options for backing up clients to your server that include file / folder AND image backups, and that will allow you to selectively restore any of several backups going back in time (individual files, folders, or the full drive).

    If you're attempting to backup data on your server, to another disk or another location, there are options that will allow you to do that as well.
    So the question is, what are you (specifically) backing up?


    MergerFS sorts files onto the mergerFS drive (the mount point for the collection of disks) when they are written, in accordance with the policy in place at the time. It does not break large files into pieces, and it doesn't re-sort or rebalance storage after the write takes place. If you want to change the policy or rebalance storage, you'll have to do that manually.

    Watch out. The netinst image gives the option to install desktop during the installation process.

    That is the reason for this screen shot and instructions in the 32-bit / Alt 64-bit install. To make it a bit clearer, I'll edit the following with a caution statement.

    Of course, if users follow an internet guide, all bets are off.

    Software selection
    Only the SSH server should be selected. Make changes to reflect the following and Continue.


    The green lights for the drives are related to specific SMART stats. While yours appear to be fine, you can check the stats of individual drives against the following:

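    From the command line, smartctl will dump the same attributes. This is only a sketch - the drive path is a placeholder, and it needs root and a real disk. Run it against each of your drives in turn:

```
# substitute /dev/sda with each drive in turn
sudo smartctl -A /dev/sda | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
```

    Non-zero raw values on the first three attributes point at the drive itself; a climbing UDMA_CRC count points at cabling or the controller.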

    It "appears" that all drives are affected, but note that a single malfunctioning drive can cause errors across all drives because they're connected to the same bus. You might try disconnecting one drive at a time, but there's no guarantee how ZFS would handle that in an array that's already degraded.


    "If" all drives are actually accumulating errors:
    I agree with the consensus. Possible candidates are the PS or the SATA interface on the motherboard. If you're using an HBA card to connect your drives, you might try reseating it.

    Another ugly possibility is ESD damage. Is your server on a protective power strip or a UPS? Have you had any odd power events recently? A surge, dropouts, a brownout, a lightning strike, etc.

    Testing a switching PS is tricky. Measuring a PS's V-ripple (leaking AC components that make output voltages "dirty") depends on the switching frequency and the available test equipment. (The frequency might be somewhere between 30kHz and 150kHz, but it could be as high as 500kHz.) My multimeter is limited to an AC range of 40Hz to 400Hz, and I suspect most meters are similar. A good quality oscilloscope might work. You'd want less than 3mV of ripple riding on the DC voltage; preferably it would be absolutely clean - 0mV. BTW, a switching supply must be tested under load.

    Bottom line: component substitution is the easiest route.

    If you don't have a backup, I'd set up a backup job immediately, before doing anything else.

    I'm pretty sure it's because I run FF in Privacy mode

    That would explain it. No site data OR cookies are stored when the app is closed. Obsolete cache data wouldn't enter the picture.


    have some new 24" monitors 8) and I need to move the wall brackets for them,

    Eight? Are they to be mounted edge to edge?

    I bought a 32" high-res flat screen TV for use as a monitor. I have yet to use it because of its bulk on a small desk. Thinking about it, an articulating wall mount might be the answer.

    I just dealt with the same issue on Firefox. You have to delete cached data AND cookies for the website. To avoid clearing site data for all sites, I cleared cookies and cached data for OMV by IP address. Firefox allows that.

    Without clearing everything, I'm not sure how to selectively clear site data and cookies with Edge or Explorer.

    Zoki cannot add two ZFS mirrored disks to its current box. In his first post Zoki said he has a box with 4 bays and all 4 are occupied.

    If he has two additional SATA ports (or even one) on the motherboard, one or two drives can be cabled to the motherboard, laying or even hanging out of the side of the case. One has to be careful not to bump the drives with power applied but it works fine for temporary use. I've done it before.

    The second choice, degrading the array to open up a slot, is possible with the option of upgrading a single disk to a mirror. The command line VM exercise was the first time I tried that. It worked without any issues.


    As far as ZFS expansion is concerned, I suppose it depends on how you look at it. Upgrading ZFS is a bit more rigid than traditional RAID or LVM, but I believe the benefits are worth it. It's just a matter of opinion. Really, there's no right or wrong.

    The downside is that ZFS is not yet very flexible regarding expansion procedures.

    Not true, my friend, at least in my opinion. ZFS can easily be expanded by adding another Vdev, which can be a single disk.

    I believe it's better to add mirrors to a pool, but ZFS will allow any kind of Vdev, a mirror, RAIDZ1, a single disk, etc., to be added to an existing pool.

    In the following, the pool is made up of a mirror (RAID1 equivalent), RAIDZ1 (RAID5 equivalent), and, at the bottom, a basic volume (a single disk). I did have to use the force option to add the single disk.

    This was done in three separate steps, in the GUI:
    Create a mirror, then expand adding RAIDZ1, then expand again adding a basic volume
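    On the command line, those three GUI steps map to roughly the following. The pool name and device names are made up for illustration; in practice you'd use the by-id device names:

```
# 1. Create the pool as a mirror:
zpool create tank mirror /dev/sda /dev/sdb
# 2. Expand it with a RAIDZ1 Vdev:
zpool add tank raidz1 /dev/sdc /dev/sdd /dev/sde
# 3. Expand again with a basic (single disk) Vdev. -f is the force option,
#    needed because the redundancy doesn't match the rest of the pool:
zpool add -f tank /dev/sdf
```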

    What ZFS won't do is allow any kind of Pool contraction or a Vdev removal. (For this reason, using a single disk as a Vdev is a real bad idea.)

    BTRFS allows for an array to be downsized, as in actually contracting the filesystem size and removing devices, but I've never tested it.


    1. Is ZFS the way to go for my needs?
    2. I have a lot of hard links (rsnapshot backups). Does ZFS cope with that?
    3. Is the plan feasible in your opinion or does it need modification?

    1. I think you can do it with one modification.
    - If you have the room in the case (or could temporarily cable in two drives laying beside the case) I would add a second pool, using the larger 8TB disks in a mirror.
    - Use rsync to recreate the entire structure of the existing ZFS array to the new 8TB mirror.
    - You wouldn't have to reconfigure OMV. Just redirect the shared folder (music for example) to the recreated music folder / device on the new array. The stuff layered onto the shared folder, like a samba share, will follow.

    Once the rsync job is finished, shared folders are redirected, and samba shares are tested, the older disks can be removed without consequence. After that, the old disks could be wiped, used as a backup pool, or used in another device.

    2. I believe it will. If these hard links exist in the current array, the new array should be fine. ZFS did have issues with overlayfs (used by Dockers) a few years ago, but I believe that was solved in later pool versions.

    3. You could give your "single disk to a mirror upgrade" a try. It worked fine for me, but it's a command line operation.

    I did the following in a VM:

    zpool attach poolname existinghdd blankhdd

    zpool attach ZFS1 ata-VBOX_HARDDISK_VB5ee42a80-ac4d8f53 /dev/sdd

    (Even if it's a bit risky, as shown above, the device name will work as well. :) )

    The above upgraded a single drive to a mirror as follows: