Posts by odinb

    Hmm, I think I know what it is now! I found other people having issues with Time Machine backups on Big Sur when Sophos is installed. I do not have that, but on my work machine I have "Crowdstrike Falcon".


    Updated one of my private machines to Big Sur, and Time Machine works just fine on it!


    So it must be an issue with "Crowdstrike Falcon" on the work machine! Will open a ticket with them!


    Sorry for the "red herring"!

    saber-rider1

    Of course I know that, but as stated, my Catalina machines still back up just fine to the same destination, which should prove that it is most likely not any of those issues!


    I also know of this trick:

    Speeding up:

    https://www.imore.com/how-speed-your-time-machine-backups


    To speed up, type:

    Code
    sudo sysctl debug.lowpri_throttle_enabled=0

    To revert to your normal CPU activity:

    Code
    sudo sysctl debug.lowpri_throttle_enabled=1
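
    To see which state the throttle is currently in, the value can simply be read back (standard sysctl usage, nothing Time Machine specific):

    Code
    # Prints the current setting: 1 = low-priority I/O throttling active (the default), 0 = disabled
    sysctl debug.lowpri_throttle_enabled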


    Using this trick, I have managed to finish the initial backup on Big Sur several times, but incremental backups never work after that!

    It then fails again on every attempt (the failed attempts were shown in attachments here).

    What settings do you have or add to your SMB/CIFS config and your Time Machine share?

    Are you saying these quotas are needed, or optional to get it working?
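
    For context, the kind of share section I mean would look roughly like this. This is only a sketch based on the Samba vfs_fruit documentation, with an example name and path, not a known-working config:

    Code
    [TimeShare]
       path = /srv/dev-disk-by-label-OMV/TimeShare
       vfs objects = catia fruit streams_xattr
       # Advertise the share as a Time Machine destination
       fruit:time machine = yes
       # Optional quota; this is the kind of setting I am asking about
       fruit:time machine max size = 1T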


    I can get it to do an initial backup, but it throws errors for every manual or hourly backup after that.


    Errors like this:

    Quote

    2021-03-21 23:23:07.533941-0500 0x6453 Info 0x0 1585 0 backupd: (TimeMachine) [com.apple.TimeMachine:General] Backup cancel was requested.

    2021-03-21 23:23:17.630145-0500 0x6453 Error 0x0 1585 0 backupd: (TimeMachine) [com.apple.TimeMachine:General] backupd exiting - cancelation timed out

    and:

    Quote

    2021-03-21 23:12:12.275244-0500 0x433f Error 0xc351 252 0 backupd: (TimeMachine) [com.apple.TimeMachine:General] Failed to unmount '/Volumes/.timemachine/NAS-OMV - SMB\/CIFS._smb._tcp.local./22879FD3-1AC5-4D7F-BFB2-F8107ABEF0B2/TimeShare', error: Error Domain=com.apple.diskmanagement Code=0 "No error" UserInfo={NSDebugDescription=No error, NSLocalizedDescription=No Error.}

    and:

    Quote

    2021-03-21 23:12:05.885016-0500 0x4d38 Error 0x0 252 0 backupd: (TimeMachine) [com.apple.TimeMachine:General] Failed to mount APFS snapshot with name 'com.apple.TimeMachine.2021-03-21-191717.backup' on volume '/Volumes/Backups of EMB-82JGLVDR' at mountpoint: '/Volumes/.timemachine/C2F98D57-DAFE-4A6A-B7F0-97E7F3D62E98/2021-03-21-191717.backup', error: Error Domain=NSPOSIXErrorDomain Code=5 "Input/output error"

    2021-03-21 23:12:05.885156-0500 0x4d38 Error 0x0 252 0 backupd: (TimeMachine) [com.apple.TimeMachine:General] Failed to mount backup snapshot 'com.apple.TimeMachine.2021-03-21-191717.backup' at '/Volumes/.timemachine/C2F98D57-DAFE-4A6A-B7F0-97E7F3D62E98/2021-03-21-191717.backup, error: Error Domain=NSPOSIXErrorDomain Code=5 "Input/output error"

    2021-03-21 23:12:05.885264-0500 0x4d38 Error 0x0 252 0 backupd: (TimeMachine) [com.apple.TimeMachine:General] Failed to mount backup 2021-03-21-191717.backup error: Error Domain=NSPOSIXErrorDomain Code=5 "Input/output error"
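
    For reference, these lines come from the macOS unified log; a query along these lines should reproduce them (the predicate is based on the subsystem shown in the messages and may need adjusting):

    Code
    log show --last 2h --info --predicate 'subsystem == "com.apple.TimeMachine"'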

    This is getting really frustrating! I have had to resort to an external USB drive just to have a backup! That should not be needed, and it will not scale once I upgrade the rest of the family's machines to Big Sur!

    Hmm, found this on reddit (https://www.reddit.com/r/MacOS…achine_on_samba_broken/):

    Quote

    got it to work, though I am not sure my solution will work for those not using zfs

    I had my vfs objects in the correct order

    I replaced vfs objects = catia fruit streams_xattr zfsacl

    with vfs objects = zfsacl catia fruit streams_xattr

    So, it would seem my change above (vfs objects = acl_xattr catia fruit streams_xattr) is similar, but for NTFS (https://www.samba.org/samba/do…html/vfs_acl_xattr.8.html). So now we have the ACL parameter for ZFS and for NTFS. My problem is that I run ext3/ext4 on my NAS, and I do not seem to be able to find the corresponding ACL setting for ext filesystems!

    Any ideas?
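
    In the meantime, one thing that seems worth checking is that the ext4 filesystem itself has user xattrs and ACLs enabled, since acl_xattr stores the NT ACL in an extended attribute. A quick check could look like this (the device and path below are placeholders, adjust to your own disk and share):

    Code
    # Show whether acl and user_xattr are in the default mount options of the ext4 filesystem
    tune2fs -l /dev/sdX1 | grep "Default mount options"
    # Verify that an extended attribute can actually be written on the share path
    setfattr -n user.test -v 1 /srv/dev-disk-by-label-OMV/TimeShare
    getfattr -d /srv/dev-disk-by-label-OMV/TimeShare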

    Been playing around with this as well after upgrading to Big Sur, but I cannot get it to work! Time Machine fails with "error 112: no mountable file systems".

    Googled a bit, and added this to my TimeMachine share (SMB/CIFS) on ext3/ext4:

    Code
    vfs objects = acl_xattr catia fruit streams_xattr
    fruit:nfs_aces = no
    inherit permissions = yes
    min protocol = smb2

    It now starts backing up, gets about halfway to two-thirds of the way into the backup, and then gives up! It then sits at "Waiting to complete first backup"! If I restart it, it starts "Preparing backup" and gives up after a couple of seconds! This pattern is repeatable!
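
    To rule out typos on the Samba side, testparm prints the share section exactly as Samba parses it; just a sanity check, not a fix (the share name here is an example):

    Code
    testparm -s | grep -A 10 "\[TimeShare\]"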


    It seems like others have gotten it working, so it should be possible! Here is someone who has it working on a Raspberry Pi: https://saschaeggi.medium.com/…with-big-sur-1e66a9650789


    And also these guys on other machines:

    https://forums.macrumors.com/t…ions-11-0-1-beta.2265193/


    Has anyone out there had luck with this on OMV, or any hints on what to try? I would hate to have to build a separate setup just for Time Machine!

    Hmm, very interesting that the network setup is related to the smbd start time!


    I have a bond setup on my OMV NAS using "balance-alb", so I changed the primary device and rebooted, and now smbd comes up in a matter of seconds (6 seconds to be exact), instead of the almost exactly 2 minutes it took before!

    Code
    root@NAS-OMV:~# journalctl -u smbd.service
    -- Logs begin at Mon 2021-02-22 09:42:36 CST, end at Mon 2021-02-22 09:43:07 CST. --
    Feb 22 09:42:42 NAS-OMV.kingsville.lan systemd[1]: Starting Samba SMB Daemon...
    Feb 22 09:42:43 NAS-OMV.kingsville.lan smbd[1434]: [2021/02/22 09:42:43.228126, 0] ../lib/util/become_daemon.c:138(daemon_ready)
    Feb 22 09:42:43 NAS-OMV.kingsville.lan smbd[1434]: daemon_ready: STATUS=daemon 'smbd' finished starting up and ready to serve connections
    Feb 22 09:42:43 NAS-OMV.kingsville.lan systemd[1]: Started Samba SMB Daemon.
    root@NAS-OMV:~#

    Changed back to the previous primary interface, and now it takes 3 seconds for the smbd service to start.

    Code
    root@NAS-OMV:~# journalctl -u smbd.service
    -- Logs begin at Mon 2021-02-22 09:48:04 CST, end at Mon 2021-02-22 09:48:32 CST. --
    Feb 22 09:48:07 NAS-OMV.kingsville.lan systemd[1]: Starting Samba SMB Daemon...
    Feb 22 09:48:08 NAS-OMV.kingsville.lan smbd[1363]: [2021/02/22 09:48:08.004221, 0] ../lib/util/become_daemon.c:138(daemon_ready)
    Feb 22 09:48:08 NAS-OMV.kingsville.lan smbd[1363]: daemon_ready: STATUS=daemon 'smbd' finished starting up and ready to serve connections
    Feb 22 09:48:08 NAS-OMV.kingsville.lan systemd[1]: Started Samba SMB Daemon.
    root@NAS-OMV:~#

    This must be a weird bug! It is probably related to the crappy network manager; I have seen it cause all kinds of weird issues as soon as you have interfaces on more than one network!


    The bond is built from a crappy Realtek Gig NIC (RTL8111/8168/8411) and a much better Intel Gig NIC (82574L) that is far more consistent in speed, so I prefer the Intel to be the primary!
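
    For anyone wanting to confirm which NIC the bond has actually picked, the kernel exposes this directly (standard Linux bonding; the bond interface name is an example):

    Code
    # Look for "Primary Slave" and "Currently Active Slave" in the output
    cat /proc/net/bonding/bond0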


    Thanks for the hint!

    Think I solved it!


    When I tried to remove the SMB shares, it refused and threw an error:

    Code
    Error #0:
    OMV\Config\DatabaseException: Failed to execute XPath query '//system/shares/sharedfolder[uuid='e305b821-7ae6-44b6-9c90-c6f73f71d0e0']'. in /usr/share/php/openmediavault/config/database.inc:78
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/smb.inc(137): OMV\Config\Database->get('conf.system.sha...', 'e305b821-7ae6-4...')
    #1 [internal function]: Engined\Rpc\Smb->getShareList(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getShareList', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('SMB', 'getShareList', Array, Array, 1)
    #5 {main}

    So, it is telling me which UUID is missing!

    Re-created the section in my "/etc/openmediavault/config.xml" (using the UUID from the error, and after making a backup this time), roughly as sketched below:
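
    This is only an approximation from memory of the OMV config.xml layout; the uuid is the one from the error, while name, mntentref and reldirpath are placeholders, and element names may differ slightly between OMV versions:

    Code
    <sharedfolder>
      <uuid>e305b821-7ae6-44b6-9c90-c6f73f71d0e0</uuid>
      <name>NameOfShare</name>
      <comment></comment>
      <mntentref>UUID-OF-THE-MATCHING-FSTAB-MNTENT-ENTRY</mntentref>
      <reldirpath>NameOfShare/</reldirpath>
      <privileges></privileges>
    </sharedfolder>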

    And finally rebooted; now I could see and delete the SMB shares!

    The "/etc/openmediavault/config.xml.old" was however gone after the reboot, so guess it is automatically deleted?


    Removed my SMB-shares, and the "shared-folders" (not the content). Then re-created both of them, and rebooted!

    Now it is not throwing errors anymore! But it takes something like 5 minutes for the smbd service to start!

    With "journalctl -b" I found the following errors:

    Code
    Feb 21 13:31:42 NAS-OMV.kingsville.lan systemd[1]: /lib/systemd/system/smbd.service:9: PIDFile= references path below legacy directory /var/run/, updating /var/run/samba/smbd.pid → /run/samba/smbd.pid; please update the unit file accordingly.
    Feb 21 13:31:42 NAS-OMV.kingsville.lan systemd[1]: /lib/systemd/system/nut-monitor.service:6: PIDFile= references path below legacy directory /var/run/, updating /var/run/nut/upsmon.pid → /run/nut/upsmon.pid; please update the unit file accordingly.
    Feb 21 13:31:42 NAS-OMV.kingsville.lan systemd[1]: /lib/systemd/system/nmbd.service:9: PIDFile= references path below legacy directory /var/run/, updating /var/run/samba/nmbd.pid → /run/samba/nmbd.pid; please update the unit file accordingly.
    Feb 21 13:31:42 NAS-OMV.kingsville.lan systemd[1]: /lib/systemd/system/idrivecron.service:8: PIDFile= references path below legacy directory /var/run/, updating /var/run/idrivecron.pid → /run/idrivecron.pid; please update the unit file accordingly.
    Feb 21 13:31:42 NAS-OMV.kingsville.lan systemd[1]: /lib/systemd/system/rpc-statd.service:13: PIDFile= references path below legacy directory /var/run/, updating /var/run/rpc.statd.pid → /run/rpc.statd.pid; please update the unit file accordingly.
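
    One way to apply the recommended change without touching the packaged unit files is a systemd drop-in override; a sketch for the smbd unit, using the path from its warning (the same pattern applies to the other units listed):

    Code
    # Contents of /etc/systemd/system/smbd.service.d/pidfile.conf
    [Service]
    PIDFile=/run/samba/smbd.pid

    followed by "systemctl daemon-reload" (or a reboot) to pick it up.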

    So I went ahead and fixed them as recommended in the messages. Those messages are no longer there after a reboot, but it still takes about 2 minutes for the smbd service to come up:

    Code
    root@NAS-OMV:~# journalctl -u smbd.service
    -- Logs begin at Sun 2021-02-21 14:06:12 CST, end at Sun 2021-02-21 14:08:21 CST. --
    Feb 21 14:08:13 NAS-OMV.kingsville.lan systemd[1]: Starting Samba SMB Daemon...
    Feb 21 14:08:13 NAS-OMV.kingsville.lan smbd[1526]: [2021/02/21 14:08:13.589788, 0] ../lib/util/become_daemon.c:138(daemon_ready)
    Feb 21 14:08:13 NAS-OMV.kingsville.lan smbd[1526]: daemon_ready: STATUS=daemon 'smbd' finished starting up and ready to serve connections
    Feb 21 14:08:13 NAS-OMV.kingsville.lan systemd[1]: Started Samba SMB Daemon.
    root@NAS-OMV:~#

    Is that normal? I don't seem to remember it taking that long for the smbd service to start, but I could be wrong.


    Thanks for the hand-holding! Appreciated!

    Been offline for a while due to power-outages in Texas!

    The share I removed looked something like this:

    Only, it was for the following path: "/srv/dev-disk-by-label-OMV/OMV-Media/Media/Music/Podcasts".

    The "<mntentref>b82c4f18-a619-408c-8507-5517739422ed</mntentref>" seems to be the same everywhere, but the "<uuid>" seems to differ, how can I find the correct one again?


    Searching for <smb>, I cannot find anything with that path!

    Tried changing privileges for a user on the leftover shares that are no longer needed, and it throws this error:

    Apparently the error is too big to post here, so here it is: https://pastebin.com/faTdcWjS



    Continuously see "IndexError: list index out of range" and "openmediavault.config.database.DatabaseQueryNotFoundException: No such object: //system/fstab/mntent[uuid='b82c4f18-a619-408c-8507-5517739422ed']."


    Guess I have some kind of corruption in my configuration now; how can I fix this?

    Hi!


    Had some old shares (several) under "Access Rights Management" > "Shared Folders". The Delete button was greyed out, and it threw an error when I tried to edit them.

    So, I went to "/etc/openmediavault/config.xml" and found and removed one of them there (everything for that share from and including <sharedfolder> to and including </sharedfolder>), leaving all the others alone. Now my SMB shares are no longer working, and under "Services" > "SMB/CIFS" they throw the error:

    Code
    Error #0:
    OMV\Config\DatabaseException: Failed to execute XPath query '//system/shares/sharedfolder[uuid='e305b821-7ae6-44b6-9c90-c6f73f71d0e0']'. in /usr/share/php/openmediavault/config/database.inc:78
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/smb.inc(137): OMV\Config\Database->get('conf.system.sha...', 'e305b821-7ae6-4...')
    #1 [internal function]: Engined\Rpc\Smb->getShareList(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getShareList', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('SMB', 'getShareList', Array, Array, 1)
    #5 {main}

    The SMB/CIFS service itself shows as running.

    Any ideas what I can do to get my SMB/CIFS shares back to a working state?


    Thanks!

    See the same thing on mine, and it is a fresh OMV5 install!


    Code
    root@NAS-OMV:~# systemctl status nfs-blkmap
    ● nfs-blkmap.service - pNFS block layout mapping daemon
       Loaded: loaded (/lib/systemd/system/nfs-blkmap.service; disabled; vendor preset: enabled)
       Active: active (running) since Mon 2020-05-18 14:41:21 CDT; 40min ago
      Process: 341 ExecStart=/usr/sbin/blkmapd $BLKMAPDARGS (code=exited, status=0/SUCCESS)
     Main PID: 346 (blkmapd)
        Tasks: 1 (limit: 4915)
       Memory: 600.0K
       CGroup: /system.slice/nfs-blkmap.service
               └─346 /usr/sbin/blkmapd

    May 18 14:41:21 NAS-OMV.kingsville.lan blkmapd[346]: open pipe file /run/rpc_pipefs/nfs/blocklayout failed: No such file or directory
    root@NAS-OMV:~#
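
    If pNFS block layouts are not actually used on the box (an assumption on my part, not an OMV recommendation), one option is simply to stop and disable the daemon:

    Code
    # Stops the service now and prevents it from starting at boot
    sudo systemctl disable --now nfs-blkmap.service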

    Guess this will be coming downstream soon!


    CVE-2020-11651: An issue was discovered in SaltStack Salt before 2019.2.4 and 3000 before 3000.2. The salt-master process ClearFuncs class does not properly validate method calls. This allows a remote user to access some methods without authentication. These methods can be used to retrieve user tokens from the salt master and/or run arbitrary commands on salt minions.



    CVE-2020-11652: An issue was discovered in SaltStack Salt before 2019.2.4 and 3000 before 3000.2. The salt-master process ClearFuncs class allows access to some methods that improperly sanitize paths. These methods allow arbitrary directory access to authenticated users.


    Code
    NAS-OMV:~$ sudo apt list --installed |grep salt

    WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

    salt-common/usul,usul,now 2019.2.3+ds-1 all [installed,automatic]
    salt-minion/usul,usul,now 2019.2.3+ds-1 all [installed,automatic]
    NAS-OMV:~$



    https://www.theregister.co.uk/…ation_tool_vulnerable_to/


    https://www.securityweek.com/c…8SecurityWeek+RSS+Feed%29
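
    Once the patched packages (2019.2.4 / 3000.2 per the CVE texts above) reach the repos, an upgrade along these lines should pick them up (a sketch; package names taken from the apt output above):

    Code
    sudo apt update
    sudo apt install --only-upgrade salt-common salt-minion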

    By the way, the issue that several people see with resolv.conf not being populated after the upgrade is fixed by issuing:


    Code
    $ sudo dpkg-reconfigure resolvconf

    and then rebooting. At that point, /etc/resolv.conf will be populated again from your DHCP server!
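
    To confirm it worked, just check the file after the reboot (the nameserver entries will obviously depend on your DHCP server):

    Code
    cat /etc/resolv.conf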