Time machine backup on SMB share

    • OMV 4.x
    • geaves wrote:

      could that write up be used in the same way but for OMV? Or is there something within that process that would fail.
      Unfortunately the whole write-up is a fail since it only tries to cope with creating backups (using the pretty unreliable tmutil setdestination method). This tutorial totally ignores two things:

      • why we do backups: to be able to do a full restore. This won't work with such a hack since the share is not announced automagically on the network. So once your Mac's drive dies you won't be able to restore from your backup unless you're an expert. Everything around backup that requires special knowledge or manual work is bad
      • it also ignores reliability. An SMB server doing TM backups must at least support the F_FULLFSYNC extension to reliably store backups; otherwise data corruption will happen anyway. So trying this prior to Samba 4.8 is a horribly bad idea

      The web is full of dangerous TM tutorials and confusion (like the one spread here, not understanding that TM over AFP still works flawlessly). It started back with OS X 10.5 when people proposed ugly hacks to make TM work over SMB, NAS vendors like Synology advertised TM capabilities without using an appropriate Netatalk version supporting the newly introduced AFP calls, and so on. Then people lose data, forget about these complicated hacks, but of course the tutorials remain...
    • tkaiser wrote:

      Works as expected, restore and disaster recovery included.
      That is interesting. Mine looks just the same.

      Source Code

      root@korostorage:~# afpd -V
      afpd 3.1.9 - Apple Filing Protocol (AFP) daemon of Netatalk
      This program is free software; you can redistribute it and/or modify it under
      the terms of the GNU General Public License as published by the Free Software
      Foundation; either version 2 of the License, or (at your option) any later
      version. Please see the file COPYING for further information and details.
      afpd has been compiled with support for these features:
      AFP versions: 2.2 3.0 3.1 3.2 3.3 3.4
      CNID backends: dbd last tdb
      Zeroconf support: Avahi
      TCP wrappers support: Yes
      Quota support: Yes
      Admin group support: Yes
      Valid shell checks: Yes
      cracklib support: Yes
      EA support: ad | sys
      ACL support: Yes
      LDAP support: Yes
      D-Bus support: No
      Spotlight support: No
      DTrace probes: No
      afp.conf: /etc/netatalk/afp.conf
      extmap.conf: /etc/netatalk/extmap.conf
      state directory: /var/lib/netatalk/
      afp_signature.conf: /var/lib/netatalk/afp_signature.conf
      afp_voluuid.conf: /var/lib/netatalk/afp_voluuid.conf
      UAM search path: /usr/lib/netatalk//
      Server messages path: /var/lib/netatalk/msg/

      Source Code

      merula-pc:~ lars$ sudo sw_vers
      Password:
      ProductName: Mac OS X
      ProductVersion: 10.13.6
      BuildVersion: 17G65
      merula-pc:~ lars$ sudo tmutil destinationinfo
      ====================================================
      Name : backup
      Kind : Network
      URL : afp://lars@korostorage._afpovertcp._tcp.local./backup
      ID : F8597DEE-F8CD-40ED-85DF-2D5877383F58
      I did not modify the default settings, neither for AFP in general nor for the share itself. The only change to the share is that I enabled time machine support. The share resides on a LUKS-encrypted RAID-5 of the OMV, but that should not make any difference, should it?
      I also tried to connect to the share with user credentials from OMV as well as as a guest (with guest read and write access enabled). Makes no difference. Any more hints on what I could try?
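      For comparison, the "time machine support" toggle in the OMV UI corresponds to a Netatalk volume option; a minimal afp.conf sketch of such a share (share name and path are examples here, not the OMV defaults) could look like:

      ```ini
      ; minimal afp.conf volume section for a Time Machine share
      ; (path and share name are illustrative assumptions)
      [backup]
      path = /srv/backup
      time machine = yes
      ; optional cap on the volume size reported to TM clients, in MiB
      ; vol size limit = 200000
      ```

      If the generated /etc/netatalk/afp.conf on the box contains an equivalent section, the server side is at least configured as intended.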
    • Perhaps there are clues in my experience this week:

      My TM has not been working on two servers. I just stopped using it. With @tkaiser asserting that it works, I decided to reinstall the plugin. It did not work. I then thought about whether this may be a file system issue. I have UnionFS, which is actually mergerfs, holding together my data that resides on multiple EXT4-formatted drives. I then started to recall all the adjustments I had to do to get my PlexDB onto my fused volume. So I made the same adjustments to /etc/fstab that I need in order to use that volume for my Plex transcode directory and PlexDB. TimeMachine now sort of works; I have to investigate if I am really having issues with multiple clients using the same volume to TM their Macs.


      OMV 4.x on intel
    • ajaja wrote:

      TimeMachine now sort of works

      Apple invented its own AFP call for TimeMachine back in the 10.5 days and did the same with SMB, defining a new SMB extension, for the simple reason of syncing data to disk without having layers in between that do not do this (i.e. that keep data cached). I would never use any FUSE based filesystem for my TM shares. Never!

      My choices are either ZFS or btrfs, since those CoW filesystems allow for snapshot creation, so you can always revert to an older snapshot if your sparse bundle gets corrupted. With btrfs a recent kernel and also a recent Debian version are important (since older Debian releases ship with a horribly outdated btrfs-tools package that will cause more harm than good if you need to query/repair btrfs filesystems)
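      To illustrate the snapshot idea, a hypothetical cron entry (the paths and schedule are assumptions, not from the post) could take a nightly read-only snapshot of the btrfs subvolume backing the TM share:

      ```
      # /etc/cron.d/tm-snapshots (hypothetical): nightly read-only snapshot
      # of the btrfs subvolume holding the sparse bundles
      0 3 * * * root /bin/btrfs subvolume snapshot -r /srv/timemachine /srv/.tm-snapshots/tm-$(date +\%F)
      ```

      If a sparse bundle then gets corrupted, you can copy it back from the last known-good snapshot instead of starting the backup history from scratch.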
    • TL;DR: does 'fruit:time machine max size = 200G' work with OMV5/Samba 4.9.5?

      Full story:

      I am trying to revive an old DNS-325 NAS box (armel single-core 1.2 GHz CPU, 256 MB RAM), replacing the original D-Link firmware with OMV. Version 4 is just fine, but Time Machine backups over AFP are slow. Version 5 of OMV is a no-go for this hardware since it uses an always-running configuration agent which takes nearly 30 MB of RAM and is so slow that any change takes minutes to be applied.

      In order to use TimeMachine backups I rebuilt the Samba 4.9.5 packages from buster on stretch (including all dependencies of course; 84 packages in total were built to build Samba, but only 10 or so are required for the upgrade on top of the official stretch versions). After adding a repo I did 'apt install samba' and the system was upgraded without any issues. So now I have OMV4 with Samba 4.9.5 with TM support.

      Next I tried to patch OMV4 to add TM support and TM quota options to the UI, but later I decided to stay on the original OMV4 and use Extra configs for the SMB service and SMB shares (also optionally disabling Avahi for SMB using Samba's internal mDNS option). Everything is fine except TM quotas.

      When I have no Time Machine quota set, it works great, much better for me than the AFP version. But when I add the line 'fruit:time machine max size = 200G' to the share configuration, the share becomes inaccessible via SMB, both for TM and for plain files; the Mac just can't open the folder. This is the only option which breaks the access. I tried it in combinations ranging from the OMV5 fruit configs to many others, no luck.
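      For reference, a minimal smb.conf share section combining the vfs_fruit Time Machine options (the share name, path and user are assumptions; 'fruit:time machine' needs Samba >= 4.8 and the max size option Samba >= 4.9):

      ```ini
      [timemachine]
      path = /srv/timemachine
      valid users = lars
      read only = no
      vfs objects = catia fruit streams_xattr
      fruit:time machine = yes
      fruit:time machine max size = 200G
      ```

      On x86/x86_64 this combination is the usual way to cap a TM share; it is only the max size line that misbehaves here.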

      The questions are:
      - why does OMV5 not support this option (at least, when I looked at it, it did not)? Is there a reason for that?
      - has anybody tried it at all?
      - any hints on how to limit TM disk space without using filesystem-wide user quotas?

      Ideally I wish to limit the quota per user, but even quota per share would be good enough (I can have one TM share per user).

      I also tried to put a com.apple.TimeMachine.quota.plist (with a leading dot and without it) in different places, and it doesn't seem to work at all (it is ignored by the TM clients).
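      For what it's worth, this plist is commonly placed at the root of the backup destination with a single key holding the limit in bytes; the GlobalQuota key name and the placement here are assumptions based on third-party NAS usage, not something this thread confirms:

      ```xml
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <!-- assumed key: quota in bytes (here ~200 GB) -->
          <key>GlobalQuota</key>
          <integer>200000000000</integer>
      </dict>
      </plist>
      ```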

      Any help is highly appreciated.


    • It seems that I found the problem: it is a bug in the Samba source code when built for the armel architecture (it may work on x86/64).

      In vfs_fruit two data types are used: off_t and size_t. The first is normally used for file sizes and is 8 bytes long (as configured for the Samba build); the second is used to count objects and is 4 bytes long (according to my build logs). But in the vfs_fruit code they for some reason compare free space with the SIZE_MAX constant, which belongs to size_t and tops out at 4 GB. So an overflow is detected and an error returned.

      There are two options to fix that: either make size_t 8 bytes globally (I do not know where else they may mix the file-size type with the object-count type), or fix just this one issue locally in vfs_fruit. I will try the second for now and see if it solves the problem.
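      A minimal sketch of the failure mode (a deliberate simplification, not the actual vfs_fruit code): ARMEL_SIZE_MAX stands in for what SIZE_MAX expands to in a 32-bit armel build, so the bogus rejection can be reproduced even on a 64-bit host:

      ```c
      #include <stdint.h>
      #include <stdio.h>

      /* On armel, size_t is 4 bytes, so SIZE_MAX is only UINT32_MAX (~4 GiB),
       * while Samba's off_t (built with _FILE_OFFSET_BITS=64) is 8 bytes. */
      #define ARMEL_SIZE_MAX ((uint64_t)UINT32_MAX)

      static int accept_quota(int64_t bytes)
      {
          /* the flawed range check: any value above 32-bit SIZE_MAX is
           * treated as an overflow, and the share errors out */
          if ((uint64_t)bytes > ARMEL_SIZE_MAX)
              return 0;   /* rejected */
          return 1;       /* accepted */
      }

      int main(void)
      {
          int64_t quota_200g = 200LL * 1000 * 1000 * 1000; /* the 200G cap */
          int64_t quota_2g   = 2LL * 1000 * 1000 * 1000;   /* fits in 32 bit */
          printf("200G quota accepted: %d\n", accept_quota(quota_200g));
          printf("2G quota accepted: %d\n", accept_quota(quota_2g));
          return 0;
      }
      ```

      With a 64-bit size_t both values pass the check, which is why the same Samba build is fine on x86_64.
      
      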
    • After investigation I realised that this bug is already known and reported. The reporter's solution is not complete, so I added a comment to it. I rebuilt vfs_fruit and confirmed that it works properly. Now I am rebuilding the whole samba-4.9.5 package for clean upgrades.

      The bug report is here: bugzilla.samba.org/show_bug.cgi?id=13622
      It seems no one has fixed it since 2018. Maybe you can push Ralph to get the fixes upstream.

      Apart from the samba package upgrade via apt, it seems no more changes are required to use it (including TimeMachine over SMB) with OMV4, which, as I said, is the only version running well on low-powered NAS boxes.
    • osnwt wrote:

      It seems no one has fixed it since 2018.
      Probably because most ARM CPUs now are armhf/arm64. Not many people use armel anymore, especially with Samba.
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
    • It doesn't matter, given that the fix is:
      - known
      - trivial
      - fixes a bug in the officially offered Samba package for armel

      I agree that size_t should "store the maximum size of a theoretically possible object of any type (including array)". But at the same time the code mixes off_t and size_t, both used for representing file sizes (say, band sizes), where other modules use uint64_t. So if I were a maintainer, I would fix it asap. But I am not a maintainer, so I've fixed it for myself.