Did you try the Samba option like this?
max protocol = SMB2
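For reference, a sketch of where that option would go: the `[global]` section of smb.conf, or on OMV typically the extra options field of the SMB/CIFS settings (the exact field name is an assumption here):

```
[global]
    # cap the negotiated dialect; newer Samba versions spell this
    # "server max protocol" ("max protocol" remains a synonym)
    max protocol = SMB2
```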
bcache / flashcache / dmcache
Update:
Backup during the night from 2 clients at the same time: ok
Backup from a 3rd client during the day while 2 clients were doing incremental backups: ok
For two clients I used the same share and login name, also without problems.
No error messages in the logs.
Nice transfer rates (TM is always slow): up to 750 Mbit/s on the OMV network interface, avg. around 400-500 (from a new MBP with SSD).
What puzzles me ... why does TM always take so long to find the backup disks on network shares? I had this on all backup servers, no matter whether OMV, FreeNAS, netatalk 2 or 3. What is OS X doing all that time?
ZFS only if you have ECC RAM, otherwise it's quite useless. And you need more RAM (depending on your storage size).
I went with XFS on my OMV machines. I had one FreeNAS box with ZFS before, but after switching that machine to OMV I couldn't move the ZFS volume (due to different features in the BSD and Debian ZFS implementations), so I avoided the hassle and use XFS now.
To sell their bigger machines....
Btw, it seems the dsi_stream errors are gone for now with 3.1.7 (afaik they are a netatalk 2.2 bug)
SubZero: in my version most options are compiled in:
afpd 3.1.7 - Apple Filing Protocol (AFP) daemon of Netatalk
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version. Please see the file COPYING for further information and details.
afpd has been compiled with support for these features:
AFP versions: 2.2 3.0 3.1 3.2 3.3 3.4
CNID backends: dbd last tdb mysql
Zeroconf support: Avahi
TCP wrappers support: Yes
Quota support: Yes
Admin group support: Yes
Valid shell checks: Yes
cracklib support: Yes
EA support: ad | sys
ACL support: Yes
LDAP support: Yes
D-Bus support: Yes
Spotlight support: Yes
DTrace probes: Yes
afp.conf: /etc/netatalk/afp.conf
extmap.conf: /etc/netatalk/extmap.conf
state directory: /var/lib/netatalk/
afp_signature.conf: /var/lib/netatalk/afp_signature.conf
afp_voluuid.conf: /var/lib/netatalk/afp_voluuid.conf
UAM search path: /usr/lib/netatalk//
Server messages path: /var/lib/netatalk/msg/
---
@wastl: My TM stuff was running without major problems for months. But a few days ago the trouble started, so I turned to netatalk 3.x. And yes, it's still running on my machines.
Didn't find much about it here.
But ... I did a permission reset and deleted the content of the backup folders. After this, backing up from two clients at the same time worked well. Hope it stays like that.
Have to look in the docs, but I enabled it in the configure script.
Anyway, one small problem remains: a permission problem with the size check of the TM sparsebundle file. Have to look into that.
But ... got a failed TM backup error again .. bummer
So, Time Machine backups were running flawlessly. Quite happy for now.
Didn't configure Netatalk 3's Spotlight yet, will maybe do that later.
Btw, I think the maintainer's build options are incomplete; for OMV there should be ACL support and a few more features compiled in
Thanks!
Would be my small wish for 2.0 to have an option to disable it
Hmm, this was around the 3.1.7 release. Not sure if the mentioned patches are in 3.1.7 already...
Anyway, tried the 3.1.7 release: compiled fine, init scripts installed, running, works fine.
Blackmagic speed test: 108 MB/s write, 110 MB/s read ... so far so good
Thanks for the info. Was that conversation before 3.1.7?
Asking because 3.1.7 already has the new compile flags for Debian (systemd/sysv).
Compiled both 3.1.7 and 3.1.8dev (git) without problems...
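For anyone wanting to reproduce this, a minimal sketch of the build steps, assuming the release tarball; the --with-init-style value matches the new Debian sysv/systemd split, but verify the flags with ./configure --help on your tree:

```shell
#!/bin/sh
# Sketch of a netatalk 3.1.x source build on Debian/OMV - not a turnkey
# script; extra configure flags depend on which features you want in.
V=3.1.7
if [ -f "netatalk-$V.tar.bz2" ]; then
    tar xjf "netatalk-$V.tar.bz2"
    cd "netatalk-$V" || exit 1
    ./configure --with-init-style=debian-sysv   # or: debian-systemd
    make -j"$(nproc)"
    make install                                # run as root
    afpd -V                                     # verify the compiled-in features
else
    echo "netatalk-$V.tar.bz2 not found - fetch the release tarball first"
fi
```

The `afpd -V` at the end prints the feature list quoted above, so you can check that ACL/Spotlight/etc. really made it into the build.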
P.S. VMs are for pussies ... compile on the production server ... better than late-night crime TV
Btw, how can I block the netatalk plugin so it can't be selected by (user) error? Any way to do this?
(I don't want my manual netatalk 3 installation overridden)
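One way this could be done is a negative apt pin, so the plugin package can't be installed at all; the package name is an assumption here (check with `dpkg -l | grep -i netatalk`), the pinning mechanism itself is standard apt, not OMV-specific:

```
# /etc/apt/preferences.d/no-netatalk-plugin
# Package name assumed - verify with: dpkg -l | grep -i netatalk
Package: openmediavault-netatalk
Pin: release *
Pin-Priority: -1
```

A Pin-Priority below 0 prevents the package from ever being installed, which should keep the plugin out of the GUI's reach.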
Is there a way to disable the annoying "Leave Page" JavaScript dialog?
It blocks closing the browser, which is quite annoying at times (it cancels a system shutdown on the client)
Ok, will try. Thanks.
Thinking about dumping the AFP GUI and installing Netatalk 3 manually...
On my backup box, which has been running for months, there is a new problem with one client. Neither the OMV machine (1.19) nor the OS X client (10.9.x) has changed, no major upgrades.
Sometimes, and only sometimes, the TM backup fails with the following error:
In the afp logs there are just the usual (dsi_stream) errors (which afaik are a leftover of the old netatalk 2.2):
May 24 14:47:27 bus afpd[28306]: AFP3.3 Login by nk
May 24 14:47:27 bus afpd[28306]: afp_disconnect: trying primary reconnect
May 24 14:47:27 bus afpd[2700]: Reconnect: transfering session to child[3656]
May 24 14:47:27 bus afpd[2700]: Reconnect: killing new session child[28306] after transfer
May 24 14:47:27 bus afpd[3656]: afp_dsi_transfer_session: succesfull primary reconnect
May 24 14:47:27 bus afpd[3656]: dsi_stream_read: len:0, unexpected EOF
May 24 14:47:27 bus afpd[3656]: dsi_stream_read: len:0, unexpected EOF
May 24 14:47:27 bus afpd[3656]: dsi_disconnect: entering disconnected state
May 24 14:47:28 bus afpd[28241]: afp_disconnect: primary reconnect succeeded
May 24 14:47:28 bus afpd[28312]: AFP3.3 Login by nk
May 24 14:47:28 bus afpd[28312]: afp_disconnect: trying primary reconnect
May 24 14:47:28 bus afpd[2700]: Reconnect: transfering session to child[3656]
May 24 14:47:28 bus afpd[2700]: Reconnect: killing new session child[28312] after transfer
May 24 14:47:28 bus afpd[3656]: afp_dsi_transfer_session: succesfull primary reconnect
May 24 14:47:28 bus afpd[28247]: afp_disconnect: primary reconnect succeeded
May 24 14:47:28 bus afpd[28252]: afp_disconnect: primary reconnect succeeded
May 24 14:47:28 bus afpd[28258]: afp_disconnect: primary reconnect succeeded
May 24 14:47:28 bus afpd[28264]: afp_disconnect: primary reconnect succeeded
May 24 14:47:28 bus afpd[28270]: afp_disconnect: primary reconnect succeeded
May 24 14:47:29 bus afpd[28276]: afp_disconnect: primary reconnect succeeded
May 24 14:47:29 bus afpd[28283]: afp_disconnect: primary reconnect succeeded
May 24 14:47:29 bus afpd[28289]: afp_disconnect: primary reconnect succeeded
May 24 14:47:29 bus afpd[28294]: afp_disconnect: primary reconnect succeeded
May 24 14:47:29 bus afpd[28300]: afp_disconnect: primary reconnect succeeded
May 24 14:47:29 bus afpd[28306]: afp_disconnect: primary reconnect succeeded
May 24 14:47:30 bus afpd[28312]: afp_disconnect: primary reconnect succeeded
May 24 14:47:51 bus afpd[3656]: AFP logout by nk
May 24 14:47:51 bus afpd[3656]: AFP statistics: 460158.42 KB read, 266990.92 KB written
May 24 14:47:51 bus afpd[3656]: done
It's on Ethernet, so no wifi issues.
Any ideas?
Thanks! Unfortunately a bug in the Debian init scripts makes life more difficult: stopping the BitTorrent daemon on the OMV host also stops the one inside a container. For now (not enough time) I've abandoned the hybrid idea and run Transmission only inside LXC (no BT plugin enabled in OMV). Works fine with SickBeard and lets me do IP-based traffic shaping on the firewall.
For more info see also here: HOWTO: PlexConnect with own IP via LXC
Btw, there is a bug in the start-stop-daemon / init scripts. If you have, let's say, sshd running on the host and also sshd running inside a container, then "service sshd stop" on the host will also stop the sshd inside the container. This is because the pid is not tracked by the init scripts, so a host init script kills all instances, even those inside containers (the container instances also show up in the host's ps list). The bug has been known since 2011 (maybe earlier) and is still unfixed ... bummer
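The effect is easy to reproduce without any container at all; a minimal sketch where two plain sleep processes stand in for the host and container daemons (SIGTERM from pkill shows up as exit status 143):

```shell
# Why killing by process name is dangerous: a name-based kill matches
# every process with that name, including the ones that belong to a
# container. Two throwaway "sleep" processes stand in for the two
# daemon instances here.
sleep 300 & host_pid=$!
sleep 300 & container_pid=$!     # pretend this one runs inside the LXC
pkill -x sleep                   # what a pid-file-less init script effectively does
wait "$host_pid"      || echo "host daemon killed (exit $?)"
wait "$container_pid" || echo "container daemon killed too (exit $?)"
```

A pid-file-based stop (`start-stop-daemon --stop --pidfile ...`) would only hit the host instance, which is exactly the part the buggy scripts skip.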
P.S. I used sshd just as an example, didn't check whether the bug actually affects this service