Do you know now if it’s possible?
No, it's not possible.
OK, did some testing and the .recycle dir is only populated with a file if it is deleted AND the recycle checkbox is checked.
There are more questions. Where is the file opened, in a global share or a specific home dir? Where is the unencrypted archive located after you've opened it? In the shared folder or the home share?
Is it possible that you have enabled the recycle bin and the behaviour you see is the result of your archive tool? Maybe it somehow duplicates the file for decryption and deletes it afterwards.
If there is a copy of your previously opened archive in the .recycle dir, then you MUST have enabled the recycle bin feature. Samba will surely not create this dir on its own.
Would you please tell us step by step how to reproduce this? What needs to be configured?
Hmmm, strange behaviour of Samba. To me the OMV-generated smb.conf looks OK; no recycle bin is configured when the checkbox is not set. So this must be something internal to Samba. Have you checked the issue tracker of the upstream Samba project? Please note that OMV only generates the configuration of the services it provides, everything else is out of the control of the OMV project. Bugs in the services used by OMV have to be reported to Debian or to the upstream project itself.
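For reference, a recycle bin in Samba is configured per share via the vfs_recycle module. A minimal sketch (the share name and path are placeholders; the exact options OMV writes when the checkbox is set may differ):

[myshare]
    path = /srv/dev-disk-by-uuid-xxxx/myshare
    # vfs_recycle moves deleted files into the repository instead of removing them
    vfs objects = recycle
    recycle:repository = .recycle
    recycle:keeptree = yes
    recycle:versions = yes

If the checkbox is not set, none of these recycle options should appear in the share section of /etc/samba/smb.conf.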
This is a UI regression. The field should not be displayed in Edit mode. Fixed with https://github.com/openmediava…74fb4b8f9d6c2b81010dd9d37.
any news on this?
No. It will be done when it is ready. There is currently no big benefit in using Debian 12. One showstopper is Salt, which needs to be adapted to Debian 12 first; otherwise you'll get spammed with Python warnings in the systemd journal. And when Salt 3007 will be released is written in the stars.
Thank you all - I was able to resolve it with the guidance of ChatGPT
And what did the solution look like? This might be helpful for other users coming across this post.
I think I can't do anything about it
That's true, first come, first served; that's how the kernel handles device names. That's why OMV uses predictable device names wherever possible. If the system (more precisely Udev) does not provide them, OMV can't do anything to prevent the problems that arise from this device name behaviour.
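You can check which predictable identifiers Udev provides on your system; these symlinks stay stable even if the kernel swaps sda/sdb between boots:

# List the stable identifiers Udev has created for the block devices
ls -l /dev/disk/by-uuid/
ls -l /dev/disk/by-id/

The /srv/dev-disk-by-uuid-... mount points you see in OMV are derived from exactly these file system UUIDs.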
Gotcha, thanks. Is there a plugin for backing up user data on a schedule?
Rsync?
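For example an rsync job in the WebUI, or your own rsync command as a scheduled task. A minimal sketch with placeholder paths:

# Mirror a data share to a backup disk; --delete also removes files
# on the target that no longer exist on the source.
rsync -a --delete /srv/dev-disk-by-uuid-SOURCE/share/ /srv/dev-disk-by-uuid-BACKUP/share/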
I forgot my OMV was visible on the internet via swag, so out of curiosity I tried many times to log in with different wrong passwords. Neither OMV nor fail2ban from swag prevented me from trying to log in, even after entering maybe 10 wrong passwords within a minute. Is there anything to modify to prevent this behavior?
I do not know how fail2ban is implemented and which services are affected (because I do not use it), but OMV uses pam_faillock to automatically lock out users for all services that rely on the Linux PAM infrastructure (which includes the OMV WebUI, SSH, FTP, ...).
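You can inspect and reset those lockout counters from the CLI; "someuser" is just a placeholder:

# Show the failed login attempts pam_faillock has recorded for a user
faillock --user someuser
# Clear the counter and unlock the account again
faillock --user someuser --reset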
Is there any downside to using the sharerootfs plugin?
No, except that you have to take care of your user data if you need to reinstall the OS. If you reinstall Debian/OMV/the OS image, your user data is lost if you do not back it up beforehand. That is the reason why OMV strictly separates OS and user data disks.
Every file system occupies storage for internal use.
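On ext4 you can see part of that overhead yourself, e.g. the blocks reserved for root and the file system's own bookkeeping (the device name below is only an example, adjust it to your disk):

# Show total, free and reserved block counts of an ext4 file system
tune2fs -l /dev/sda1 | grep -Ei 'block count|reserved'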
The /dev/mmcblk1 device contains your root file system. Because of that, OMV does not allow you to mount it: first, it is already mounted, and second, OMV deliberately does not use this file system in order to keep user data separated from OS data.
If you want to overrule this behaviour, you have to install the openmediavault-sharerootfs plugin.
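The plugin can be installed from the plugin list in the WebUI or, as a sketch, from the command line:

# Install the plugin that exposes the root file system for shared folders
apt-get update
apt-get install openmediavault-sharerootfs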
root@omv2:~# blkid
/dev/sdb1: UUID="af7f3797-5eaa-4c8b-9399-ad691e706847" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a575e392-db57-4078-97d8-6794c2bd0033"
/dev/sda1: UUID="d157ce07-c902-4b55-b858-506ca944ea91" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="dd269d41-2988-4000-bf99-7a1d9e05cb62"
/dev/sdc1: UUID="5d4371be-663f-47cf-8538-84f80d49ce1f" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="78fe20e0-01"
/dev/sdc5: UUID="7e2d4bc3-d9db-4b75-88c6-0afc33647272" TYPE="swap" PARTUUID="78fe20e0-05"
root@omv2:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=3750232k,nr_inodes=937558,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=757500k,mode=755,inode64)
/dev/sdc1 on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15151)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime,inode64)
/dev/sda1 on /srv/dev-disk-by-uuid-d157ce07-c902-4b55-b858-506ca944ea91 type ext4 (rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
/dev/sdb1 on /srv/dev-disk-by-uuid-af7f3797-5eaa-4c8b-9399-ad691e706847 type ext4 (rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
/dev/sdc1 on /var/folder2ram/var/log type ext4 (rw,relatime,errors=remount-ro)
folder2ram on /var/log type tmpfs (rw,relatime,inode64)
/dev/sdc1 on /var/folder2ram/var/tmp type ext4 (rw,relatime,errors=remount-ro)
folder2ram on /var/tmp type tmpfs (rw,relatime,inode64)
/dev/sdc1 on /var/folder2ram/var/lib/openmediavault/rrd type ext4 (rw,relatime,errors=remount-ro)
folder2ram on /var/lib/openmediavault/rrd type tmpfs (rw,relatime,inode64)
/dev/sdc1 on /var/folder2ram/var/spool type ext4 (rw,relatime,errors=remount-ro)
folder2ram on /var/spool type tmpfs (rw,relatime,inode64)
/dev/sdc1 on /var/folder2ram/var/lib/rrdcached type ext4 (rw,relatime,errors=remount-ro)
folder2ram on /var/lib/rrdcached type tmpfs (rw,relatime,inode64)
/dev/sdc1 on /var/folder2ram/var/lib/monit type ext4 (rw,relatime,errors=remount-ro)
folder2ram on /var/lib/monit type tmpfs (rw,relatime,inode64)
/dev/sdc1 on /var/folder2ram/var/cache/samba type ext4 (rw,relatime,errors=remount-ro)
folder2ram on /var/cache/samba type tmpfs (rw,relatime,inode64)
/dev/sdc1 on /var/lib/containers/storage/overlay type ext4 (rw,relatime,errors=remount-ro)
shm on /var/lib/containers/storage/overlay-containers/78300fe64abf6cc422103918a4660353502ba81e70823639e76a98832a6f6694/userdata/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=64000k,inode64)
overlay on /var/lib/containers/storage/overlay/e5ce62dc58e0d1223dcac60bb204a0e6a507f789d2993717638b5fdac5d5a82e/merged type overlay (rw,relatime,lowerdir=/var/lib/containers/storage/overlay/l/VJYNMCPKP5MHO3BQWA3UCY4D7K:/var/lib/containers/storage/overlay/l/5PQT727EUACELPSMYG3EKSXFYH:/var/lib/containers/storage/overlay/l/33RHWQRTXYGF26RUBJURN72PKC:/var/lib/containers/storage/overlay/l/H4EVLPWW3F7A3XJ523HRG4QXF4:/var/lib/containers/storage/overlay/l/RIUUVR65BJY6UGMSMOKAUO44KB:/var/lib/containers/storage/overlay/l/ZMBVIEO3F2QRAAMMK372WRXX7V:/var/lib/containers/storage/overlay/l/YXSKBHIH57DIYGYT2I3WPOP3EX:/var/lib/containers/storage/overlay/l/I3FAXZLKUGHEEKAOMNTTQGTUBL:/var/lib/containers/storage/overlay/l/DMIRHY5XVMGYQ74J6N7RB2M3AU,upperdir=/var/lib/containers/storage/overlay/e5ce62dc58e0d1223dcac60bb204a0e6a507f789d2993717638b5fdac5d5a82e/diff,workdir=/var/lib/containers/storage/overlay/e5ce62dc58e0d1223dcac60bb204a0e6a507f789d2993717638b5fdac5d5a82e/work)
root@omv2:~#
I'm a little bit confused. The screenshot in the first post absolutely does not match the output of blkid in your last post.
Could you please post
This is done by OMV, see https://github.com/openmediava…eploy/cron/05settings.sls