I tried both and both throw the same error. Even a brand-new folder with a new SMB share has no write permission.
Posts by MarcS
-
-
I am having trouble with OMV folder permissions on a filesystem created by Synology.
OMV can mount and see everything fine, but when I create shared folders + SMB shares, the permissions don't allow write access from a remote host.
Only after I run chmod 02775 on the folder does the respective SMB share become writeable.
Why is that? Do I have to run chmod manually on every folder I want to share? I thought creating a shared folder takes care of that.
This is the setup:
OMV1---Folder1 (shared folder)----SMB---+++++++++++---OMV2
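For reference, the leading 2 in 02775 is the setgid bit: files created inside the folder inherit its group, which is often exactly what Samba needs for remote write access. A minimal sketch using a throwaway directory (not a real share) to show what the mode looks like:

```shell
# Scratch directory stands in for a shared folder (assumption).
d=$(mktemp -d)
chmod 02775 "$d"
mode=$(stat -c '%a' "$d")   # GNU stat prints the octal mode incl. special bits
echo "$mode"                # 2775 -- the leading 2 is the setgid bit
rmdir "$d"
```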
-
Use this instead: du -d1 -h -x /var/ | sort -h (the -x keeps du from crossing into other filesystems). A quick look at your other output tells me it is mostly in /var though. Probably logs, Docker, or Plex.
Thanks. This command saved my day. I had my 32 GB root fs at 99%!! Nervous ... :)
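A small illustration (with made-up sizes) of why the pipeline ends in `sort -h`: plain sort would order human-readable sizes lexicographically, putting 3.0G before 512K.

```shell
# Toy du-style sizes (made up); sort -h understands K/M/G suffixes.
sorted=$(printf '512K\n3.0G\n80M\n' | sort -h)
echo "$sorted"   # 512K, then 80M, then 3.0G -- smallest to largest
```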
-
I have a NAS with 2 disks in Raid1 full of data.
Has anyone managed to migrate a Linux RAID1 to a RAID5 setup by adding one disk, while the data on the two original disks is retained? Theoretically this should be no problem, but the internet is somewhat inconclusive, so I was wondering if anyone has actually done it with all data intact.
thanks!
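For what it's worth, mdadm does support this conversion in place. A sketch of the usual sequence, with hypothetical device names (adjust /dev/md0 and /dev/sdc1 to your setup), and definitely back up first:

```shell
# A reshape rewrites every stripe -- have a backup before starting.
mdadm --grow /dev/md0 --level=5            # 2-disk RAID1 becomes a 2-disk RAID5
mdadm --add /dev/md0 /dev/sdc1             # add the third disk as a spare
mdadm --grow /dev/md0 --raid-devices=3     # reshape onto all 3 disks, data retained
cat /proc/mdstat                           # watch the reshape progress
# When the reshape finishes, grow the filesystem, e.g. resize2fs for ext4.
```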
-
As far as I understood the Debian text, Avahi provides hostnames independent of any DNS service. So it could be creating errors when omv-regen forces it to announce the new host (using the same hostname) while the old host is still connected to the network. This was my case: the old host remains on the network.
-
OK. So far it works. I did not have anything special in my old network config...
Do you know what the module avahi does? And how I could test it?
-
BTRFS is great in RAID1, as it is stable and can easily be expanded later if you want to use bigger disks.
ZFS in RAID5 (RAIDZ1) is super stable and easy to recover. But in day-to-day running it has a lot of overhead, so if performance is key it's not the first choice, although you can improve speed a bit by adding an SSD cache.
Ext4/XFS + mdadm is stable in all RAID configurations but doesn't have the newer filesystem features like snapshots, scrubbing and CoW (copy-on-write) to prevent bit rot, as BTRFS and ZFS do.
So, like everything in life, it's a trade-off.
-
OK. So all Docker containers are up and working.
I must say I am very impressed by omv-regen. It saved me a ton of time and automated 90% of the work. A big thank you for putting this together!! A massive tool.
I will monitor my system, but so far all seems stable. Maybe the network thing needs some looking at, but considering that I am running an email server and it works fine after the migration, it's pretty great.
-
The only thing that omv-regen does not configure if you select not to configure the network is interfaces
Ah, I see. I think things like the hostname should also not be configured if network is deselected.
There was definitely a network issue, as I could not reset the interface or get an IP via omv-firstaid.
Maybe when you have time you have an idea where I could reset or re-test this Avahi module. I am not knowledgeable about this.
-
Phase 7: here are some errors
Code
Regenerating node /config/system/network/dns of the database
Formatting database
Reading the value of /config/system/network/dns in original database...
Reading the value of /config/system/network/dns in actual database...
Regenerating /config/system/network/dns...
Creating temporary database...
Deleting current /config/system/network/dns node...
Copying original dns tag...
Moving dns to /config/system/network...
/config/system/network/dns node regenerated in the database.
Applying configuration changes to salt modules...
Configuring salt avahi...
[ERROR ] Command '/usr/bin/systemd-run' failed with return code: 1
[ERROR ] stderr: Running scope as unit: run-rd760474dad004213af54b57aa1267109.scope
Job for avahi-daemon.service failed because the control process exited with error code.
See "systemctl status avahi-daemon.service" and "journalctl -xe" for details.
[ERROR ] retcode: 1
[ERROR ] Job for avahi-daemon.service failed because the control process exited with error code.
See "systemctl status avahi-daemon.service" and "journalctl -xe" for details.
Salt module avahi configured.
Configuring salt hostname...
Salt module hostname configured.
Configuring salt hosts...
Salt module hosts configured.
Configuring salt postfix...
Salt module postfix configured.
Configuring salt samba...
Salt module samba configured.
Configuring salt systemd-networkd...
[ERROR ] Command 'netplan' failed with return code: 1
[ERROR ] stderr: Job for systemd-networkd.service failed because a timeout was exceeded.
See "systemctl status systemd-networkd.service" and "journalctl -xe" for details.
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan/cli/core.py", line 50, in main
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 257, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 55, in run
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 257, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 232, in command_apply
    utils.systemctl_networkd('start', sync=True, extra_services=netplan_wpa + netplan_ovs)
  File "/usr/share/netplan/netplan/cli/utils.py", line 125, in systemctl_networkd
    subprocess.check_call(command)
  File "/usr/lib/python3.9/subprocess.py", line 373, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['systemctl', 'start', 'systemd-networkd.service', 'netplan-ovs-cleanup.service']' returned non-zero exit status 1.
[ERROR ] retcode: 1
[ERROR ] {'pid': 53588, 'retcode': 1, 'stdout': '', 'stderr': 'Job for systemd-networkd.service failed because a timeout was exceeded.\nSee "systemctl sta>
Salt module systemd-networkd configured.
Salt module configuration completed.
Regenerating node /config/system/network/proxy of the database
Formatting database
Reading the value of /config/system/network/proxy in original database...
Reading the value of /config/system/network/proxy in actual database...
/config/system/network/proxy node matches original and current databases --> The database is not modified and no changes are applied to salt.
Regenerating node /config/system/network/iptables of the database
Formatting database
Reading the value of /config/system/network/iptables in original database...
Reading the value of /config/system/network/iptables in actual database...
/config/system/network/iptables node matches original and current databases --> The database is not modified and no changes are applied to salt.
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] retcode: 2
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] output:
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] retcode: 2
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] output:
[ERROR ] Command '/usr/bin/systemd-run' failed with return code: 1
[ERROR ] stderr: Running scope as unit: run-r9995a37fe0d245ddb35ba0540cf7e3be.scope
Job for avahi-daemon.service failed because the control process exited with error code.
See "systemctl status avahi-daemon.service" and "journalctl -xe" for details.
[ERROR ] retcode: 1
[ERROR ] Job for avahi-daemon.service failed because the control process exited with error code.
See "systemctl status systemd-networkd.service" and "journalctl -xe" for details.
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan/cli/core.py", line 50, in main
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 257, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 55, in run
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 257, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 232, in command_apply
    utils.systemctl_networkd('start', sync=True, extra_services=netplan_wpa + netplan_ovs)
  File "/usr/share/netplan/netplan/cli/utils.py", line 125, in systemctl_networkd
    subprocess.check_call(command)
  File "/usr/lib/python3.9/subprocess.py", line 373, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['systemctl', 'start', 'systemd-networkd.service', 'netplan-ovs-cleanup.service']' returned non-zero exit status 1.
[ERROR ] retcode: 1
[ERROR ] {'pid': 71640, 'retcode': 1, 'stdout': '', 'stderr': 'Job for systemd-networkd.service failed because the control process exited with error code.>
[ERROR ] Command 'timedatectl' failed with return code: 1
[ERROR ] stderr: Failed to query server: Failed to activate service 'org.freedesktop.timedate1': timed out (service_start_timeout=25000ms)
[ERROR ] retcode: 1
[ERROR ] An exception occurred in this state: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/salt/state.py", line 2401, in call
    ret = self.states[cdata["full"]](
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 149, in __call__
    return self.loader.run(run_func, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 1234, in run
    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 1249, in _run_as
    return _func_or_method(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/loader/lazy.py", line 1282, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/salt/states/timezone.py", line 70, in system
-
I assume you were able to start the regeneration despite the warning about the order of the hard drives.
Yes, correct. The warning was there but it still worked.
-
OK. I managed to reboot, after which the drives unlocked and the shared folders are all listed. I haven't tested anything yet, but at least the shared folders are in the database.
I managed to fix the network issue manually. I still think there is a bug inside omv-regen, as it should not have configured my network.
All seems to be there now apart from the Docker containers: since I migrated the server from ARM -> x86, the containers won't come up. I guess I need to recreate them all...
Still testing stuff and working on Docker, but yes, LUKS needs a reboot prior to OMV-Reconfig doing all the mounting work.
-
Unfortunately it didn't work. I think there were two issues:
1) LUKS: omv-regen tries to mount things while the disks are still locked. Crypttab was copied over, but a reboot is needed before the unlock takes effect. Can omv-regen reboot and then continue its process?
2) Network setup: omv-regen did a full network re-configuration although I specifically deselected network configuration. I suddenly saw all the Docker networks on the host and could not get a host IP address.
That screwed up the entire system, and the network could not be reset with omv-firstaid.
-
OK. Many thanks.
I will try.
Thanks a lot.
-
You mean the path in the crypttab definition containing spaces? That's very unlikely, as users create crypttab manually and will choose simple paths.
-
Crypttab
If I understand correctly, the third field of the crypttab file is always the path of the file with the key. Is that so?
If this is true I can solve it easily. I would simply have omv-regen copy those files to the backup and then to the new system at the original path.
Edit: The problem would be if there is a path with spaces. I understand why ryecoaaron complains about this.
yes exactly.
copy crypttab
copy each key to the same folder as in old crypttab
You also need to allow for a change of disk order, because this can change in the new system.
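The steps above rely on pulling the third field out of each crypttab line, which only works while key paths contain no spaces — exactly the concern raised earlier. A small sketch with a made-up entry:

```shell
# Made-up crypttab line: <target> <source> <keyfile> <options>
line='sda-crypt UUID=xxxxxx /PATH/key1 luks'
# Whitespace-splitting picks out field 3; breaks if the path has spaces.
keyfile=$(echo "$line" | awk '{print $3}')
echo "$keyfile"   # /PATH/key1
```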
-
I can't find information about crypttab on the official LUKS website. https://gitlab.com/cryptsetup/…tup/blob/master/README.md
-
Here is an example of the crypttab format. You can have a separate key for each disk or one key for all disks.
Code
# <target name>  <source device>  <key file>   <options>
sda-crypt        UUID=xxxxxx      /PATH/key1   luks
sdb-crypt        UUID=xxxxxx      /PATH/key2   luks
The UUID identifies the encrypted source device; the target name (e.g. sda-crypt) is the unlocked device that is later mounted and used by OMV and shown under /srv.
-
The key (which is a file) acts instead of the password. You can unlock the LUKS drives with either one.
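To illustrate, a sketch with made-up UUID and key path; `cryptsetup open` takes an optional key file, and prompts for the passphrase when none is given:

```shell
# Both commands unlock the same LUKS container (hypothetical names).
cryptsetup open /dev/disk/by-uuid/xxxxxx sda-crypt --key-file /PATH/key1
# Without --key-file, cryptsetup asks for the passphrase interactively:
cryptsetup open /dev/disk/by-uuid/xxxxxx sda-crypt
```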