do yourself a favor and get a UPS...
ryeco knows a lot more than me, but I was going to say the same thing.
I have UFS configured in /etc/fstab and do not use the plugin GUI, so that's another option.
I used to reboot it without issues before.
Anyway, how do I configure it to automount?
I can think of several ways:
you can use the automount option with a systemd mount entry in fstab
you can configure the mount as a systemd unit and use an automount unit
you can use autofs
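A minimal sketch of the first option, assuming an NFS share; the server name, export path, and mount point below are placeholders for your own setup:

```shell
# /etc/fstab — NFS share mounted on demand via systemd automount.
# "noauto" skips mounting at boot; "x-systemd.automount" generates an
# automount unit that mounts the share on first access, and
# "x-systemd.idle-timeout" unmounts it again after 10 minutes idle.
# Hostname and paths are placeholders.
server.local:/export/share  /mnt/share  nfs  noauto,x-systemd.automount,x-systemd.idle-timeout=600  0  0
```

After editing fstab, run `systemctl daemon-reload` so systemd regenerates the mount and automount units.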
This line is not a comment
UUID=d8bb4ceb-0689-4a76-a669-b10fcfc5336f / ext4 errors=remount-ro 0 1
No, I am mounting the NFS share on a different OMV box.
It used to work before I disconnected some disks not related to the share.
That line mounts your root drive and is fine. Don't mess with it unless you want to make your machine un-bootable. If you reboot an NFS server or unmount the share location, you need to remount the NFS on the client unless you have fstab configured to automount.
The "swap on" line is not an error but merely a comment in fstab.
Are you creating NFS shares on OMV and then mounting them on the same OMV?
I'm no expert but I would think containers could be moved? I have OMV running in a VM on proxmox, and my VPN (wireguard) is running in a discrete VM on proxmox. Also have another VM for the DNS server (pihole). I like having the other services separated from the NAS server.
You don't "move" it.
1. Install Proxmox on bare metal
2. Create VM for OMV and install it on Proxmox
When I migrated I also changed to a new box, so I had very little downtime. Just during the time it took to move the data drives to the new box and get them added into OMV.
could be a permissions issue within glftpd. glftpd users are separate from system users
No, this did not work in combination with glftpd. The bind mount is visible in the CLI but not when doing a directory listing in glftpd. Don't ask me why.
I had the same idea.
So when a user logs in to the FTP they couldn't list the directory of the bind mount unless you mounted with fuse?
Wonder if it's a permissions issue?
Disk failure is always an option.
why not configure it in /etc/fstab? Just do it outside of the region controlled by omv. Also, why fuse?
Doesn't this work?
/some/where /else/where none bind 0 0
can also use the mount command,
mount --bind /some/where /else/where
OMV doesn't let you create shared folders on the rootfs.
Correct, if you pass through the controller you will have SMART monitoring in OMV, although that can be accomplished in proxmox if needed. If you pass the disks through, you can have a maximum of 6 SATA disks and 14 SCSI disks. Passing through the controller means your NAS will be able to have as many drives as the controller supports.
If you pass the controller through to a VM, then it will only be available for use by that VM.
I would not do hardware raid.
I don't know... OMV 4 would install without any issues. Proxmox (or another virtualization software) was the first thing I thought of, but I found it difficult to implement.
I guess I should pass the hard disks through to OMV.
I was stuck on two things. First, what would happen if I lost the OS disk: I did some testing, and recovering the VMs is difficult.
Second, I couldn't manage to get a fixed IP address for OMV. On bare metal I can force the DHCP server to assign it the same address.....
I don't know what to say; I think recovering VMs in Proxmox is super easy, assuming you have the images backed up on another drive. (Another popular option many have used for years now is to run OMV from a USB flash drive, making it easy to back up and replace should they experience a hardware failure.)
I have configured VMs as static or DHCP with no issue either way. It's pretty seamless. At some point I want to get link aggregation set up.
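For the fixed-address question above: you can either make a DHCP reservation on the router keyed to the VM's virtual MAC (which Proxmox keeps stable), or set a static address inside the guest. A minimal sketch for a Debian-based OMV guest; the interface name and all addresses are placeholders, and note that OMV prefers network changes to go through its own tooling (e.g. `omv-firstaid`) rather than hand-edited files:

```shell
# /etc/network/interfaces fragment on the OMV guest (Debian-style).
# Interface name and addresses below are placeholders for your network.
auto ens18
iface ens18 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```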
The most difficult part of running virtualized OMV (and this is specific to my setup) was configuring PCI passthrough for the SAS HBA that all the data drives are connected to. Passing individual disks through to VMs is much easier, but I wanted OMV to see the HBA itself.
That's odd. Does your server have an unusual network adapter that is not natively supported? Are you not using DHCP on your network? I still think OMV would serve you well. Myself, I run Proxmox on the bare metal and OMV as a virtual machine for NAS. Other servers run as separate VMs. Many folks have good success running separate services using Docker in OMV.
ok so if you installed using omv iso, you couldn't ssh to it? Did you have a monitor hooked up for the install, so you could login directly?
Thanks for your replies. I know that the parity disk has to be as large as the largest data disk. I read that I could set up the two 500 GB drives plus one of the 3 TB drives as data, with the other 3 TB for parity. But that doesn't add up to me, unless it stops backing up at some point.
In reality I wanted to build a rig with an Intel J5005 and 16 GB of RAM at most. But in the meantime I was reading about FreeNAS... and ZFS... and its need for RAM, which really should be ECC RAM... So I bid on this Dell T320 as a joke, and it turned out I won the auction (211 euros)...
So what can you tell me? As far as I am concerned, OMV will tick all the boxes. Or should I go some other route, maybe just plain Debian with Xfce? ClearOS?...
Two 500 GB drives + one 3 TB, with the other 3 TB for parity, will work just fine using SnapRAID. Drives are cheap enough, though, that I'd probably pitch the 500s and get some more 3 TB drives.
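For that drive mix, a SnapRAID config might look roughly like the following sketch; all mount points are placeholders, and the directive names (`parity`, `content`, `data`) are from the SnapRAID manual:

```shell
# /etc/snapraid.conf sketch: the 3 TB parity drive protects the other
# three data drives. Mount points below are placeholders.
parity /srv/dev-disk-parity/snapraid.parity

# Content files (the array's state) are kept on more than one drive
# for redundancy.
content /srv/dev-disk-data1/snapraid.content
content /srv/dev-disk-data2/snapraid.content

# Data drives: two 500 GB + one 3 TB.
data d1 /srv/dev-disk-data1/
data d2 /srv/dev-disk-data2/
data d3 /srv/dev-disk-data3/
```

After that, `snapraid sync` builds the initial parity, and periodic syncs keep it current.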
I think OMV will be perfect for you. Why would you need Xfce? OMV is a file server and designed to run headless.