dumb question: does omv5 still have the omv-update command? If not, is there a replacement? I like to manage my system via terminal.
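For what it's worth, a hedged sketch of the terminal route on OMV 5 — `omv-upgrade` is the helper that replaced `omv-update`, but if it's missing on your install, the plain apt commands underneath do the same job:

```shell
# OMV 5 ships an omv-upgrade helper (replacement for the old omv-update):
sudo omv-upgrade

# which is essentially the standard Debian workflow:
sudo apt-get update
sudo apt-get dist-upgrade
```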
I've done this before, just take lots of screenshots like gderf mentioned. When you add the drives to your new install everything should be just as it was before.
All the time. I do have a cluster setup but I make backups whenever I make big changes that I may want to revert to at a later date.
Yeah, I think upgrading from 4 to 5 is a pretty big change. I might do both a backup and a snapshot, even though it's going to go smoothly without a hiccup, right?
I'm not worried about losing my data but I don't feel like starting over from scratch on my OMV install. I recall it took some fiddling to get the PCI passthrough for my HBA to work like it should.
You can. If you do, when you restore to a snapshot, the VM will be running. Some apps may not like the gap in time. So, I typically don't do this.
Cool. I made a few snapshots and noticed it's a lot faster if you don't snapshot the RAM.
Do you ever use the full backup feature? I've used backup images to replicate VMs on another node (manually, as I'm not running a cluster).
Actually, I just mean a snapshot, not a backup in snapshot mode. You take a snapshot before you make changes. If they go bad, you click Rollback in the Snapshot tab of Proxmox and start your VM. Done. No restore needed. I have done this thousands of times on Proxmox and VMware.
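The same snapshot-then-rollback loop is available from the Proxmox host shell via the `qm` tool (the VM ID 100 and snapshot name below are placeholders):

```shell
# take a snapshot before making changes
# (add --vmstate 1 to include RAM; omit it for a faster, state-less snapshot)
qm snapshot 100 pre-upgrade --description "before OMV 4 to 5 upgrade"

# if the changes go bad, roll back and start the VM again
qm rollback 100 pre-upgrade
qm start 100
```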
OK, I see what you mean now. Do you check the box to include the RAM? Can you snapshot and rollback with the guest running?
It shouldn't as long as the install isn't too custom.
Just take a snapshot. If the upgrade fails, revert to snapshot. It won't snap the passthrough drives though.
The install is not custom at all. I know the Proxmox snapshot is only for the OS. I back up all my VMs prior to any updates (snapshot mode), but I've never had to restore one yet. Thanks.
I'm running OMV 4 w/ the pve kernel as a guest on Proxmox 6. Can I use this script to upgrade to 5? Does it matter that I originally installed Debian and then OMV on top of it? Since I'm running a VM, I assume that if anything gets screwed up I can just restore the VM from backup. Even so, I might clone the VM and try running the upgrade on the clone first, although the clone won't be using the PCI passthrough HBA card. The only plugins I'm using are resetperms, unionfilesystems, and snapraid. Finally, do I need to unmount all the data drives before running the upgrade?
I'm not entirely sure what determines snapraid sync speeds, but here's my experience: the more drives you have in the pool, the faster it goes. But also more drives mean that more data is synced. For example if you have 3 drives and add a 100MB file, it will sync 300 MB. I'm up to 9 drives in my pool and the sync speeds are often >1000 MB/s and I use WD reds and whites so I know the drives aren't that fast. If I add a 100MB file, it will sync 900MB.
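The scaling described above can be sketched as a bit of arithmetic — assuming parity recomputation touches the same block offsets on every data drive, the reported sync volume grows with the drive count:

```python
def sync_volume_mb(num_data_drives: int, new_data_mb: int) -> int:
    """Rough model, not exact accounting: SnapRAID reads the matching
    blocks from every data drive to recompute parity, so the reported
    sync volume scales with the number of drives in the pool."""
    return num_data_drives * new_data_mb

print(sync_volume_mb(3, 100))  # 3 drives, 100 MB added -> 300
print(sync_volume_mb(9, 100))  # 9 drives, 100 MB added -> 900
```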
Now, it's been years since I did an initial sync. That might just be a slower process and you need to wait it out. Also, are you using pre-hash? I think your 53 MB/s is probably just the hashing step; when that completes and the sync step runs, it will go faster. The pre-hash is purposely slower.
From the manual:

    -h, --pre-hash
    In "sync" runs a preliminary hashing phase of all the new data to have an
    additional verification before the parity computation. Usually in "sync" no
    preliminary hashing is done, and the new data is hashed just before the
    parity computation when it's read for the first time. Unfortunately, this
    process happens when the system is under heavy load, with all disks spinning
    and with a busy CPU. This is an extreme condition for the machine, and if it
    has a latent hardware problem, it's possible to encounter silent errors that
    cannot be detected because the data is not yet hashed. To avoid this risk,
    you can enable the "pre-hash" mode and have all the data read two times to
    ensure its integrity. This option also verifies the files moved inside the
    array, to ensure that the move operation went successfully, and in case to
    block the sync and to allow to run a fix operation. This option can be used
    only with "sync".
I always use sync with pre-hash.
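Per the manual excerpt above, that's just the `-h` flag on the sync run:

```shell
# sync with the preliminary hashing pass (reads all new data twice)
snapraid -h sync

# equivalent long form
snapraid --pre-hash sync
```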
I use Q35 machine type and bios boot and it works great. My LSI HBA is passed through as well.
That is true. The plugin would overwrite any changes. But why administration via the terminal? What is the plugin missing?
I configure everything using the plugin, but prefer to run all syncs and scrubs via terminal. I can detach from and re-attach to the terminal session as desired.
I run mergerfs configured by hand, not with the plugin. I use the SnapRaid plugin but the only time I use it is when I add a new drive which is about once a year. I could do that by hand as well.
I configure mergerfs by hand in /etc/fstab as well due to plugin only configuring on the drive level, not the share level.
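For anyone curious what a hand-configured share-level pool looks like, here's a hypothetical /etc/fstab line (the disk paths, pool mount point, and option choices are examples, not a recommendation):

```
# /etc/fstab -- pools /mnt/disk1/media and /mnt/disk2/media into /srv/media
/mnt/disk1/media:/mnt/disk2/media  /srv/media  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0  0
```

Pointing the branches at the share directories instead of the disk roots is what gets you share-level pooling, which the plugin's drive-level config can't express.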
I don't think it's possible to configure snapraid by hand since the plugin owns the configuration file? So i just use the plugin to make configuration changes - all other snapraid administration is done via terminal.
Best setup is whatever works for you. I run all my snapraid jobs manually so I know it won't sync when it shouldn't.
I always run snapraid diff to make sure a bunch of files aren't missing or got wiped out, then run the sync.
Then periodically run status and do a scrub if needed.
Others do it differently and it works for them.
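The manual routine described above boils down to a few commands (the scrub percentages are just example values):

```shell
# check what changed before trusting the sync
snapraid diff

# if the diff looks sane (no mass deletions), sync with pre-hash
snapraid -h sync

# periodically: check array state, then scrub a slice of it if needed
snapraid status
snapraid scrub -p 5 -o 10   # scrub 5% of blocks older than 10 days
```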
My recommendation would be to drop OpenVPN and try WireGuard.
Err... I did look at the journal by typing: journalctl -xb
But the output has 1,633 lines. What error message do I need to look for? I have no idea.
Look for something related to your disks. Maybe zip it up and attach it here so others can see it.
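To narrow down those 1,633 lines, a few filtering options (the grep pattern is just a starting point; device names vary by system):

```shell
# only show messages at priority "err" or worse from the current boot
journalctl -b -p err

# or grep the boot log for disk-related messages
journalctl -b | grep -iE 'sd[a-z]|ata[0-9]|mount|fail|error'

# save the whole boot log to a file you can attach to a forum post
journalctl -b > boot-journal.txt
```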
No, actually I'm using a Win10 PC as the host for a virtual machine. I run OMV 5 within the Win 10 OS, through VirtualBox.
Right, and within vbox, you can use bios boot or EFI for the virtual machine.
Point being, if it's EFI, I don't think the update-grub command would work.
I would have looked at the journal to see what the issue was. (As stated by the emergency mode message)
Something else to try - select the box to say that the drive is hot-pluggable?
You are using bios boot, right?
Of course it will work, but the consensus these days is to shy away from hardware RAID. The controller itself is a single point of failure.
I've run bare-metal OMV and now virtualized OMV on an Intel 600p M.2 drive and have seen no wear based on SMART data.
So I am struggling with what power supply to get. I have an SSD and 8 storage drives, so I need a power supply with enough modular rails to support all of that. Can anyone recommend a good one? I looked into the ones mentioned here, but I didn't see any that had enough rails to support that many drives.
@jollyrogr, the case I am using for the build is: https://www.amazon.com/gp/prod…_asin_title?ie=UTF8&psc=1
Edit: I did find these 2 that would support 10x SATA. Thoughts? The Platinum version is a $20 difference from the Gold; is the Platinum worth the extra money, or is the Gold version enough for my needs?
I was away and you probably have it figured out already, but since that case takes an ATX PSU, I would just use a quality modular ATX PSU. I have an EVGA in my desktop PC that works very well.
The user guide, which is linked in my signature, will be helpful for you.
I thought you were nuts until I found that there is a board setting to display signatures. I think default is to not display.