Posts by UKenGB

    Well I have to say, that was almost too simple.


    Upgrading from OMV 7 to 8 simply ran without any problems, and now I'm searching through it trying to find whatever went wrong that I didn't catch. I haven't found anything yet.


    It was quite unexpected, as going from 6 to 7 was something of a nightmare, so this was a very pleasant surprise.


    I know it's so annoying when you're having terrible problems and others smugly say "it all works great for me". I'm normally in the former group, but I'm very happy to be in the latter this time. Well done to those who put this upgrade together.

    To clarify, Time Machine operates differently when backing up to a network share compared to local storage. In the former case it places everything in a sparsebundle disk image; in the latter it uses regular files and folders, so moving a drive between those two 'connections' simply would not work.


    The actual disk format of any network storage is transparent to the 'client', so whether OMV storage uses ext4, XFS, etc. makes no difference; TM is simply operating over SMB and knows nothing about the disk format underneath. However, the format can affect how files are treated, which is why TM uses a sparsebundle in this situation: that 'disk image' can be created as an APFS volume, which TM knows how to deal with, letting it use the features it requires (such as snapshots and hard-linked directories) while being abstracted from the actual disk format of the storage, and even from the network protocol to a certain extent.


    A TM sparsebundle consists of a lot of relatively small files, for which ext4 is well suited; it seems to be the recommended format for TM storage on a network share.

    This concerns my OMV server which is a tower PC powered by Intel i9 with 32 GB RAM, 6 x 3.5" HDs, 2 x 2.5" HDs and boots OMV 7.7.17-1 (Sandworm) from a 120 GB NVMe M.2 (with 90 GB still free).


    Having updated from OMV 6 -> 7, I duplicated a data disk in order to replace the source with the destination. All this was completed and the OMV server then ran for several days without issue.


    I then added the second 2.5" HD, a brand new Seagate 2TB (to use for Time Machine backups from several Macs), and created an ext4 filesystem on it. I was first surprised by that being such a slow process, writing lots of inode-table output to the screen. Most of my disks use XFS filesystems, which are created almost instantaneously, and I don't recall ext4 creation producing all this output or taking so long.
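
    For what it's worth, that inode-table output is normal for mkfs.ext4 on a large disk; mke2fs can defer that work with its lazy-init options so the format returns quickly. A sketch of the idea, run against a small throwaway file image rather than a real disk so it's safe to try (the path is just an example):

```shell
# Create a small file-backed image to experiment on (NOT a real disk)
truncate -s 128M /tmp/ext4-test.img

# -F forces mkfs to work on a regular file; lazy_itable_init=1 and
# lazy_journal_init=1 defer inode-table/journal initialisation to the
# kernel at first mount, so mkfs itself should finish much faster
mkfs.ext4 -F -q -E lazy_itable_init=1,lazy_journal_init=1 /tmp/ext4-test.img
```

    On a real multi-terabyte drive the difference is far more noticeable; the deferred initialisation then happens in the background after the first mount.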


    Anyway, I set it up as a shared folder and configured it for SMB - with Time Machine support.
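
    For anyone finding this thread later: enabling Time Machine support on an OMV SMB share boils down to Samba's vfs_fruit options. The share name and path below are placeholders, and this is only a sketch of the relevant smb.conf fragment, not OMV's exact generated output:

```
[TimeMachine]
  path = /srv/dev-disk-by-uuid-XXXX/timemachine   ; placeholder path
  vfs objects = catia fruit streams_xattr
  fruit:time machine = yes
  ; optional cap on how large the sparsebundle may grow
  fruit:time machine max size = 1T
```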


    On a Mac, in TM settings, I was able to select that OMV share and started the backup. All looked good.


    Until it failed part way through, complaining "The backup disk image could not be created."


    On the OMV server, that disk was now for some reason effectively read only, even though it wasn't displayed as such anywhere. I was able to enter the TM folder, and ls showed the list of files and folders, but I could not create anything from the CLI; trying to create a file or folder just resulted in an error about the filesystem being read only.


    Now it gets weirder.


    The other 2.5" HD was in an even worse state. A simple ls took ages before finally producing the error: ls: reading directory 'OMB/': Input/output error


    That disk (called OMB) has been in use for a couple of years for OMV system backups, no problem. Now, after an attempt to use a completely different HD as a TM backup target, the TM backup fails, leaves its own drive unwritable, and this one is apparently completely borked as well.


    I futzed around with both these drives for some time, but was unable to get anywhere with them, both seemingly completely failed. Actually I unmounted OMB (the OMV backup HD, not shared in any way), but was then unable to remount it as OMV couldn't see it and mounting from the CLI failed, saying it couldn't be found.


    As a last resort, I rebooted the OMV server and once restarted, both HDs were magically available again with no apparent problems. I restarted the TM backup on the Mac and it completed without error.


    That was yesterday. Today, I ran the TM backup again on the Mac. It got part way through and failed again for exactly the same reason as yesterday. Checking on the OMV server, in the GUI the OMV backup disk (OMB) appears ok, but the TM target disk is now 'Missing'. In the CLI, trying to use ls on OMB produces the same 'Input/output error' as yesterday.


    I'm speculating that I can now do nothing with these disks until I reboot the OMV server.


    So I'm rather wondering what on earth is going on. Not only is a Time Machine backup to a TM-enabled OMV SMB share borking that disk, it is also borking another HD whose only connection is that they are both 2.5" drives residing in the same dual-HD bay, but with separate connections to different SATA ports; as far as the OS is concerned, they are not connected in any way. Oh, and they're both ext4, although so is the boot drive, so I doubt that's the cause of anything, and neither is even half full. There's plenty of space on everything.


    Can both these 2.5" HDs have failed simultaneously? Unlikely, and twice now the OMB failure (while it was doing nothing) has coincided with the TM backup failure, with both apparently ok after a reboot, which rather suggests it's not a hardware problem.
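
    If it happens again, it may be worth grabbing the kernel log before rebooting: a shared power or cabling glitch in the dual-drive bay would normally show up there as ATA link resets or bus errors just before the filesystems go read-only. The device names below are placeholders for the two 2.5" drives:

```
dmesg -T | grep -iE 'ata[0-9]|sd[a-z]|i/o error' | tail -50
smartctl -H /dev/sdX        # overall SMART health verdict, per drive
smartctl -l error /dev/sdX  # the drive's own internal error log
```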


    Sorry it's a bit long, but just trying to explain it all.


    Anyone shed any light on what's going on here?

    Ha, ok so we're both oldies with similar ages of experience. :)


    I agree docs should be written for noobs to understand and I accept that was not aimed at me.


    I'll say again, I am truly appreciative of the work you do. OMV is fantastic and enormously useful. So kudos to you both. :thumbup:


    Doesn't mean it won't frustrate me occasionally though. ^^

    Making some changes to my OMV config after upgrading to v7, I noticed the mergerfs volumes shown in Storage/File Systems had Labels but no Tags.


    I have come across some discussion about the use of Labels, with the consensus from the OMV cognoscenti being that Labels are superfluous as OMV uses UUIDs.


    Despite that, I like Labels, but I also like to use Tags so I can easily identify a volume at a glance. I realise neither is relevant to system identification for mounting etc., but they help visually and, as I said, OMV itself displays Labels for mergerfs volumes (created, presumably, from the mergerfs Name).


    However, when I selected one of the mergerfs filesystems and tried to edit it in order to add a Tag, the Edit icon was disabled/greyed out. I then clicked the inverted-triangle (Show details) icon and was greeted with a big red error announcing it is not a device. True, it's not, but…


    It made me wonder why the GUI allows you to click an icon/button that simply shows a big error, yet disables the Edit button that would allow you to set a Tag - which, incidentally, is perfectly possible by editing the filesystem's mntent entry in config.xml.


    So the advice is not to use Labels, yet in the GUI mergerfs volumes show Labels and Tags cannot be added, which is somewhat mixed up.


    I have actually achieved all I needed, but it seems to me it would be better if, for non-device filesystems, the 'Show details' button was DISABLED and the Edit button ENABLED, so the user IS allowed to edit that filesystem entry and add a Tag there, rather than having to edit config.xml.


    Just a suggestion. :)

    Thanks for qualifying me as a noob when I was dealing with Unix possibly before you were born, but I'm not a mind reader and cannot second-guess what a 'developer' meant or intended. Terminology is often inconsistent; 'cloning', for example, actually means making an identical copy. It does not specify the means of doing that, and in fact it is often used in the context of computer drives to describe the process of copying FILES to a different disk.


    I did also ask about disk sizes, as dd cannot directly clone to a smaller disk. In the absence of any response to the contrary, I tried it anyway to see whether this plugin deals with that gracefully, and after many hours I can confirm that no, it does not overcome that limitation, provides no warning about it, and simply produces an unusable disk.
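
    For anyone else heading down this road, the size problem is easy to check up front: a block-level clone copies the whole device, so the destination must be at least as large as the source regardless of how little data the source holds. A minimal sketch of the check, shown here against two throwaway files standing in for disks (with real disks you would use blockdev --getsize64 /dev/sdX instead of stat):

```shell
# Stand-ins for a larger source and smaller destination (scaled right down)
truncate -s 2M /tmp/src.img
truncate -s 1M /tmp/dst.img

src_bytes=$(stat -c %s /tmp/src.img)
dst_bytes=$(stat -c %s /tmp/dst.img)

# dd copies every block of the device, so a smaller destination can
# never receive a complete clone, however empty the source filesystem is
if [ "$dst_bytes" -lt "$src_bytes" ]; then
    echo "destination too small for a block-level clone"
fi
```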


    No matter, I wanted to confirm it either way and will now use a different method of copying the drive; hopefully this thread will prove useful to others attempting to fathom the operation of this plugin.


    None of this means I don't appreciate the work put in to make OMV what it is. Sometimes though, digging out little nuggets of important information is more painful than it needs to be.

    Thanks, yes, understood, but the intended destination had just been erased in preparation for this. However, I had also created an XFS filesystem on it, and that is what prevented the disk appearing in the destinations pop-up. Wiping it again allowed it to appear, and cloning is now proceeding.


    However, the whole process is not clear. I get the need for safeguards to prevent inadvertent loss of data, and once one knows, it seems clear enough, but when first presented with a disk-copying procedure that lists no possible Destination, it is rather obscure; it could so easily have been made more understandable with some basic explanation on screen.


    Thanks anyway for prompting me to realise that having a filesystem on the disk was the problem.
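
    For the record, what makes a disk eligible again is removing its filesystem signature; OMV's quick wipe does essentially what wipefs does. A sketch of the effect, demonstrated on a file-backed image rather than a real disk (the path is a throwaway example; ext4 is used for the demo simply because mkfs.ext4 is universally available):

```shell
# File-backed image standing in for the destination disk
truncate -s 64M /tmp/dest.img
mkfs.ext4 -F -q /tmp/dest.img   # the image now carries an ext4 signature

wipefs /tmp/dest.img            # lists the ext4 signature it found
wipefs -a /tmp/dest.img         # erase all signatures
wipefs /tmp/dest.img            # should now print nothing at all
```

    With the signature gone, tools that refuse to overwrite an existing filesystem will treat the device as blank again.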

    Jumping in to this as I want to clone a data disk to another disk. Both disks are fine and seen by OMV, the destination being completely empty. When I enter Service/Disk Clone, I can select the source drive, but when I click on Destination, nothing happens: no popup from which to select the destination, even though it's all in red saying it is required.


    So something is screwy. Is it possible that it requires the destination to be exactly the same size as the source? In my case it is only half the size, but still 3 times larger than the data actually on the source.


    I could manually set up rsync, but an easy-to-use OMV plugin seemed the obvious way to do it. Why is it failing so spectacularly, not even allowing me to specify a destination?

    The reason I thought that is that the docs for v7 state the ext4 defaults as:-


    Code
    OMV_FSTAB_MNTOPS_EXT4="defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0"

    which I think uses the deprecated old external quota file method.


    However, I just remounted that drive and its new entry in fstab is now:-


    defaults,nofail,user_xattr,usrquota,grpquota,acl


    so just the newer scheme and not the older external file method that has been deprecated. It is possible that disk was originally set up when first installing OMV, possibly even before v5, so this was probably a hangover from that. I guess unmounting and remounting was probably all that was needed.


    Live and learn. ^^

    I rebooted in the hope the error was not serious, but there was no obvious change.


    I tried the 6-to-7 upgrade fix script, which also seemed to encounter errors and brought no improvement.


    I then ran omv-upgrade in the hope that it would complete the process with all packages.


    Took an age to run but finished eventually and I rebooted.


    That all seemed to go well and I can log in to the GUI, which shows that I am now running 7.7.17, with no errors showing.


    Which all looks good, but I've no idea why the process was so fragmented, with multiple failures. It doesn't leave me feeling confident about any future upgrade, e.g. to OMV 8, so if anyone can shed any light on what was going on, it would be much appreciated.

    Having checked all I could, I ran omv-release-upgrade. It seemed to go well until:-


    Code
    Selecting previously unselected package linux-image-6.1.0-39-amd64.
    Preparing to unpack .../18-linux-image-6.1.0-39-amd64_6.1.148-1_amd64.deb ...
    Unpacking linux-image-6.1.0-39-amd64 (6.1.148-1) ...
    dpkg-deb (subprocess): decompressing archive '/tmp/apt-dpkg-install-kxR5XM/18-linux-image-6.1.0-39-amd64_6.1.148-1_amd64.deb' (size=68389040) member 'data.tar': lzma error: compressed data is corrupt
    dpkg-deb: error: <decompress> subprocess returned error exit status 2
    dpkg: error processing archive /tmp/apt-dpkg-install-kxR5XM/18-linux-image-6.1.0-39-amd64_6.1.148-1_amd64.deb (--unpack):
     cannot copy extracted data for './lib/modules/6.1.0-39-amd64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko' to '/lib/modules/6.1.0-39-amd64/kernel/drivers/net/ethernet/intel/i40e/i40e.ko.dpkg-new': unexpected end of file or stream
    Preparing to unpack .../19-linux-image-amd64_6.1.148-1_amd64.deb ...
    Unpacking linux-image-amd64 (6.1.148-1) over (6.1.90-1~bpo11+1) ...

    Not sure where the error started and ended, so I grabbed a couple more lines, but the error is clear to see, and the process ended with:-


    Code
    Errors were encountered while processing:
     /tmp/apt-dpkg-install-kxR5XM/18-linux-image-6.1.0-39-amd64_6.1.148-1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)


    That seems to imply corruption in a downloaded file, or am I misreading it?


    The machine is still running and I have ssh access, but cannot access the GUI.


    Can anyone suggest where to go from here? Please?
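
    In case it helps anyone else who lands here: an lzma "compressed data is corrupt" error from dpkg-deb usually means the downloaded .deb itself is damaged, so the standard recovery is to throw away the cached archives, fetch fresh copies and let dpkg finish what it started. Roughly:

```
apt-get clean          # discard cached .deb files, including the corrupt one
apt-get update
apt-get -f install     # re-download and complete the interrupted install
dpkg --configure -a    # configure anything left half-installed
```

    Once apt and dpkg report a clean state, re-running the release upgrade should pick up where it left off.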

    Nice and simple. Thanks.


    I've unmounted that drive now, ready to run the upgrade. I'll add that plugin once the upgrade has finished and before I mount that drive.


    Still a puzzle, though, why the latest version of OMV defaults to using options that are deprecated.

    Preparing to upgrade OMV 6->7 and, as suggested, first ran omv-salt stage run deploy, which resulted in 2 failures:-


    An ext4 quota error, due to OMV using deprecated external quota files rather than the built-in ext4 quota feature, about which I opened a separate thread.


    Then there's this one:-


    This seems to imply that omv-salt stage run deploy is actually issuing an invalid command. Why would it be doing that, and how can I prevent it? Can it be edited? Should that command not be run, or a different command instead?

    I'm investigating updating OMV 6->7 and ran omv-salt stage run deploy, which produced a lot of output, including a failure regarding ext4 quotas and the following error:-

    Quote

    quotacheck: Your kernel probably supports ext4 quota feature but you are using external quota files. Please switch your filesystem to use ext4 quota feature as external quota files on ext4 are deprecated

    The drive in question is a backup drive and needs no quotas, so I turned quotas off, but fstab and mtab still contain the external file options, and even after a reboot omv-salt stage run deploy still shows the error.


    What's the best way to remove those deprecated options so no more quotacheck errors?


    Looking further into this, the docs for OMV7 still show the same default mount options for ext4, which means the error will persist unless the OMV config is manually edited. Should the latest version not have these defaults modified so there is no error?
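
    For anyone who wants to override the shipped defaults rather than wait for them to change: OMV reads these variables from /etc/default/openmediavault, so something along these lines, followed by regenerating and redeploying the fstab config, should do it. Treat this as a sketch; the exact option string is my guess at a non-deprecated equivalent, so check it against your own needs first:

```
# /etc/default/openmediavault
OMV_FSTAB_MNTOPS_EXT4="defaults,nofail,user_xattr,usrquota,grpquota"

# then regenerate and apply the fstab configuration
omv-salt stage run prepare
omv-salt deploy run fstab
```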

    The admin user is only used for web UI login and does not get any emails.

    So I guess it is some software with a UI, and the admin user, that is sending emails to you. You never mentioned what the content of these emails is; that might help to identify their creator.

    I have email notifications set up to go to admin@home (which works), but these emails were being sent to admin@macserve.home, which I may have set up in OMV years ago but switched to @home as @macserve.home did not work. So I'm wondering if this loop has been going on for years.


    Unfortunately I was unable to actually look at any of the messages to help discover their origin.


    Anyway, in my attempts to reconfigure postfix on the server, a small change to test something caused 517 emails to suddenly be delivered and the loop stopped. They were rejected-mail error messages that had obviously been looping around for quite some time. Anyway, the loop was broken and the network is as quiet as it should be.


    Also, I have now been able to modify postfix to accept mail for user@macserve.home, so there should never be another loop.


    Now I can get back to trying to figure out why my auto backup function is not working as it should and not sending me notification emails, which is what caused me to discover the loop. :)

    Thanks, but 'nut' wasn't something I remembered, and on checking, it is not installed.


    If I could find a way to see what the email being sent actually is, it should give me an idea of what is trying to send it. In any case, what could possibly be trying to send a message every 7 seconds? Nothing should be creating that much traffic.
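
    On seeing what the email actually is: postfix can print the full content of any queued message, which should reveal the sender. The queue ID below is a placeholder taken from the queue listing:

```
mailq                 # (or postqueue -p) lists queued messages with their IDs
postcat -q QUEUEID    # dump the headers and body of one queued message
```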

    I've used postqueue -p to show all the messages in the queue, then postsuper -d all to clear the queue, then postqueue -p again hoping to see an empty queue, but the messages are all still there.


    I rebooted the OMV server and again, something is trying to send an email every 7 seconds. WTF is going on?
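
    One detail that may explain the queue not clearing: as a safety measure, postsuper only accepts the word ALL in upper case when deleting every message; anything else is treated as an ordinary queue ID. So the sequence should be:

```
postqueue -p        # list the queue
postsuper -d ALL    # upper-case ALL is required to delete every message
postqueue -p        # should now report that the mail queue is empty
```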

    OMV has been running unattended for some time without apparent issue, but today I logged in to check a few things, ran the expected updates, and all seems good. However, mail.log has been, and still is, going crazy: something is trying to send an email to admin@myserver.home, which is bounced (that address doesn't work), and then repeats the attempt 7 seconds later. And 7 seconds after that, and so on. It is being sent From admin@omvserve.home, which would be correct, but what on earth is it trying to send to my local email server?


    OMV Notifications are configured to email admin@home and this works; in fact mergerfs's weekly report is correctly delivered to admin@home, so it looks like the OMV configuration is basically ok. However, something is trying to send an email To admin@myserver.home, and I cannot find that address anywhere in the configuration, yet it is From admin@omvserve.home, which implies it emanates from OMV.


    What in OMV tries to send an email every 7 seconds?

    No, NFS, but that's not relevant to the problem. The filesystem contains media files, added to by Plex and also by a user specifically for that purpose. To keep it all running smoothly, everything needs to be group writable, and although I can manually adjust existing files, anything that gets added has the wrong perms and I have to adjust it manually. That's ok when it's me adding the content, but when Plex is recording I don't know when it's taking place, and trying to ensure correct perms gets harder and harder. So I just want to ensure anything that gets created is created with the required perms. That would make my life a lot simpler.


    However, as I mentioned, these are XFS filesystems, and as far as I can determine XFS does not support a umask mount option, so I'll have to do it per user instead. Not absolutely ideal, but acceptable and, more importantly, doable.


    One is actually a mergerfs merge, and I added a umask option to that mount as it was supposed to work (according to trapexit), but it got it all wrong: it applied the same perms to both files and folders, including all existing ones, and then would not allow any changes. So I've reversed that.


    As I said, setting umask for the appropriate users will do.
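
    Since XFS has no umask mount option, the usual combination is a per-user umask plus the setgid bit on the shared directories: umask 002 makes newly created files group-writable, and setgid makes them inherit the directory's group rather than the creating user's primary group. A small sketch under a throwaway path:

```shell
# Shared media directory (throwaway path for the demo)
mkdir -p /tmp/media-demo
chmod 2775 /tmp/media-demo   # leading 2 = setgid: new files inherit the dir's group

# With umask 002, a newly created file comes out group-writable:
# 666 & ~002 = 664 (rw-rw-r--)
(
  umask 002
  touch /tmp/media-demo/recording.ts
)
stat -c '%a' /tmp/media-demo/recording.ts   # should show 664
```

    For a daemon like Plex, the umask has to be set where the service starts (e.g. a systemd override with UMask=0002) rather than in a login shell, since the daemon never reads a user's shell profile.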

    Ok thanks. I thought modifying the config files was no good as the changes would be lost.


    I'll give that a try. Would be great to have it in the GUI though. :)

    Sadly it's not going to work, as I now realise that XFS does not allow a umask mount option and the volumes on which I want to set umask are, of course, XFS. :(