After 5 to 6.8 upgrade File Systems, Software RAID and Shared Folders do not display anything in the GUI.

  • I have just upgraded from OMV5 to OMV6 using omv-release-upgrade.

    After the upgrade, all my file systems and shares appear to be there. All the disks show up in Storage: Disks, and all show SMART status in Storage: SMART. However, when I look at Storage: Software RAID, File Systems, and Shared Folders, nothing is displayed. I get an RPC error about the amount of disk space used when I try to access Shared Folders or Software RAID, but not File Systems.

    Code
    Sep  5 09:45:27 omvhome monit[2534]: 'filesystem_srv_dev-disk-by-label-parityTVPOOL' space usage 89.0% matches resource limit [space usage > 85.0%]
    Sep  5 09:45:27 omvhome monit[2534]: 'filesystem_srv_dev-disk-by-label-backup1' space usage 85.4% matches resource limit [space usage > 85.0%]

    I also tried some API commands I found in the docs.

    I have two RAIDs that should show up: the on-box OMV software RAID, md127, made of two disks, which no longer shows up at all; and the off-box Elite Pro Dual hardware RAID, an external enclosure that normally shows up as disks sdi and sdj.

    I also run a 4-disk (3 data, 1 parity) SnapRAID + UnionFS pool.

    Code
    8:64     /dev/sde    8:65     /dev/sde1    TVPOOL1x4TB
    8:80     /dev/sdf    8:81     /dev/sdf1    TVPOOL3x8TB
    8:48     /dev/sdd    8:49     /dev/sdd1    TVPOOL2x4TB
    8:112    /dev/sdh    8:113    /dev/sdh1    parity
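
    (For reference, the columns above look like lsblk output; the exact command used isn't shown, but something along these lines would produce a similar listing:)

    Code
    lsblk -o MAJ:MIN,NAME,LABEL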


    Looking around, I can confirm php-bcmath is installed.
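
    (One way to verify that, assuming the stock Debian packaging:)

    Code
    dpkg -l | grep bcmath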


    It seems like a UI issue, but I am guessing.

    Thanks,

    Ian

  • ian6

    Added the label OMV 6.x.
  • ian6

    Added the label Upgrade 5.x -> 6.x.
  • ian6

    Changed the thread title from "After 5 to 6.8 upgrade everything is working, but File Systems, Software RAID and Shared Folders do not display anything in the GUI, but all are there." to "After 5 to 6.8 upgrade everything is working, but File Systems, Software RAID and Shared Folders do not display anything in the GUI, but all are there in Services.".
  • Okay, so apparently this is a question no one can answer.

    Let me try another.

    As the GUI seems to be working okay but just missing all the storage data, is there some way to regenerate the configuration, or in some way register my storage with the OMV GUI? I'd obviously prefer to do this without disturbing the working backend functionality, and of course I do not want to lose all my data.

    I would need to:

    register my existing software RAID, originally created with OMV;

    register my SnapRAID, also created with OMV (the SnapRAID does show up correctly in Services);

    register my external backup hardware RAID, which consists of 2 drives, but only one is showing up in OMV.


    Thanks,

    Ian

  • ian6

    Changed the thread title from "After 5 to 6.8 upgrade everything is working, but File Systems, Software RAID and Shared Folders do not display anything in the GUI, but all are there in Services." to "After 5 to 6.8 upgrade File Systems, Software RAID and Shared Folders do not display anything in the GUI.".
  • I am a little underwhelmed by the help I have received, but I have continued trying to figure this out.

    I rolled back to a system backup from before the upgrade, back to OMV 5. That was fine as before, except that omv-firstaid did not work; I got a traceback. Someone had previously seen this when their disk was full, but mine was not. Anyway, moving on: after the upgrade, omv-firstaid is back.


    My current set of symptoms:

    omv-firstaid is now working after the OMV 6 update. It did not work the last time I updated.

    I did an RRD check and I get lots of errors similar to this:

    Code
    The RRD file '../rrdcached/operations-receive-update.rrd'
    contains timestamps in future.
    Do you want to delete it?

    In the GUI, several storage-related displays fail:

    Storage: Software RAID, File Systems, and Shared Folders display nothing.

    Also, in the Performance Statistics, Disk I/O and Disk Usage do not display.


    You can see the trend here. Most of the disk-related items under Storage and Performance Statistics do not display, but under Services the NFS and CIFS shares are fine.


    I also get a lot of RPC 500 errors about two of the disks that are over the usage warning threshold, but not full.


    Any ideas, please?


    Ian

    • Official Post

    Storage: Software RAID, File Systems, and Shared Folders display nothing.

    Clear your browser cache or force-reload the page (Ctrl+Shift+R).


    contains timestamps in future. Do you want to delete it?

    Check the time on your server. If correct, delete the database.
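
    (For anyone following along, a minimal CLI sketch of that check. The RRD path below is assumed from a default OMV install, so verify it on your own box before deleting anything:)

    Code
    timedatectl                         # confirm the system clock and NTP sync are sane
    systemctl stop collectd rrdcached   # stop the writers first
    rm -rf /var/lib/rrdcached/db/*      # assumed default OMV RRD location
    systemctl start rrdcached collectd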



    Anyway, moving on:

    Did you follow these steps?


  • Official Post

    I get an RPC error about the amount of disk space used when I try to access Shared Folders or Software RAID, but not File Systems.

    Regarding this, in the GUI you can customize the maximum disk usage. In Storage > File Systems, select the file system and press the Edit button. A field should appear to define the Usage Warning Threshold. At least it appears for a simple EXT4 disk; I don't know whether it appears for an mdadm RAID.


    Looking around, I can confirm php-bcmath is installed.

    Here I see a problem coming from your upgrade from OMV5, and it would be logical to think it is affecting your file systems. You have two non-OMV6 plugins installed: omvextras-unionbackend and openmediavault-unionfilesystems. You should have uninstalled these before updating to OMV6, and I don't remember whether anything else needed to be done beforehand. There is a guide on the forum that explains it, and a thread about moving plugins from OMV5 to OMV6. I would try to uninstall them and then install the plugins you need, probably mergerfs, and I don't know if something else.
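
    (To see what's installed, something like this should work; it's a generic dpkg query, nothing OMV-specific is assumed beyond the package name prefixes:)

    Code
    dpkg -l | grep -E 'openmediavault-|omvextras-'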


    As the GUI seems to be working okay but just missing all the storage data, is there some way to regenerate the configuration, or in some way register my storage with the OMV GUI? I'd obviously prefer to do this without disturbing the working backend functionality, and of course I do not want to lose all my data.

    This is probably a consequence of the above and will be solved by installing the appropriate OMV6 plugins. Almost everything you say in the following post is related too.

  • Thanks for the posts. They were tremendously helpful.

    I did the purge to get rid of the unionfs plugin, and that helped a lot. One command removed both.

    Code
    apt-get --purge remove omvextras-unionbackend

    The file systems started showing up in the GUI, but other problems remained.


    I also discovered that at this point the mergerfs plugin is required, or else I cannot see the storage in the GUI, so I installed it through the plugins menu after reinstalling omv-extras. I had deleted omv-extras to get rid of all the Docker stuff, which I no longer use, but I failed to realise how many other plugins were relying on omv-extras. Anyway, it is back now and things are better, but still not 100%.
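
    (For the record, the usual way to get omv-extras back on OMV6 was the install script from the plugin developers' repo, then the plugin via apt. The URL below is the one commonly cited at the time; double-check it against the current README before running anything piped into bash:)

    Code
    wget -O - https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install | bash
    apt-get install openmediavault-mergerfs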


    I started getting a traceback when trying to set up mergerfs:

    Code
    Failed to execute XPath query '/config/services/mergerfs/pools/pool[uuid='d5e2b9ff-1a43-4d30-be1f-5f27551507ed']'.
    
    OMV\Config\DatabaseException: Failed to execute XPath query '/config/services/mergerfs/pools/pool[uuid='d5e2b9ff-1a43-4d30-be1f-5f27551507ed']'. in /usr/share/php/openmediavault/config/database.inc:88
    Stack trace:
    ...


    I found another thread covering this, and it mentioned bad <mntent> entries in /etc/openmediavault/config.xml.

    Going through those, I saw one referencing the unionfs share, so I commented it out. The share still seems to be there and working, but it apparently makes the GUI unhappy.
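
    (A less invasive way to inspect those entries is OMV's own database tool; the data path below is the one normally used for mount entries, though I'd verify it on your system:)

    Code
    omv-confdbadm read --prettify conf.system.filesystem.mountpoint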


    I now feel I am in some in-between world where the SnapRAID + UnionFS share is gone but not gone. I do not understand what is going on. Still, in the meantime the GUI is working again, so I am happy.


    I plan to get rid of the SnapRAID + mergerfs setup and just take two of those disks and use them as a regular RAID 1. My storage needs have diminished, so an 8TB RAID 1 can replace the 16TB SnapRAID + mergerfs. I need to copy the data from the SnapRAID to the new RAID, which I hope will be simple enough. Then I can get rid of SnapRAID and mergerfs, leaving things simpler.


    With hindsight, I regret the cheaper but more complex SnapRAID and mergerfs setup. In the end it seems overly complicated for my current needs.


    Thanks again for coming to my rescue. I may have more questions as I unwind the current setup. Still it is now much better than it was.


    Thanks,

    Ian

    • Official Post

    so I commented it out

    You can't do that in an XML-format file. If you do that, it breaks.

    config.xml is the OMV database; all the system configuration information lives in that file. It is essential for OMV: if that file is broken, OMV doesn't work. If you make any manual modification, back the file up first.
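
    (Something as simple as this before each hand edit, for example:)

    Code
    cp -a /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak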

    I plan to get rid of the SnapRAID + mergerfs setup and just take two of those disks and use them as a regular RAID 1.

    I have the same opinion. SnapRAID is fine on paper but requires a thorough understanding of its operation and active maintenance. I use ZFS for parity (and I ALWAYS have a backup on top of that; don't forget that part).


    Regarding mergerfs, you will probably have to redo the pool if you want to continue using it. Search the OMV5 to OMV6 update threads. I seem to remember that something changed in the format.


    I would say that the problem here is that you did not strictly follow the instructions for the update.


    If it continues to give you problems, it may be easier to do a fresh installation of OMV6 and simply mount the existing data systems. It is the shortest path. You've been at this for too long.

  • Just to wrap this up, everything is now back and working correctly. I made several changes to the system along the way.

    I am now on 6.8.0-1 (Shaitan), and I use the Proxmox kernel so I can have ZFS.

    I ditched my SnapRAID + UnionFS and replaced them with a couple of ZFS RAIDs: a 3x4TB RAIDZ1 for the things I really care about, and a 2x8TB mirror for the things I care less about. I also have a single 6TB disk for PC backups, an external 4TB RAID1 for backing up the important stuff, and a 250GB SSD boot disk. (How many times can you fill a boot disk with runaway backups while your external backup is disconnected? So many.) I retired a couple of older 2TB disks that had started to look less SMART.
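
    (For anyone replicating this layout, the pool creation is roughly as follows. Pool and device names are placeholders; on OMV you would normally do this through the ZFS plugin, and /dev/disk/by-id paths are generally preferred over sdX names:)

    Code
    zpool create important raidz1 /dev/sda /dev/sdb /dev/sdc   # 3x4TB RAIDZ1
    zpool create lesser mirror /dev/sdd /dev/sde               # 2x8TB mirror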

    All my shares are back and everything is back to normal.


    With hindsight, I jumped into the 5.x to 6.x upgrade in a cavalier way and caused myself some grief. The disappearance of UnionFS really messed me up. If I had known to switch to mergerfs before the upgrade, it would have gone more smoothly. (And yes, I saw the instruction to check your plugins for OMV 6.0 compatibility, but seeing and doing are not the same.)

    Still, SnapRAID maintenance has always been a bit of a mystery to me, so I think I will be happier with my ZFS RAID.

    Also, rsync is your friend for moving files here and there and just getting it done.
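
    (The kind of invocation I mean; the paths are placeholders for the old and new mount points:)

    Code
    rsync -aHAX --info=progress2 /srv/old-pool/ /srv/new-pool/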


    Thanks to those who helped me, and for the other existing forum posts that guided me along the way.

    Ian

  • ian6

    Added the label solved.
    • Official Post

    I'm glad you solved it.

    One issue caught my attention:

    a 3x4TB RAIDZ1 for the things I really care about, and a 2x8TB mirror for the things I care less about

    Both pools have the same usable capacity in this case, 8TB. However, the RAIDZ1 has three disks that could fail, while the mirror has only two. If a second disk fails, the result is the same in both cases: data loss.

    So the logical thing might be to reverse the usage: put the data that matters most to you on the mirror, and the data that matters least on the RAIDZ1.

  • You bring up an interesting point. Your comment led me to examine the ZFS RAID levels more carefully. With hindsight, I think I might be better off with a 3-disk ZFS mirror for my most valuable files; I do not need the extra space from the Z1 RAID. I would still be doubling my space going from a 2TB mirror to a 4TB triple mirror, and I like the added redundancy. Of course, I would have to rebuild the 4TB RAID, so I think I will take a day or two to consider this.
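
    (A three-way mirror is a one-liner in ZFS terms; again, pool and device names are placeholders:)

    Code
    zpool create important mirror /dev/sda /dev/sdb /dev/sdc   # 3-way mirror, 4TB usable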


    Thanks for the feedback,

    Ian
