Where is "Disks" information stored and how to modify manually from shell when the web admin will not allow deletion?

  • I'm trying to do some system maintenance before upgrading from OMV 5 to 6, and I'm seeing a problem where the web admin lists a storage device that's no longer present:

    [screenshot: Storage → File Systems showing a zfs entry with nothing under Device(s)]


    When I select it, no options are available: the Mount, Unmount, and Delete buttons are all disabled.


    How would I go about removing this manually from the shell?


    In addition, I've tried twice to add a USB external drive. It is detected and I can seemingly format it to XFS, but it simply never shows up in this File Systems list. So I'm thinking maybe I need to check things out manually to see if there are some problems going on there, and whether fixing the ZFS entry that shows up when it's not supposed to might also fix the other issue.
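
    For reference, here's the sort of quick check I can run from the shell to confirm the kernel actually sees an XFS filesystem on the USB drive (/dev/sdX below is just a placeholder, not the real device name):

    Code
    # substitute the real device for /dev/sdX - these commands only report
    # what the kernel and blkid see, they don't change anything
    lsblk -f /dev/sdX
    blkid /dev/sdX1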

  • geaves

    Approved the thread.
    • Official Post

    When I select it, no options are available: the Mount, Unmount, and Delete buttons are all disabled.

    That's because it's referenced -> SMB or NFS -> Shared Folders

  • Thanks geaves, I had the same thought but couldn't find anything there that lines up with this. There are shares, but none of them are missing, and none would correspond to this entry, as they're all mapped to an existing share. Is there a way to trace this more directly to find out where the problem may be?

    • Official Post

    Is there a way to trace this more directly to find out where the problem may be?

    :/ try installing the resetperms plugin; it will appear under Services. One of its functions is to display Shared Folders in Use


    But going back to the image: you want to delete the FileSystem -> zfs line. You can't, unless you locate where it is referenced, i.e. in use; that implies an SMB or NFS share and a shared folder


    When you want to remove/delete a file system using the GUI you have to do it in reverse: remove the SMB/NFS share first, then the shared folder, then unmount and delete the file system

  • Hmmm, this just isn't making any sense. The only ZFS volumes I have on the system are the 6 drives formatted with ZFS that are part of the mergerfs filesystem named "tank", so "Shared Folder In Use", which you asked me to try, geaves, shows:

    [screenshot: resetperms plugin, "Shared Folder In Use" listing]


    And all shares are on "tank" and nothing else.


    It seems I need another way to track down this filesystem that's showing as "missing", since as far as I can tell nothing should be missing.
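
    For what it's worth, this is the sort of thing I can run from the shell to see what ZFS actually exists on the box, in case something stale is hanging around (standard zfs/zpool reporting commands, nothing destructive):

    Code
    zpool list      # pools currently imported
    zfs list        # datasets and their mountpoints
    zpool import    # with no arguments this only scans and lists pools that are NOT imported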

    • Official Post

    The only ZFS volumes I have on the system are the 6 drives formatted with ZFS that are part of the mergerfs filesystem named "tank"

    ?( that comment alone is confusing; zfs is just that. Using the zfs plugin you create a pool, which in turn creates the file system on that pool


    mergerfs is a different plugin altogether

  • In this case I used ZFS as the base filesystem. I did not create a singular RAIDZ pool, but instead 6 individual ZFS drives/volumes, which I then incorporated into a MergerFS + SnapRAID arrangement instead of using EXT4 or XFS, etc. I did this because, as noted elsewhere, ZFS offers some additional advantages, such as compression.

    • Official Post

    I did not create a singular RAIDZ pool, but instead 6 individual ZFS drives/volumes

    ?( you have to create a pool, be it basic, mirror (even with one drive), or otherwise, to create the volume, because that is what I believe is missing in your first image under Device(s)

  • Each drive is/has its own zpool, yes, and was created via:

    Code
    # wipe any old partition table, then create one ZFS (type BF01) partition, leaving 8 MiB free at the end
    sgdisk --zap-all /dev/disk/by-id/ata-MFG_MODEL-NUM_SERIAL
    sgdisk --new=1:1M:-8M --typecode=1:BF01 /dev/disk/by-id/ata-MFG_MODEL-NUM_SERIAL
    # one single-disk pool per bay, mounted under /mnt/tank_drives/bay_N
    zpool create -f -d -m /mnt/tank_drives/bay_1 -o ashift=12 -o feature@zstd_compress=enabled -O casesensitivity=insensitive -O atime=off -O normalization=formD bay_1-MFG_MODEL-NUM_SERIAL /dev/disk/by-id/ata-MFG_MODEL-NUM_SERIAL-part1
    # ...repeated for the remaining drives/bays
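
    A couple of sanity checks against those pools; if I have it right, enabling the zstd_compress feature flag on the pool is not the same as setting compression=zstd on the datasets, so the last command is there to confirm compression is actually in effect:

    Code
    zpool status                              # pool health / device membership
    zfs list -o name,mountpoint,used,avail    # confirms the /mnt/tank_drives/bay_N mounts
    zfs get compression,compressratio bay_1-MFG_MODEL-NUM_SERIAL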

    These are all part of the tank in Storage → Union Filesystems:

    [screenshot: Storage → Union Filesystems showing the "tank" pool]


    Then of course it seems we've circled back to the initial illustration under Storage → File Systems:

    [screenshot: Storage → File Systems, as in the first post]

    • Official Post

    Each drive is/has its own zpool, yes, and was created via:

    :thumbup:

    Then of course it seems we've circled back to the initial illustration under Storage → File Systems:

    Now at last we have the whole picture and not just part of it. Whilst none of the pools have been imported into OMV, OMV has detected a zfs file system, as this is not an option in OMV without zfs being deployed.


    Can you delete that line in the OMV GUI -> no. Can you delete that line from the shell/cli -> no, not directly. You could look in /etc/openmediavault/config.xml, search for mntent, and locate that line in OMV's database.


    Word of warning: back up the file first before editing; at least if it goes wrong you have something to restore
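
    For example, something along these lines from the cli (the backup filename is just a suggestion):

    Code
    # keep a dated copy of OMV's database before touching it
    cp -a /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak-$(date +%F)
    # then look at every mount entry OMV knows about, with some context around each
    grep -n -A 8 '<mntent>' /etc/openmediavault/config.xml | less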

  • Thank you very much, this is very interesting: being pointed to the config file at /etc/openmediavault/config.xml. It seems as if most entries (likely all) are sane there. Am I correct that I'm supposed to be looking at the <fstab>...</fstab> section here and no other areas? I am guessing that the "Storage → File Systems" corresponds to the fstab section since there are many "mntent" keys throughout the entire file...


    If I'm wrong, please point me to the correct section and I'll dig in deeper there, since I'm baffled so far; I'm not seeing a bad actor here yet.
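
    In case it helps, something like the one-liner below should list the entries in a readable form; it assumes xmlstarlet is installed (apt install xmlstarlet) and just prints the filesystem name, mount directory and type of every mntent so a stray zfs entry would stand out:

    Code
    xmlstarlet sel -t -m '//mntent' \
      -v 'concat(fsname, "  ", dir, "  ", type)' -n \
      /etc/openmediavault/config.xml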

    • Official Post

    I am guessing that the "Storage → File Systems" corresponds to the fstab section since there are many "mntent" keys throughout the entire file...

    Yes, but the fstab file itself does not show/display a zfs file system (zfs mounts its own datasets rather than going through /etc/fstab)

    If I'm wrong, please point me to the correct section

    TBH I don't think it would be anywhere else within that xml file, but you could just try searching for zfs rather than mntent


    I use zfs; if it would help I could post my own fstab and a snapshot of the mntent section from my own xml file
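
    For the search itself, something as simple as this should do, case-insensitive and with line numbers, so any zfs reference in OMV's database shows up:

    Code
    grep -in zfs /etc/openmediavault/config.xml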

  • Just a note: are you still in a position to wipe your data disks and start again?



    This time, please create a proper ZFS pool, not that "strange thing" that you created.

  • geaves, you certainly are welcome to share yours, thanks. I think the real issue here is that I'm not using any of the ZFS zpools directly since they are all mapped to the MergerFS primary tank volume, and therefore they would not and do not need to be mapped within fstab. This is how it has worked without a problem for nearly two years; I'm just not sure when this odd issue of it showing up in the web admin UI arose.


    No raulfg3, this cannot be recreated; it is too large and has been in use as a critical piece of infrastructure for a long time now. There are really no functional problems per se. The only things that need to be done are: A) fix this weird issue, which doesn't seem to be actually causing a problem but is annoying/worrisome, and B) add an additional parity drive so that SnapRAID has two parity drives instead of one to protect the data. The third thing, and what I was thinking about doing first, is upgrading the system to OMV 6 to see if that would resolve or improve any of these, and to make sure I'm running on an up-to-date system.

    • Official Post

    Is the only issue here the missing filesystem in the Filesystems tab? That would be a zfs plugin issue if the missing filesystem was a zfs object. Definitely move to OMV 6 since I'm not going to look at or possibly fix the OMV 5 plugin.

    • Official Post

    I think the real issue here is that I'm not using any of the ZFS zpools directly since they are all mapped to the MergerFS primary tank volume

    It might have something to do with OMV 'detecting' something in relation to the zfs file system, as your setup is somewhat unique, to say the least :) I see ryecoaaron has replied :thumbup:


    is upgrading the system to OMV 6 to see if that would resolve or improve any of these, and to make sure I'm running on an up-to-date system

    If you did the omv-release-upgrade, which is the norm, it could go pear-shaped (tits up, in other words) unless you've got a backup
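
    As a minimal safety net before running it, something like the following; it is not a full backup (an image of the OS drive would be better) and the paths are just the obvious candidates on a stock install:

    Code
    # snapshot OMV's database plus fstab before omv-release-upgrade
    tar czf /root/omv5-pre-upgrade-$(date +%F).tar.gz /etc/openmediavault /etc/fstab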
