SnapRAID sync took a long time and now phantom storage is filling up my disks

  • I'm still working through this, but my SnapRAID scheduled task didn't run as expected after upgrading to OMV7. I ran the script manually, and it took a long time, almost as if it were creating the entire array again; apparently, it did. Looking at other forum posts, it seems we now need to set a default array, which I hadn’t done. That’s fine; I can rerun the sync after selecting it. However, my issue is that the data disks now seem way more filled than they were before the sync. They used to be around 75% capacity but are now over 90%, and the only thing I’ve done is run the sync. My mergerfs pool had almost 8TB free, and now it’s down to half that. I bring that up because it's the number I usually pay attention to, along with the green bars under Storage in the OMV dashboard (which are now red for most drives). I haven't been able to figure out what's taking the extra storage or what could've caused it. How can I go about finding that out? Right now I'm running ncdu on the disks as root, but I haven't found anything yet.

    • Official Post

    Looking at other forum posts, it seems we now need to set a default array, which I hadn’t done

    It is optional and only needed if you use old scripts that don't specify the config file location.
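    For reference, this is roughly what specifying the config file location in a script call looks like; the path below is a placeholder, so check where the SnapRAID plugin on your system writes your array's config:

    Code
    # placeholder config path - substitute the file the SnapRAID plugin generated for your array
    sudo snapraid -c /etc/snapraid/your-array.conf status
    # "diff" previews what a sync would add/update/remove without touching parity
    sudo snapraid -c /etc/snapraid/your-array.conf diff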


    However, my issue is that the data disks now seem way more filled than they were before the sync. They used to be around 75% capacity but are now over 90%, and the only thing I’ve done is run the sync

    Other than the content file, snapraid does not put anything on the data disks. Did you look to see how big the content files are?


    It's the file used by SnapRAID to save the list of all files present in your array, with all the checksums, timestamps and any other information needed.

    These files will be a few GiB, depending on how big your array is. Approximately, for 10 TB of data you'll need about 500 MiB of content file.
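    To check, the content file names and locations come from the "content" lines in your snapraid config; assuming the common snapraid.content name at the root of each data disk (adjust the paths to your own mount points), something like:

    Code
    # show where the content files are configured (config path is a placeholder)
    grep ^content /etc/snapraid/your-array.conf
    # then check their sizes
    sudo du -h /srv/dev-disk-by-uuid-*/snapraid.content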

    omv 7.6.0-1 sandworm | 64 bit | 6.11 proxmox kernel

    plugins :: omvextrasorg 7.0.1 | kvm 7.0.16 | compose 7.3.3 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.9


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Other than the content file, snapraid does not put anything on the data disks. Did you look to see how big the content files are?

    I know. Maybe it's just OMV misreporting the available space? ncdu showed the usage I expected on each disk, and they all look less full than what OMV is reporting, yet the summary at the bottom of the ncdu screen still shows them as completely used up.



    Disks are all 12 TB ext4 (LUKS-encrypted); `sudo ncdu` was run at the root of each (/srv/uuid/ etc.). Something's off :/

    • Official Post

    maybe it's just OMV misreporting the available space?

    I don't know why it would. OMV uses standard commands (df). Lots of people get confused by TB vs TiB reporting.

    ncdu showed the usage I expected on each disk, and they all look less full than what OMV is reporting, yet the summary at the bottom of the ncdu screen still shows them as completely used up.

    I've never used ncdu. I wouldn't even use du for reporting total usage on a disk because it can miss too many things; it is good for getting the size of a directory. df -h has never let me down.
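    For what it's worth, a quick way to see the TB vs TiB difference and to compare df with du on a given data disk (the mount path below is a placeholder):

    Code
    # placeholder mount point - substitute your actual /srv/... path
    df -h /srv/dev-disk-by-uuid-XXXX      # powers of 1024 (GiB/TiB)
    df -H /srv/dev-disk-by-uuid-XXXX      # powers of 1000 (GB/TB), closer to drive labels
    sudo du -xsh /srv/dev-disk-by-uuid-XXXX   # -x stays on this filesystem, -s summarizes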


  • I've never used ncdu. I wouldn't even use du for reporting total usage on a disk because it can miss too many things; it is good for getting the size of a directory. df -h has never let me down.

    Indeed, and that's exactly what's happening. df is reporting more, but I can't account for what's taking that extra space (and why right after upgrading to OMV7). ncdu lets me get the size of the files that I actually expect to be taking up space, but there are "shadow" files taking up phantom space now. I'm trying to find out why and, more importantly, where those are so that I can remove them.

    • Official Post

    ncdu lets me get the size of the files that I actually expect to be taking up space, but there are "shadow" files taking up phantom space now. I'm trying to find out why and, more importantly, where those are so that I can remove them.

    du usually misses dot directories and files in the root of the filesystem.
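    A few ways to hunt for that kind of hidden usage (the mount path is a placeholder; none of this is OMV-specific):

    Code
    # list everything at the filesystem root, including dot files and dot directories
    ls -la /srv/dev-disk-by-uuid-XXXX
    # per-entry usage including hidden entries, staying on this filesystem, largest last
    sudo du -xsh /srv/dev-disk-by-uuid-XXXX/{*,.[!.]*} 2>/dev/null | sort -h
    # deleted-but-still-open files hold space that df sees but du/ncdu never will
    sudo lsof +L1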


  • I think I found the issue. A volume pointing to the root of a mergerfs pool in a Deluge Docker Compose definition ended up being backed up by OMV in a kind of infinite loop (the container backups themselves live on that pool, so backing up the pool pulls in its own backups). It doesn't have the #SKIP_BACKUP tag at the end, but the pool is terabytes large, so it shouldn't have been backed up anyway. OMV6 never backed it up, but OMV7 tried to do it regardless; something must have changed in the way OMV handles backups. Other containers pointing to folders underneath (like ${DATA}/audio) did not get backed up despite also not having the #SKIP_BACKUP tag at the end of their volume definitions. Deleting the backup for Deluge freed up the space I was missing :)


    Code
    ---
    services:
      deluge:
        image: lscr.io/linuxserver/deluge:2.1.1
        container_name: deluge
    ...
        volumes:
          - /mnt/docker/apps/deluge/config:/config
          - ${DATA}:/downloads 
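    For anyone hitting the same thing, this is roughly what excluding that volume from the compose plugin's backup looks like; the tag goes at the end of the volume line (a sketch only, so check the plugin's docs for the exact expected form):

    Code
        volumes:
          - /mnt/docker/apps/deluge/config:/config
          - ${DATA}:/downloads #SKIP_BACKUP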
    • Official Post

    It doesn't have the #SKIP_BACKUP tag at the end, but the pool is terabytes large, so it shouldn't have been backed up anyway.

    The plugin's backup uses du to calculate the space.


    OMV6 never backed it up

    Something else changed then because they do the same check.
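    For context, the check is along these lines (illustrative only, not the omv-compose-backup script itself; the path and the 1 GB figure are placeholders):

    Code
    MAX_GB=1                                        # hypothetical configured max size
    LIMIT_BYTES=$(( MAX_GB * 1024 * 1024 * 1024 ))
    SIZE_BYTES=$(du -sb /mnt/docker/apps/deluge | cut -f1)
    if [ "$SIZE_BYTES" -gt "$LIMIT_BYTES" ]; then
        echo "skipping backup: data is larger than the configured max"
    fi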


    • Official Post

    Do you have the logs from the backup run where it backed up too much?


    OMV 6.x code used kb instead of bytes, but they do the same check. I use the backup myself and have a couple of containers that are skipped because of their directory size.



    omv 6.x version

    convert database setting of GB to kb - https://github.com/OpenMediaVa…in/omv-compose-backup#L95

    du check using kb - https://github.com/OpenMediaVa…n/omv-compose-backup#L176


    omv 7.x version

    convert database setting of GB to bytes - https://github.com/OpenMediaVa…n/omv-compose-backup#L112

    du check using bytes - https://github.com/OpenMediaVa…n/omv-compose-backup#L221
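    In other words, roughly (illustrative, not the script's exact lines):

    Code
    # omv 6.x: configured GB converted to kB and compared against a du result in kB
    LIMIT_KB=$(( MAX_GB * 1024 * 1024 ))
    # omv 7.x: configured GB converted to bytes and compared against a du result in bytes
    LIMIT_BYTES=$(( MAX_GB * 1024 * 1024 * 1024 ))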


  • I just went to check the docker backup settings because something similar happened to my syncthing container (I might not have noticed before because Deluge's was TBs huge, while this one is just under a TB). It seems that OMV set the max size setting to 0 during the upgrade process; it might have to do with that change you made. Anyway, that explains why it was happening.

    • Official Post

    It seems that OMV set the max size setting to 0 during the upgrade process; it might have to do with that change you made

    Nope. Default has always been and still is 1 GB. The upgrade would not change it. The change I made only changed the unit the script used. The database and gui stayed exactly the same.


    omv 6

    openmediavault-compose/usr/share/openmediavault/datamodels/conf.service.compose.json at 6.x · OpenMediaVault-Plugin-Developers/openmediavault-compose

    openmediavault-compose/usr/share/openmediavault/workbench/component.d/omv-services-compose-settings-form-page.yaml at 6.x · OpenMediaVault-Plugin-Developers/openmediavault-compose


    omv 7

    openmediavault-compose/usr/share/openmediavault/datamodels/conf.service.compose.json at main · OpenMediaVault-Plugin-Developers/openmediavault-compose

    openmediavault-compose/usr/share/openmediavault/workbench/component.d/omv-services-compose-settings-form-page.yaml at 6.x · OpenMediaVault-Plugin-Developers/openmediavault-compose


  • Nope. Default has always been and still is 1 GB. The upgrade would not change it. The change I made only changed the unit the script used. The database and gui stayed exactly the same.

    I don't know what to tell you other than I did not change it myself. I haven't touched those settings in a while, and of all the settings on the Docker page, that one I do understand. However, I did play with restoring a backup from fsa before upgrading to OMV7; I guess it's possible that the fsa restoration process messed that up.

    • Official Post

    I guess it's possible that the fsa restoration process messed that up.

    fsarchiver would never change one value in an xml file.

    I don't know what to tell you other than I did not change it myself.

    I make damn sure that plugins don't change values that a user sets. There is no difference to the plugin when updating from 6 to 7 or just a minor update during the 6 or 7 cycle. If someone can find a bug in the code that causes this, I will fix it. Otherwise, I got nothing.

