Posts by bmhj2

    We need to keep running the command on subdirectories to find what is causing it. du -d1 -h /var/lib/

    If you have plex installed, that is probably what is filling it.

    I've got this far with reading over the contents of this thread, but I've got another culprit than Plex filling my system drive.

    The problems may have started after one of the following two system updates, though I'm not sure:

    Here is the output of the 'du' command.

    root@hypervault:/var/log# du -d1 /var/log/pcp/ -h
    12K     /var/log/pcp/pmproxy
    2.4M    /var/log/pcp/pmie
    25M     /var/log/pcp/sa
    52K     /var/log/pcp/pmcd
    2.5G    /var/log/pcp/pmlogger
    2.5G    /var/log/pcp/

    I can't find info elsewhere on the forums about PCP or pmlogger, so I'm not sure whether or how to fix or remove it.

    Any advice?
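    In case it helps anyone else who lands here with a full /var/log/pcp: a hedged sketch of the usual cleanup, assuming the standard Debian "pcp" package and its systemd units. The commands are printed rather than executed, so they can be reviewed before running anything.

    ```shell
    # Hedged sketch: print (don't run) the likely cleanup steps for a
    # runaway pmlogger, so they can be reviewed first. Assumes the
    # standard Debian "pcp" package and its systemd units.
    pcp_cleanup_plan() {
      for unit in pmlogger pmie; do
        echo "systemctl disable --now ${unit}"
      done
      echo "apt-get purge pcp      # only if PCP itself isn't needed"
      echo "rm -rf /var/log/pcp    # reclaim the space afterwards"
    }

    pcp_cleanup_plan
    ```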

    Pleased to say I fixed it, with some help from here, in case anyone else runs into the same problems as me. Man, that was confusing.

    It seems that whatever OMV was trying to call in from the internet didn't like the OpenDNS resolvers set in my router, but was happy to use the DNS resolvers from my ISP once I found them and replaced them in the router admin. Why that would work I have absolutely no idea, but the important thing is that it did.

    Thanks for clarifying re: other plugins.

    I did reinstall with that command, though my machine is an AMD x64.

    I also used the rm /var/... command as per previous instructions.

    As for the line in the output, that has got me stumped as well!

    I'm stuck for now but happy to see any other tips.

    You might want to look back from about page 2; there are a number of steps to follow before reinstalling omv-extras.

    Thanks - much appreciated but that hasn't solved it. I followed all the steps mentioned a second time, and managed to get omv-extras purged, but still couldn't re-enable the Docker repos. Same error codes as before.

    What I'm not sure about would be whether this would be affected by other active omv-extras plugins which, unlike the Docker GUI, were not removed. So would this fix only work if I also removed all the others (SnapRAID, UnionFS, etc.) before purging and then reinstalling OMV Extras?

    Thanks - tried that and no change - still the same error code when I try to enable the Docker CE repo, as follows:

    Thanks, @Nefertiti, I've tried that but it didn't fix the problem for me. So I'm still stuck with trying to enable the Docker CE repo, which still returns the error code quoted above.

    Is anyone able to help me work this out and get the repo working so I can reinstall Docker and get everything working again?


    Hi all, I've been having the same problem as timbatao, have managed to purge docker-ce and the openmediavault-docker-gui, so I've got as far as trying to follow instructions from @ryecoaaron here.

    However, at that point I can't enable the Docker CE repo, having reinstalled OMV-extras. What I get is an error with the following output:

    I haven't been able to figure out why this might be from elsewhere in the forums; the error code is not the same as the one quoted here, so far as I can tell.

    Can anyone help me get the repo enabled? I should then be able to do the rest myself.

    I've read in the script that it does a diff and then a sync. I have also enabled the option to run a scrub after a sync (but I have no idea what the panel option "Scrub frequency", set to 7 days, does).

    The diff checks whether there have been changes, and then the sync updates the parity based on those. By default, the sync does not go ahead if you have more than X files deleted (by default, X = 50), which is a safeguard against losing the parity for files that were deleted in error. In that situation, you have to run the sync manually from the CLI or the GUI.
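    As an illustration of that safeguard (the threshold value is the default as I understand it; the function below is a sketch, not the plugin's actual script):

    ```shell
    # Illustrative sketch of the deleted-files safeguard, not the
    # plugin's real code: skip the automatic sync if too many files
    # were deleted, since that may indicate an accidental deletion.
    DELETE_THRESHOLD=50

    should_sync() {
      deleted="$1"  # number of deleted files reported by "snapraid diff"
      if [ "$deleted" -gt "$DELETE_THRESHOLD" ]; then
        echo "abort"  # require a manual "snapraid sync" instead
      else
        echo "sync"
      fi
    }

    should_sync 3    # prints "sync"
    should_sync 120  # prints "abort"
    ```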

    Scrubbing checks for errors by comparing the parity against the hashes computed when the 'sync' command was run, thus protecting you against (for example) file system rot. By default it checks a small percentage each time and skips blocks that have been checked recently, to avoid duplicating effort. You can change the parameters for this; according to the snapraid manual (sections 4.1 and 5.7), '7 days' means that it runs a scrub once a week, checking ~8% of the array each time and skipping anything checked in the last ten days. With these default settings, all files will be checked about every three months. I can't advise on whether these are good settings, but I've been happy to assume that they are.
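    For reference, this is my reading of how that schedule maps onto the snapraid CLI. The -p/--plan and -o/--older-than options are real snapraid options; the exact values the plugin passes are my assumption from the manual's defaults.

    ```shell
    # Hedged sketch: the scrub invocation the weekly "7 days" schedule
    # appears to run. -p/--plan and -o/--older-than are real snapraid
    # options; the exact values used by the plugin are my assumption.
    PLAN_PERCENT=8      # check ~8% of the array per run
    OLDER_THAN_DAYS=10  # skip blocks scrubbed within the last 10 days
    echo "snapraid scrub -p ${PLAN_PERCENT} -o ${OLDER_THAN_DAYS}"
    ```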

    With the snapraid-diff scheduled job, will I be covered?


    And it does it once a day. At what time exactly?

    Mine runs at midnight.

    Is making a sync once a day really necessary if you do not change your files so often?

    No, but the purpose of the diff is to detect and sync only those files that have changed, are new, or have been deleted. If the diff detects no changes then no further sync is required, so it does no harm to run the script daily.

    I'm getting the following error message as part of the apt-get update output:

    Hit:26 stretch Release
    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7fb83ea81510>
    Traceback (most recent call last):
      File "/usr/lib/python3.5/", line 117, in remove
    TypeError: 'NoneType' object is not callable
    Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7fb83ea81510>
    Traceback (most recent call last):
      File "/usr/lib/python3.5/", line 117, in remove
    TypeError: 'NoneType' object is not callable
    Reading package lists... Done

    I'm not sure whether it's connected, but there is also the following message when I enter apt-get upgrade:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    The following packages have been kept back:
    0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

    I've posted in this thread because it's the closest one I can find to the issue I've had. However, I'm not sure whether to follow the steps suggested here because my system is an amd64, not a raspberry pi.

    Can anyone help me to understand what's going wrong here?

    Thanks to others for further discussion here.

    I think @bmhj2 has already manually edited the config file and had the same results/error. I read the instructions from the SnapRAID guide that you referenced, but wasn’t sure about editing the config file manually, so I searched the forum and came across this thread. My real problem is that I can’t get the first sync to finish. I’ve tried just about everything I can find or think of and still get the “Warning! The array is not fully synced. You have a sync in progress at 99%” when checking Snapraid status.

    Yes, that's right: I have tried editing the config file via CLI and via the plugin, with no success using either method.

    It seems as if @kmal808 has been encountering the same error, caused by files which changed somehow without that being picked up by the parity. In my case these errors have only related to two drives:

    • The SSD shown in my earlier post, which I wanted to remove because it contains Plex databases and Docker config data, and I realised (too late) that Snapraid is not the best way to back those up.
    • An HDD not mentioned before, which has developed problems since I last posted. This is almost certainly because I used Beets to re-tag the metadata on a large music collection, so many of the files now show errors in Snapraid. Presumably the snafus arose because that Beets import took place while I was also messing around with the "snapraid sync -F" command, trying to remove the SSD from the array. (I have also, as indicated in my original post, since added a second HDD parity drive.)

    Stating the problem very simply, having first misunderstood how Snapraid works and created a setup which was likely to have problems, I have then compounded my error by making lots of file changes in quick succession so that I am left with parity files which are incorrect. As a consequence I can't run a successful sync, because Snapraid is returning false positives - suggesting there are data errors where I am confident there aren't.

    For (1) I have managed to stop the error messages by adding exclusions for the Docker and Plex folders on the SSD as described by @kmal808 above. If I understand things correctly, this has removed files on the SSD from the snapraid.parity and the snapraid.content files on a subsequent sync, and this seems to have acted as the bandaid that @kmal808 describes above. The outstanding question is still this: how to remove the drive from the array completely?

    For (2) I have not yet done anything, but I am wondering whether the following will work:

    • add temporary exclusions for the HDD to the config
    • run a sync (on the assumption that this will then remove the faulty parity data)
    • remove the exclusions
    • run another sync (on the assumption that this will force a recalculation of the parity)
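    As a sketch, the first step above would just mean adding temporary exclude lines to snapraid.conf. The exclude syntax is SnapRAID's own; the paths here are hypothetical examples, not my real layout.

    ```
    # snapraid.conf fragment: temporary exclusions while clearing the
    # bad parity (paths are hypothetical examples, not my real layout)
    exclude /music/
    exclude /lost+found/
    ```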


    It does strike me that another, less fiddly solution would be (as @kmal808 seems to suggest, and @gderf advises against) to just nuke the array and start from scratch. @kmal808 suggests uninstalling the plugin, but doesn't know how this would affect the snapraid.parity and snapraid.content files. I assume it would be possible to exercise this option by simply deleting all the parity and content files and running a new sync, but I have already proven myself to be dumber than I thought here, so would prefer to ask first. I'd be willing to take the risk with the data, since if necessary I can restore from a backup - but I suppose I would ask the following questions:

    • Can I remove the SSD from the array and force a complete rebuild of the HDD parity data without exercising my nuclear option?
    • How complicated will this be?
    • If I exercise the nuclear option, what is the simplest means of doing so?

    I should finally add: forgive me, fellow forum users, for I have sinned! Hope you don't mind helping me dig myself out of this hole.

    Did that process work for you? I'm getting the same error, and I followed the process listed here. I deleted the drive from the OMV GUI, went to the terminal, ran the command snapraid -E sync, and got this error:

    Disk 'd2' with uuid 'abee6d94-71ad-42ec-ab28-bc921527b6a8' not present in the configuration file!
    If you have removed it from the configuration file, please restore it

    How did you resolve your error? Thanks.

    Hello kmal. No, that didn't do the job for me, I got the same error as you quote here (different disk ID, obvs) and as a result I have not yet been able to remove the SSD from my Snapraid array. I haven't followed up or tried anything else yet, as my priorities were elsewhere, but I'm glad you reminded me: if @jollyrogr or anyone else can point me in the right direction here I would be grateful.


    Thanks. From my reading of the manual, -e only applies to the 'fix' command.

    Should I use -E (upper case), or am I misunderstanding the manual?

    Sorry - just want to be clear before setting this running.


    OK, thank you @jollyrogr.

    So the correct procedure would be:

    • Delete the SSD from the drive list within the plugin
    • Then go to a terminal window and execute the command:
    snapraid sync -F

    Do I have that right? Or would it be the same command with the modifiers -FV?


    I have the following array set up in SnapRAID:

    1 x disk of 2TB (parity)
    5 x HDDs of varied sizes (data)
    1 x SSD (data)

    I created this before I'd really got to grips with SnapRAID and it needs a bit of tweaking now I've had some more time to think things through. I want to remove the SSD from the array, because 1) the files on it change too often for SnapRAID to be useful and 2) they are covered by another form of backup. I also want to add a new 2TB parity drive.

    I'm having difficulty getting SnapRAID to accept me removing the drive from the array. If I remove it via the plugin (see pic 1 - the SSD is the last drive in the list) and then try to run the sync command, I get an error message stating that the drive has been removed from the config, and asking me to restore it (see pic 2). If I try to remove it by manually editing the snapraid.conf file via SSH and then forcing a sync, as per the instructions here, then I succeed in altering the snapraid.conf file as shown in my SSH window (pic 3), but it does not update the config as shown in the plugin (pic 4), and therefore I'm stuck with the same problem.

    What am I doing wrong here? What's the best way to permanently remove the SSD from the array? (I'll leave adding a new parity disk, as I think I know how to do that!)


    I've had an error message identical to the one in this thread while attempting to use Clonezilla to back up my system disk - only with a different 'group number' quoted compared to the OP's in this pic. I've posted this message on that thread - apologies for the cross-post - but I haven't heard from anyone, and I was wondering if anyone in this topic has any ideas as to what the problem might be.

    Though the system itself still boots from the SSD, and the SSD appears to be working fine, I'm not getting any joy with backing up the system. This is OK for the time being but I'd prefer to be able to do so.

    The Clonezilla output is as follows:

    Since the last successful Clonezilla backup I used Gparted to resize the two partitions on the SSD - this being the only thing I can think of which might have caused the problem. I'm not sure if it is recoverable.

    I have tried the OP's solution on that original thread, which worked for the OP, but it didn't work for me. Based on cabrio's advice there I have also tried to run clonezilla multiple times, both over SSH and directly with keyboard and monitor plugged into the box. Same error. Also the same error when running clonezilla from a live CD as when running it using the version installed by OMV-extras. This makes me think that the problem is something to do with my SSD, and possibly a result of the partition resize using GParted?

    I've booted into SystemRescueCd and used fsck and e2fsck to check the SSD. They report a bad superblock, but at this point I lost confidence in my ability to Google my way to an answer - the pages I could find went far beyond my level of understanding, and I don't like to mess with my system drive when I don't have a recent backup to fall back on.

    Currently, then, I have a server which works OK but which I cannot back up. Any ideas how I might fix this? It's beyond the point where I feel comfortable fixing it myself via Google...

    Thanks for any help anyone can offer!