Posts by mneese

    Oops, I think this is my OS disk... 240 GB

    df -h /

    Filesystem Size Used Avail Use% Mounted on

    /dev/sda1 213G 212G 0 100% /

    What should I do about this?

    Shared folders are intact and can be seen from other drives.

    I am using OMV4, SnapRAID, and UnionFS with ext4...

    I've been having trouble logging in: the web UI doesn't reject my password with an error, it just loops back to the login screen. I logged in over SSH as root and tried omv-firstaid to change the password. This failed, saying there was no disk space. The OS disk is a 120 GB SSD, so it doesn't seem like it should be full.

    From omv-firstaid I ran apt clean; now login gives me this:

    Error #0:

    OMV\Exception: Failed to read file '/var/cache/openmediavault/cache.omv\controlpanel\login_js.json' (size=0). in /usr/share/php/openmediavault/json/

    I can see files from other computers. I have rebooted several times from the SSH session, but the error does not change...

    Any ideas?
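Not from the thread, but a few read-only checks that usually locate what filled a root filesystem on a Debian/OMV box. The paths and reclaim commands below are standard defaults, offered as suggestions rather than the poster's actual situation:

```shell
#!/bin/sh
# Sketch: find what is consuming space on the root filesystem.

# Confirm the root filesystem really is out of space (bytes, not inodes):
df -h /

# Largest first-level directories on /. The -x flag keeps du on this
# filesystem so it does not descend into data disks mounted under /srv:
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 10

# Typical reclaim steps once the culprit is known (run as root):
#   apt-get clean                   # drop the downloaded package cache
#   journalctl --vacuum-size=100M   # trim the systemd journal
```

Once space is freed, the zero-byte files under /var/cache/openmediavault (like the login_js.json named in the error) are commonly reported safe to delete; OMV regenerates them on the next page load.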

    root@nwvault:~# df -i

    Filesystem Inodes IUsed IFree IUse% Mounted on

    udev 2046509 497 2046012 1% /dev

    tmpfs 2051518 777 2050741 1% /run

    /dev/sda1 14221312 133126 14088186 1% /

    tmpfs 2051518 1 2051517 1% /dev/shm

    tmpfs 2051518 3 2051515 1% /run/lock

    tmpfs 2051518 16 2051502 1% /sys/fs/cgroup

    tmpfs 2051518 11 2051507 1% /tmp

    label-Archives_Four:label-Archives_One:id-ata-TOSHIBA_HDWD130_X7V0PPGAS:label-Archive_Five 732594176 345206 732248970 1% /srv/0808f8e1-f56e-4184-b872-19fd729aec5d

    /dev/sdc1 183148544 14 183148530 1% /srv/dev-disk-by-label-Archives_Two

    /dev/sdf 183148544 14 183148530 1% /srv/dev-disk-by-label-Archives_Three

    /dev/sdd1 183148544 31798 183116746 1% /sharedfolders/snapfour

    /dev/sde 183148544 5680 183142864 1% /sharedfolders/snaptwo

    /dev/sdb1 183148544 271065 182877479 1% /sharedfolders/snapone

    /dev/sdg 183148544 36663 183111881 1% /srv/dev-disk-by-label-Archive_Five

    Does this mean anything?

    I followed "crashtest's" instructions and they were right on... things seem to be in order, files are accessible on my network, Midnight Commander and Cloud Commander... All is well. Thanks everyone for your very quick and "right on" support:

    Once the UnionFS drive (mount point) is created, go into edit mode on one of your old shared folders, and hit the drop down arrow in the Device line. Select the UnionFS mount point and you should be good to go.

    Your top level SMB network shares will simply follow your redirected shared folders. Test it before moving on to the next one.

    Thanks for the reply.
    I have named the UnionFS shares the same names as the originals, hoping they would populate with the same files. Not so... can you give me some config tips to make this happen without moving all those files? I am a native Win 10 user, so be nice...

    I have a question regarding UnionFS and SnapRAID. I had an existing SnapRAID array on my OMV box, with each drive in the array a shared folder. I then decided to work with UnionFS and have one volume to read and write to. I installed UnionFS over the existing drives, indicating the data and parity drives properly, designated the new shares, and configured SMB.

    The new shares show as empty through my Cloud Commander interface. However, the network sees the old shares as intact, just as they were before the UnionFS installation; they still sit in their old directories.

    So, should I use Midnight Commander and move the old shared directories/files to the new UnionFS shares? Where do the files end up? Are they physically repositioned to different drives, using the "existing path, most free space" policy, or do they stay where they were? I have not actually re-populated the new shares, so if there is a better way to do this, please advise... thanks for your help...
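Not the poster's actual setup, but a minimal sketch of why the files do not need to be moved: a union filesystem presents a live merged view of its branch disks, and creating the pool copies nothing. Simulated here with plain directories under /tmp (all paths made up):

```shell
#!/bin/sh
# Sketch: a union pool is a merged VIEW of its branches, not a copy.
mkdir -p /tmp/pooldemo/disk1/Movies /tmp/pooldemo/disk2/Music
touch /tmp/pooldemo/disk1/Movies/a.mkv /tmp/pooldemo/disk2/Music/b.mp3

# What the pool mount point would list is the union of the branches:
ls /tmp/pooldemo/disk1 /tmp/pooldemo/disk2

# "Existing path, most free space" (epmfs) only chooses which branch
# receives a NEW file; files already on a branch stay exactly where
# they are and still appear in the pooled view.
```

If the pooled shares look empty, the shared folder entries are likely still pointing at the old per-disk paths rather than the pool's mount point.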

    Thanks for your advice... I had misconfigured the shares and SMB, so that was the reason for the errors and the inability to connect from Windows computers... it's all good now.

    Thank you gderf for your help... I have successfully added data to the empty drive, added "content" to the parity drives, synced, and all appears OK. However, when I change the user privileges on the drive with the new data, I get the same error message. The only way I can make this error message disappear is by using "revert" for the privileges...

    So, at this point there is data on the drive, and I can see the data/directories using Cloud Commander in a browser, but I cannot configure users so I can see the drive from my Windows computers...

    It seems that changes to my user privileges trigger this error message...

    Possibly I may have to change the privileges via the CLI? That might change the privileges, but it would not fix the OMV interface problem... Any other suggestions? Thanks in advance!
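If the web UI keeps erroring out, permissions can indeed be set from the CLI. The path, mode, and user names below are examples, not the poster's actual configuration (demoed on a /tmp directory so nothing real is touched):

```shell
#!/bin/sh
# Sketch: setting share permissions from the CLI (example values only).
mkdir -p /tmp/acldemo/share

# Group-writable, world-readable is the usual OMV shared-folder layout;
# on a real share this would be something like:
#   chown root:users /srv/dev-disk-by-label-.../share
chmod 775 /tmp/acldemo/share

# Per-user access without changing ownership (needs the acl package):
#   setfacl -R -m u:someuser:rwx /srv/dev-disk-by-label-.../share

stat -c '%a' /tmp/acldemo/share
```

Note that CLI changes are invisible to OMV's database, so the Privileges dialog may later overwrite them.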

    I am working on a second OMV RAID box to determine whether SnapRAID is preferable to my working OMV box, which is a simple RAID 5 with a USB backup drive for safety.

    The SnapRAID array is configured with 4x3 TB drives and 2 parity drives... the data drives all have "content" and "data", while the parity drives are parity only.
    When making configuration changes in the SnapRAID interface, I get this error continuously...
    "Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; systemctl restart 'sharedfolders-snapthree.mount' 2>&1' with exit code '1': Assertion failed on job for sharedfolders-snapthree.mount."

    In the file system interface, all of the drives are "mounted", "referenced", and "online"...

    Sync has been run several times in the last couple of weeks, as well as a weekly scrub, and "check" was run this morning after the error would not go away. The specific configuration change that produced this error was an ACL change on another drive (the snapfour drive), to allow user access to the drive and file system from other computers...

    The "snapthree" referred to is a shared drive that has no data as of yet...

    Any idea how I can either correct the issue, or at least pin down what it is?
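One workaround often suggested on the OMV4 forum for sharedfolders-*.mount assertion failures is sketched below. These commands are an assumption to verify against your own version (and the second part changes how shares are referenced), so back up the configuration first:

```shell
# Reload systemd's view of the regenerated mount units, then retry
# the failing unit (name taken from the error message):
systemctl daemon-reload
systemctl restart sharedfolders-snapthree.mount

# If the units keep failing, OMV4 can reportedly be told not to
# bind-mount /sharedfolders at all (shares then use /srv/... paths):
#   echo 'OMV_SHAREDFOLDERS_DIR_ENABLED="NO"' >> /etc/default/openmediavault
#   omv-mkconf systemd
#   reboot
```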


    Thanks gderf, the steps seem to have worked and the rebuild has started... I will update progress for others to see...

    Finished: the parity rebuild went flawlessly and "everything is Okay".

    This array is SnapRAID...
    I have 1 parity drive and 5 data drives in a new install... I then deleted a data drive, added that drive back into the array as a parity drive, and now I get this message with "fix", "check", and "sync"...

    Self test...
    Loading state from /srv/dev-disk-by-id-ata-TOSHIBA_HDWD130_X7V0PPGAS/snapraid.content...
    Decoding error in '/srv/dev-disk-by-id-ata-TOSHIBA_HDWD130_X7V0PPGAS/snapraid.content' at offset 94
    The file CRC is correct!
    Disk 'raidtwo' with uuid '9063dc29-32e2-4ea2-b5e1-8c3fdadc88f9' not present in the configuration file!
    If you have removed it from the configuration file, please restore it

    How can I configure that drive as a parity drive? Would a reboot make this work?
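For reference, repurposing a disk is done in /etc/snapraid.conf: the disk's old `data` line has to go away and a `2-parity` line is added for it. The fragment below is illustrative; only the content path appears in the post, the other device labels are placeholders:

```text
# /etc/snapraid.conf (illustrative; labels other than the content path are made up)
parity   /srv/dev-disk-by-label-parity1/snapraid.parity
2-parity /srv/dev-disk-by-label-raidtwo/snapraid.2-parity

content  /srv/dev-disk-by-id-ata-TOSHIBA_HDWD130_X7V0PPGAS/snapraid.content
content  /var/snapraid.content

data d1  /srv/dev-disk-by-label-Archives_One
data d2  /srv/dev-disk-by-label-Archives_Two
# ... remaining data disks; there must be no data line left for "raidtwo"
```

A reboot will not help: SnapRAID refuses to run while its content file still remembers a disk that vanished from the config. Its documented removal procedure is to point the old `data` line at an empty directory, run `snapraid sync -E` (force-empty) so parity forgets the disk, and only then delete the line and re-add the disk as parity.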

    • Does the USB backup external drive need to be unplugged when the backup is finished, then re-connected manually for a refresh?
    • Does the backup only happen when the external drive is re-connected, or is it a dynamic process that happens whenever changes are made to files?
    • Should the external drive be permanently connected?
    • How do I verify the backups are even there?
    • Are the backups simply a mirror with files and trees preserved, or a compressed backup?
    • When restoring, is it a simple copy of the files back to the OMV RAID?
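As I understand the OMV USB backup plugin (worth verifying for your version), it runs an rsync job triggered when the drive is plugged in, so it is not continuous and the drive can stay connected or not; the result is an uncompressed mirror you can verify simply by browsing the drive. A sketch of what one mirror pass amounts to, demoed on /tmp directories with the rsync form in comments (real paths are not from the post):

```shell
#!/bin/sh
# Sketch: a mirror-style backup is a plain, uncompressed file-for-file
# copy of the source tree; restoring is the same copy in reverse.
mkdir -p /tmp/bkdemo/src /tmp/bkdemo/dst
echo "data" > /tmp/bkdemo/src/file.txt

# One mirror pass; with rsync this would be:
#   rsync -a --delete /tmp/bkdemo/src/ /tmp/bkdemo/dst/
cp -a /tmp/bkdemo/src/. /tmp/bkdemo/dst/

# Verifying the backup is just browsing the target tree:
ls -l /tmp/bkdemo/dst

# Restoring is the same copy in the other direction:
#   rsync -a /tmp/bkdemo/dst/ /srv/dev-disk-by-label-.../
```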