Posts by mneese

    I did have a question about your 6-drive setup using SnapRAID. If I understood correctly from another conversation, SnapRAID only permanently persists changes when it is executed; otherwise you can revert changes? Or perhaps I didn't get it at all? Could you explain SnapRAID and union?

    If I remember correctly, any deletions or changes are not permanent until you "sync" to update the parity. Union is for pooling: it creates one volume whose files are distributed across different drives but accessed through a single interface. My understanding is that the union writes new files to the most empty disk. So, in short, you don't have to go from disk to disk to find a file or directory; everything is presented in one place.
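    The cycle described above can be sketched as commands. This is a hedged sketch, assuming the snapraid package is installed; on a machine without it, the script just reports that and moves on.

    ```shell
    # Sketch of the SnapRAID cycle: changes sit unprotected (and revertible
    # via "snapraid fix") until a sync commits them to parity.
    if command -v snapraid >/dev/null 2>&1; then
        snapraid diff          # preview files added/changed/deleted since the last sync
        snapraid sync          # commit those changes into the parity files
        snapraid scrub -p 5    # re-check 5% of the array for silent corruption
        status="synced"
    else
        status="snapraid not installed"
    fi
    echo "$status"
    ```

    Running `diff` before `sync` is the safety net: it shows exactly what a sync would make permanent.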

    Please consider that I am not a Linux pro in any fashion; I am a photographer, and this OMV NAS storage system offered the easiest-to-use option for novices like me. So my short explanations may be imprecise, but for me this is the best option for a NAS system.

    And, this forum is FANTASTIC for any level of help you may need...

    I have two boxes with no RAID cards (not needed): one with 16 GB RAM, the other with 24 GB; one with two CPUs and one with a single CPU. Both sets of hardware work with nothing stressed. No RAID hardware, because RAID is a feature of the software.

    One config is RAID 5 with four drives, and the other is six drives using SnapRAID and union (ext4 filesystems). It provides many of the benefits of ZFS: drives are not striped and are easily swapped out on failure. Two parity drives and four data drives provide protection and no dead files.
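    For reference, a layout like the one described (two parity drives, four data drives) is usually expressed in `/etc/snapraid.conf` roughly like this. The labels below are made up, and on OMV the SnapRAID plugin generates this file for you:

    ```
    # hypothetical snapraid.conf for a 2-parity / 4-data array
    parity   /srv/dev-disk-by-label-Parity1/snapraid.parity
    2-parity /srv/dev-disk-by-label-Parity2/snapraid.2-parity

    # content files record what the array held at the last sync;
    # keeping copies on several disks is the usual practice
    content /var/snapraid.content
    content /srv/dev-disk-by-label-Data1/snapraid.content

    data d1 /srv/dev-disk-by-label-Data1/
    data d2 /srv/dev-disk-by-label-Data2/
    data d3 /srv/dev-disk-by-label-Data3/
    data d4 /srv/dev-disk-by-label-Data4/
    ```

    Because each disk is a plain ext4 filesystem, any single data drive can still be read on its own if the rest of the box dies.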

    The RAID 5 box is faster because of striping, providing fast access for video and Photoshop work; the SnapRAID box is for files that are rarely accessed, deep storage for archiving.

    Both boxes are connected to USB backup drives for easy rsync backups as needed.
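    A hedged sketch of that USB backup step, using scratch directories to stand in for a data share and the USB drive (on the real box the paths would live under `/srv/dev-disk-by-label-...`):

    ```shell
    # Scratch directories standing in for the source share and the USB disk.
    src=$(mktemp -d)
    dst=$(mktemp -d)
    echo "photo data" > "$src/IMG_0001.jpg"

    # -a = archive mode (recursive, preserves times/permissions);
    # --delete mirrors deletions so the backup matches the source exactly.
    rsync -a --delete "$src/" "$dst/"
    ls "$dst"
    ```

    Adding `--dry-run` first is a cheap way to preview what a mirror-style backup will delete before letting it run for real.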

    Overall, it has the best web GUI interface, and remember that the original FreeNAS developer dropped that project and started OMV because he felt Debian offered more options...

    Just an update: I had 133 GB of log files (syslog and daemon.log). I deleted the older ones, was then able to use omv-firstaid to run "apt clean" and "clear web GUI cache", could then open the admin web GUI, and then disabled the ClamAV "on-access scans" for all drives, which stopped the continuous errors in both log files... Now back to normal operation...

    No need to re-install. I deleted the old syslog and daemon.log files, which totaled 133 GB, then ran omv-firstaid to "apt clean" and "clear web GUI cache", was then able to access the web GUI, and turned off the ClamAV "on-access scans"... this stopped the continuous errors in both log files...

    Back to normal with no need to re-install!!!
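    The cleanup described above can be sketched as commands. This demo runs in a scratch directory standing in for `/var/log`; on the real box the files would be `/var/log/syslog` and `/var/log/daemon.log`.

    ```shell
    # Scratch directory with a rotated log and a live log.
    logdir=$(mktemp -d)
    printf 'old entries\n'  > "$logdir/syslog.1"
    printf 'live entries\n' > "$logdir/syslog"

    rm -f "$logdir"/syslog.[0-9]*    # rotated copies can be deleted outright
    truncate -s 0 "$logdir/syslog"   # truncate the live file instead of deleting it,
                                     # so the logging daemon keeps a valid file handle
    ls "$logdir"
    ```

    Truncating rather than deleting the live file matters: a daemon holding an open handle to a deleted file keeps the space allocated until it restarts.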

    Mar 22 10:56:45 nwvault monit[904]: Queued event file: unable to read event size -- end of file

    Mar 22 10:57:15 nwvault monit[904]: Queued event file: unable to read event size -- end of file

    Mar 22 10:57:45 nwvault monit[904]: Queued event file: unable to read event size -- end of file

    [the same line repeats roughly ten times at each of those timestamps]

    Thanks for the help... I have gotten to the log directory, which has a 133 GB folder size.

    The syslog shows virtually continuous errors from monit:


    monit[904]: Queued event file: unable to read event size -- end of file


    I do not know what that means...
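    For what it's worth, that monit message usually means its queued event files were truncated or emptied when the disk filled. A hedged fix is to clear the queue and restart monit; the path below is a common Debian/OMV default, so confirm it against the `set eventqueue basedir ...` line in your monit configuration before trusting it.

    ```shell
    # Assumed event-queue location; verify against monit's own config.
    QUEUE=/var/lib/monit/events
    if [ -d "$QUEUE" ]; then
        rm -f "$QUEUE"/*          # drop the corrupt queued-event files
        systemctl restart monit   # let monit rebuild a clean queue
        result="queue cleared"
    else
        result="no event queue at $QUEUE"
    fi
    echo "$result"
    ```

    The queued files themselves are disposable; monit only uses them to retry alert deliveries that failed earlier.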

    Hi, my OS drive is full from some glitch or something... no space on disk. This is OMV 4, with SnapRAID, UnionFS, and ext4 drives, both data and 2 parity drives.

    I can access the system using PuTTY, and I can see the shared directories from the network as well as in "file commander" (?), but there is no GUI interface. I also have a backup.

    I need advice on best practices, the anticipated sequence of restoration, and what to expect.


    I gather from research in this forum that OMV/SnapRAID can be re-installed using the existing data disks. Will the new install simply mount the drives and recognize their names, and whether they are data or parity drives, etc., using the original configuration?

    I understand that disconnecting the data and parity disks during the OMV install is recommended. Is SnapRAID then installed with no disks attached, after which the disks are reconnected? Or are the data disks connected prior to the SnapRAID installation? This system was flawless for three years, but after I installed Bitwarden I started to have multiple issues immediately, and within a couple of days the OS disk was full. I assume from continuous error logging.

    Any info is appreciated.

    Oops, I think this is my OS disk... 240 GB:


    df -hx --max-depth=1 /

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       213G  212G     0 100% /


    What should I do about this?
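    A common way to find what filled the disk is to walk the largest directories level by level with `du` (`-x` stays on the root filesystem). Demonstrated here on a scratch tree; on the NAS you would start from `/` and, per the later posts, typically end up in `/var/log`.

    ```shell
    # Scratch tree standing in for the root filesystem.
    root=$(mktemp -d)
    mkdir -p "$root/var/log"
    head -c 1048576 /dev/zero > "$root/var/log/syslog"   # 1 MiB stand-in for a runaway log

    # Biggest directories bubble to the bottom of the sorted output.
    du -xh --max-depth=2 "$root" | sort -h
    ```

    Repeating the command with a deeper `--max-depth`, starting from whichever directory dominates, homes in on the culprit in a few steps.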


    Shared folders are intact and can be seen from other computers.

    I am using OMV 4, SnapRAID, and UnionFS with ext4...

    I've been having trouble logging in: it wasn't accepting passwords, just looping back to the login screen without errors. I logged in via SSH as root and tried omv-firstaid to change the password. This failed, saying there was no disk space. The OS disk is a 120 GB SSD, so it doesn't seem like it should be full.

    From omv-firstaid I ran "apt clean"; now login gives me this:


    Error #0:

    OMV\Exception: Failed to read file '/var/cache/openmediavault/cache.omv\controlpanel\login_js.json' (size=0). in /usr/share/php/openmediavault/json/file.inc:207


    I can see the files from other computers. I have rebooted several times from the SSH interface, but the error does not change...


    Any ideas?
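    One reading of that error: the zero-byte JSON cache file was written while the disk was full, so the web GUI cannot render the login page. Removing the cache lets OMV regenerate it (omv-firstaid's "clear web control panel cache" does much the same). The paths and service names below are OMV 4 defaults and should be treated as assumptions.

    ```shell
    # Assumed OMV 4 cache location and service names; verify on your system.
    CACHE=/var/cache/openmediavault
    if [ -d "$CACHE" ]; then
        rm -f "$CACHE"/cache.omv*.json              # drop the truncated cache files
        systemctl restart openmediavault-engined    # let the backend rebuild them
        msg="cache cleared"
    else
        msg="no OMV cache dir at $CACHE"
    fi
    echo "$msg"
    ```

    Nothing irreplaceable lives in that cache directory; OMV rewrites the files on the next page load.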

    root@nwvault:~# df -i

    Filesystem      Inodes  IUsed      IFree IUse% Mounted on
    udev           2046509    497    2046012    1% /dev
    tmpfs          2051518    777    2050741    1% /run
    /dev/sda1     14221312 133126   14088186    1% /
    tmpfs          2051518      1    2051517    1% /dev/shm
    tmpfs          2051518      3    2051515    1% /run/lock
    tmpfs          2051518     16    2051502    1% /sys/fs/cgroup
    tmpfs          2051518     11    2051507    1% /tmp
    label-Archives_Four:label-Archives_One:id-ata-TOSHIBA_HDWD130_X7V0PPGAS:label-Archive_Five
                 732594176 345206  732248970    1% /srv/0808f8e1-f56e-4184-b872-19fd729aec5d
    /dev/sdc1    183148544     14  183148530    1% /srv/dev-disk-by-label-Archives_Two
    /dev/sdf     183148544     14  183148530    1% /srv/dev-disk-by-label-Archives_Three
    /dev/sdd1    183148544  31798  183116746    1% /sharedfolders/snapfour
    /dev/sde     183148544   5680  183142864    1% /sharedfolders/snaptwo
    /dev/sdb1    183148544 271065  182877479    1% /sharedfolders/snapone
    /dev/sdg     183148544  36663  183111881    1% /srv/dev-disk-by-label-Archive_Five



    Does this mean anything?