Updated snapraid from 11.3 to 11.5. Repeatedly deleted the parity file and content files and rescanned. Still ran out of parity.
5194 MiB used? I only have 1.4T of data... This thing was supposed to use ~1G of memory for 16T of data. I'm confused.
Log file starts:

```
msg:fatal: WARNING! Content file '/srv/dev-disk-by-label-3Ta/snapraid.content' not found, trying with another copy...
msg:fatal: WARNING! Content file '/srv/dev-disk-by-label-700g/snapraid.content' not found, trying with another copy...
msg:verbose: Excluding directory '/srv/dev-disk-by-label-3Ta/lost+found' for rule 'exclude lost+found/'
```
Middle of the log file, after scanning the disks, until out of parity is reached:

```
msg:verbose: Excluding directory '/srv/dev-disk-by-label-500g/lost+found' for rule 'exclude lost+found/'
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:3298534883328: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:3023656976384: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2954937499648: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952790016000: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952521580544: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952387362816: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952320253952: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952311865344: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952309768192: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952308719616: failed with error No space left on device
split:grow:/srv/dev-disk-by-label-3Tb/snapraid.parity:2952308195328: failed with error No space left on device
```
I'm crashing snapraid. This is on OMV5. I did test runs on the system with a 1T drive as parity and only the 500g and 700g as data & content, and things worked OK.
I upgraded drives and now have three drives for data and content (500Gb, 700Gb, and 3T) and one 3T drive for parity. Since then, every time I run sync it uses all the memory and crashes with an OOM error, unless I turn on swap, in which case it swaps its ass off and never completes the sync. I just started 'fresh' with the block size increased from 256 to 512, deleted the parity & content files, and ran a new sync. Same thing this time: high memory usage and load for hours, but now the parity drive filled up completely. I don't get it, as I only have 1.5T of data, mostly music and movies. I shouldn't be using this much memory or parity space.
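For what it's worth, the SnapRAID FAQ gives a rule of thumb of roughly 28 bytes of RAM per block, i.e. RAM ≈ total_data_bytes / block_size × 28. A quick back-of-the-envelope check for the numbers above (1.5T of data, 256 KiB blocks — the exact figures are assumptions from this post):

```shell
# Rough RAM estimate per the SnapRAID FAQ's ~28 bytes-per-block rule of thumb.
data=$((1500 * 1024 * 1024 * 1024))  # ~1.5 TiB of data in the array (assumed)
block=$((256 * 1024))                # 256 KiB block size (the old default here)
echo "$(( data / block * 28 / 1024 / 1024 )) MiB"  # prints "164 MiB"
```

So ~164 MiB would be expected for this array, not gigabytes — which suggests something other than the documented per-block overhead is eating memory and parity space.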
Union1 is simply a mergerfs pool of the 3 data drives, not included in the snapraid config.
snapraid config from the OMV interface:
What the heck am I doing wrong?
I've been suffering from this too, but my old junker is maxed out at 4G. Next step is changing the hash size and re-syncing.
Wow. How are those Pis so good? Seems like they can run a lot of stuff pretty efficiently.
My old junk dual core with 4G RAM is getting slower by the day. Feels like I have a tire slowly leaking air.
I get that all the time when the machine is at high load. I probably don't have enough ram. Did your changes succeed?
These guys are a helpful bunch and there's a lot of good info here.
What you are building sounds like what I built, but better. My Core 2 Duo with 4G RAM and a few random drives got me going. I added a few new 3T drives to get something with low hours into the machine. One of those is my snapraid parity drive. I worry less about the most recent changes not being 'covered' by a snapraid sync. And as ryecoaaron said, do rsync 'between' snapraid syncs.
I think you should be fine, but I am having load issues when Channels DVR, Plex, UrBackup, and snapraid are trying to work at the same time. I'm considering a bit more RAM. I've had no issues serving other types of files in addition to the media I have stored.
I am not doing any real transcoding on the machine. But you could get a reasonable video card for $40 and off-load those duties.
macom, let's say it wasn't an environment variable.
For example, I ran my plex container recently using:
which greatly reduced timeout-related log entries under heavy CPU load (when snapraid is working). But I can't figure out how to set up the same health timeout via the portainer interface. Any experience with this? I messed around with the command and entrypoint, but was not successful.
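In case it helps: the `docker run` health flags (`--health-cmd`, `--health-interval`, `--health-timeout`, `--health-retries`) map onto the `healthcheck:` keys of a compose file, which Portainer accepts via its Stacks feature. A sketch only — the image name, test command, and timings here are illustrative assumptions, not the original command from this thread:

```yaml
services:
  plex:
    image: plexinc/pms-docker   # whichever Plex image you actually run
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:32400/identity"]
      interval: 30s
      timeout: 60s      # the longer timeout that helped under load
      retries: 5
      start_period: 120s
```

Deploying this as a Portainer stack should give the same health behavior as the CLI flags, without fighting the per-container form.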
Docker CLI is pretty deep geekery, IMO. I have the same struggle with docker, but somehow get along OK with OMV.
Good luck garyi.
I found your thread looking for the same sort of info - something I ran from the docker command line 'works', whereas when I try to kick it off using the portainer interface, the results are not as positive. So much to learn...
ok, for the record...
I realized I can GIVE the swap any UUID, so I gave it the original UUID it had before I turned on folder2ram. I uncommented the original line in fstab. No error at reboot, no log entries every 30 min about swap. There had to be something else still looking for that original UUID at boot. And, damn, it's handy. Snapraid used the shit out of the swap yesterday for a full sync. Really glad I got it back.
Bob is now my uncle.
Maybe I don't. But I was needlessly running folder2ram when I have the system on a standard HD. I'm running on 4G. When snapraid is cranking through a sync, I would often get into out-of-memory situations. Right now, with a bit of swap, it seems happy with a sync running.
It's funny: swap is active and functioning, but the syslog is still showing those swap/start 'failed with dependency' errors every 30 minutes.
gderf, no resume file. Per the Flash Memory plugin 'optional' instructions:
7. If you disable swap, initramfs resume should be fixed to avoid mdadm messages.
- Remove the resume file:
  rm /etc/initramfs-tools/conf.d/resume
- Update initramfs:
  update-initramfs -u
Which I did; it seemed like a good idea at the time.
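For the record, that removal can be undone later if you bring swap back; a sketch, assuming the Debian initramfs-tools conf format and with the UUID as a placeholder (run as root):

```shell
# Recreate the resume conf pointing at your swap partition's UUID,
# then rebuild the initramfs so it takes effect at next boot.
echo "RESUME=UUID=<your-swap-uuid>" > /etc/initramfs-tools/conf.d/resume
update-initramfs -u
```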
ryecoaaron, that's how I got the UUID back for sda5. Before, it was just sitting there with no UUID, just a PartUUID. It has no LABEL, if that matters. So this time I did swapoff, then mkswap -L swap -U "put-in-the-uuid-I-already-had", and it succeeded. Then swapon gave me an error: couldn't find the swap device. So I ran swapon using the -L swap option and -f, which is some sort of reinitialize. Seems to be working, but no idea yet if it holds through a reboot.
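The sequence described above, as a command sketch (root required; the device and UUID are placeholders from this thread — double-check against blkid before touching anything, since mkswap destroys whatever is on the partition):

```shell
swapoff /dev/sda5                               # stop using the partition as swap
mkswap -L swap -U "<original-swap-uuid>" /dev/sda5   # recreate swap, restoring old UUID + a label
swapon -L swap -f                               # re-enable by label; -f reinitializes if needed
```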
It doesn't hold through reboot. I get "a start job is running for /dev/disk/by-uuid..." for the UUID of the swap area. After boot finishes, processes show swap doing nothing until I run swapon. I triple-checked the new UUID I put in fstab. Wondering what else I may have missed?
I got a new UUID assigned to the former swap partition and edited fstab to use the new UUID. At reboot, it timed out waiting for that 'device' to activate, and moved on with normal booting. In processes, OMV says swap is 0 bytes. I used swapon /dev/sda5 at the command line, and now OMV shows swap active. Time for a reboot.
ryecoaaron, I don't think I got the swap back.

```
May 16 19:10:35 openmediavault systemd: dev-disk-by\x2duuid-33ab2122\x2d943c\x2d4b08\x2d80a1\x2dcbe6051adef4.device: Job dev-disk-by\x2duuid-33ab2122\x2d943c\x2d4b08\x2d80a1\x2dcbe6051adef4.device/start timed out.
May 16 19:10:35 openmediavault systemd: dev-disk-by\x2duuid-33ab2122\x2d943c\x2d4b08\x2d80a1\x2dcbe6051adef4.swap: Job dev-disk-by\x2duuid-33ab2122\x2d943c\x2d4b08\x2d80a1\x2dcbe6051adef4.swap/start failed with result 'dependency'.
```
I'm getting this now every 30 min in my logs. I read that this long string (with \x2d standing in for each dash) is the UUID of the original swap space.
fstab:

```
/dev/disk/by-label/500g /srv/dev-disk-by-label-500g ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/700g /srv/dev-disk-by-label-700g ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/1T /srv/dev-disk-by-label-1T ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/3Ta /srv/dev-disk-by-label-3Ta ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/3Tb /srv/dev-disk-by-label-3Tb ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/srv/dev-disk-by-label-500g:/srv/dev-disk-by-label-700g:/srv/dev-disk-by-label-1T:/srv/dev-disk-by-label-3Ta /srv/2b1b4b4a-057f-485e-9085-2b7d5b0c26f3 fuse.mergerfs defaults,allow_other,c$
```
result of blkid:

```
/dev/sda1: LABEL="system" UUID="3f541e90-00e0-447b-afc1-ce422a3df23d" TYPE="ext4" PARTUUID="8f80a8c1-01"
/dev/sdb1: LABEL="500g" UUID="71860502-8460-4b63-92d1-5a7a39c1dde6" TYPE="ext4" PARTUUID="5f9fe823-d5ca-4baf-b5d6-2a8bc483a534"
/dev/sdd1: LABEL="1T" UUID="28a3020b-db12-4565-805b-b9336b404670" TYPE="ext4" PARTUUID="ade28930-3784-4744-a900-cda819ef6d0d"
/dev/sdf1: LABEL="3Tb" UUID="c9dc50a0-5768-4a3e-af18-87cf87b91aa8" TYPE="ext4" PARTUUID="79b20241-3963-45d5-adfc-51338193238f"
/dev/sde1: LABEL="3Ta" UUID="bd940368-d704-4831-a0e6-f350b0d13e49" TYPE="ext4" PARTUUID="79f183d1-d8a9-4533-ace6-23a7db705655"
/dev/sdc1: LABEL="700g" UUID="48506cf6-ef3f-4c08-abcf-83a33d3a29b5" TYPE="ext4" PARTUUID="e0490ce7-6245-45ba-8634-b05c814c77a8"
```
So, maybe the sda5 was the old swap space?
A guy in this thread has a procedure which looks like a likely template, but I'm nervous about gparted...
```
folder2ram 2.0G  32M 1.9G 2% /var/log
folder2ram 2.0G    0 2.0G 0% /var/tmp
folder2ram 2.0G 2.8M 2.0G 1% /var/lib/openmediavault/rrd
folder2ram 2.0G  20K 2.0G 1% /var/spool
folder2ram 2.0G  25M 1.9G 2% /var/lib/rrdcached
folder2ram 2.0G 4.0K 2.0G 1% /var/lib/monit
folder2ram 2.0G 1.6M 2.0G 1% /var/cache/samba
```
I've got 4G, so I imagine it tries to allocate 50%. I'm gonna shut it off and see how it impacts performance & OOM errors.
I don't have a ton of memory (4G), and I keep running into out-of-memory 'freezes'. I'm wondering if turning off folder2ram will help. I have a conventional disk, so I don't really need it. I just turned it on as a newbie might, twisting all the dials to see what they do.
One, how can I best identify the impact on RAM usage?
Two, what is the correct order of operations to disable folder2ram if I did the fstab modifications in the GUI notes? (Add noatime and nodiratime to root options, comment out the swap partition)
My first guess would be to reverse the edits, then reboot, then disable the plugin.
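If it helps anyone following along, a sketch of what "reverse the edits" means in fstab (illustrative only — the UUIDs are placeholders, and you should verify yours against blkid before rebooting):

```
# /etc/fstab — reversing the flash-memory plugin's optional edits:
# 1) root line: drop noatime,nodiratime from the options field again
UUID=<root-uuid>  /     ext4  defaults,errors=remount-ro  0  1
# 2) un-comment the swap line that was commented out
UUID=<swap-uuid>  none  swap  sw                          0  0
```

After that, reboot, confirm swap is active (e.g. with swapon --show), and only then disable the plugin.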
Netdata is looking pretty cool.
I got frontail working; it's lightweight. Gotta experiment.