I intentionally moved a lot of files from one folder to another in my SnapRAID setup. The second folder is on a USB disk (which is part of the SnapRAID array). I wonder why I got error messages about the missing files on the next sync. Is there a threshold above which error messages appear? Otherwise I would expect an error message for every single file that gets deleted (intentionally). Or is it because I moved the files, and moved them to a USB disk? I already know that, due to the USB connection, SnapRAID will be unable to recognize moves.
From this question and the answer by frostschutz I know that it should be possible to configure long SMART tests that run in parts. Is there a way to set this up in openmediavault? First and foremost: is `smartd` running with `--savestates`, and can I manually add extra configuration on the command line in addition to the GUI setup?
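From that answer, the mechanism would be smartd's selective self-tests: the `n` test letter in a `-s` schedule runs the *next* span of a long test each time it fires. A sketch of what the directive could look like (the device path and the schedule are placeholders; I have not verified where OMV generates its smartd.conf):

```shell
# /etc/smartd.conf fragment (sketch)
# -a              : monitor all SMART attributes
# -s n/../.././02 : at 02:00 every night, run the NEXT selective
#                   self-test span, so a long test completes piecewise
#                   over several days instead of in one run
/dev/sda -a -s n/../.././02
```

With `--savestates` enabled, smartd writes its state to disk, which should let the spans continue across restarts.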
PS: The question/answer might be old, but I recently got a reply from the author and he still uses it this way.
BTW: I am trying to do this because the SMART test I ran on a 14 TB USB drive (Elements) did not finish. After it first failed with `aborted by host` at 90% remaining, I used a script to keep the disk awake. But it still left the last `10%` unfinished for several hours.
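The keep-awake script itself isn't shown above; a minimal sketch of the idea as a cron entry (the device path /dev/sdX and the 5-minute interval are assumptions, not my actual script):

```shell
# /etc/cron.d/keep-disk-awake (sketch): read one small block directly
# from the disk every 5 minutes so the USB bridge cannot put it into
# standby while the long SMART test is running
*/5 * * * * root dd if=/dev/sdX of=/dev/null bs=4k count=1 iflag=direct
```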
Also, can you please remove the animated bell from the forum website? A red icon is enough.
I changed from RAID6 to SnapRAID. I remember reading setup advice on RAID here that said "never set spindown" for HDDs when using autoshutdown. So I ran fine for some years with the system shutting down after 30 minutes of being idle, instead of spinning the disks down while the rest of the system kept consuming power.
Now that I have changed my setup to SnapRAID, I wonder if I have to revisit my spindown settings in Power Management. I decided to select "Minimum consumption WITH spindown" and set the spindown to 10 minutes. The whole system shuts down after 60 minutes of being idle.
I think this is better because SnapRAID will spin up only the one disk it needs when there is some access. I could even leave the whole thing on a bit longer, or maybe even the whole day, because now the disks spin down individually, whereas with RAID all of the disks had to spin up on any access, and that was clearly too much power consumption.
Is this a good setup?
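For reference, my understanding is that the 10-minute spindown set in OMV's Power Management corresponds roughly to this hdparm call (device path is a placeholder, and not every USB bridge passes `-S` through):

```shell
# -S takes units of 5 seconds for values 1-240,
# so 120 * 5 s = 600 s = 10 minutes of idle before spindown
hdparm -S 120 /dev/sdX
```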
I added an older 3 TB HDD to my SnapRAID and copied stuff there from my old RAID6. Then I ran a sync, which gave some I/O errors at the beginning but then ran fine for hours over the 3 TB.
After this was done I ran `snapraid status`, which gave:

They are from block 42264 to 50019, specifically at blocks: 42264 42265 42266 42267 42268 42295 42296 42297 42298 45004 45005 45006 45007 45008 45009 45010 45011 46764 46765 46766 46767 46768 46769 46770 46771 46772 49256 49257 49258 49259 49260 49261 49262 49263 49264 49265 49266 49267 49268 49269 49273 49274 49275 50012 50013 50014 50015 50016 50017 50018 50019

I now wonder what exactly an error means here and how snapraid even discovered those blocks. If there was an I/O error during the initial sync, how can the parity be correct? Or did the I/O error simply result in the error being logged/synced instead of the real data?
I don't care much about those blocks of data since they belong to older stuff anyway, but I would like to be able to interpret the message.
On the other hand, the drive I added seems unable to complete a SMART test (fatal error), so I will swap it out anyway. Should I run `snapraid -e fix` on the errors before doing that? I am unsure whether this would make anything better or worse, since I am not sure the fix would result in good data. Or would a fix try to re-read the faulty blocks again?
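For the record, this is the sequence I am considering, as I read the manual (a sketch; `check` is read-only, so it at least cannot make things worse):

```shell
# read-only verification of data and parity first
snapraid check

# rewrite only the files marked as having errors (-e),
# reconstructing them from parity on the other disks
snapraid -e fix

# re-verify just the blocks that were marked bad
snapraid -p bad scrub
```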
For the people who, like me, are searching for a reason why a shared folder is still referenced although no service is using it:
In my case I had an entry in the Users submenu setting that shared folder as the location for user home directories. The item was disabled but was still the cause of the reference.
I solved the problem with a workaround that seems to be standard in higher versions of OMV than mine:
Adding `x-systemd.requires` to the options of the union filesystem in the plugin window. So for my union of two disks the options are
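Since the options box did not survive the copy here: the resulting mergerfs option string would look like this, with one `x-systemd.requires` entry per underlying disk (the labels 3TB01/Barac2 are examples taken from the fstab quoted later in this thread; use your own branch mountpoints):

```shell
defaults,allow_other,direct_io,use_ino,category.create=mfs,minfreespace=4G,x-systemd.requires=/srv/dev-disk-by-label-3TB01,x-systemd.requires=/srv/dev-disk-by-label-Barac2
```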
The only thing I now have to remember is to add this for each new disk added to the union. I suggest putting a comment into /etc/fstab as a reminder.
I experienced this issue after creating a union filesystem today. In my case, the system mounted the FUSE mountpoint before the underlying disks were mounted. Immediately afterwards it attempted to mount the sharedfolders, which failed because no data was there yet. Only some time after the sharedfolders failed did it mount the disks.
I was able to solve this by adding `x-systemd.requires` to the options of the union filesystem in the plugin window. So for my union of two disks the options are
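The options from the plugin window did not copy over; schematically it is the normal mergerfs option string plus one `x-systemd.requires` entry per branch (the two label paths below are placeholders for your own branch mountpoints):

```shell
defaults,allow_other,use_ino,category.create=mfs,x-systemd.requires=/srv/dev-disk-by-label-DISK1,x-systemd.requires=/srv/dev-disk-by-label-DISK2
```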
Now they seem to be loading fine.
I had the same problem and this solution works fine and is less intrusive than all the other ones. The only thing I now have to remember is to add this for each new disk in the union. I suggest putting a comment into /etc/fstab as a reminder.
After a shutdown triggered by autoshutdown, /sharedfolders/Union is again not mounted on the next boot.
My /etc/fstab looks like this:

/dev/disk/by-label/Barac /srv/dev-disk-by-label-Barac ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/Barac2 /srv/dev-disk-by-label-Barac2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/3TB01 /srv/dev-disk-by-label-3TB01 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/BenRAID6 /srv/dev-disk-by-label-BenRAID6 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
//192.168.100.90/hgst/HGST /srv/6e1d9c6a-b7fa-4008-9b27-bc1c5f07bf49 cifs credentials=/root/.cifscredentials-f103d6d5-714a-46ea-8638-d4729bc532a7,_netdev,iocharset=utf8,vers=2.0,nofail 0 0
/srv/dev-disk-by-label-3TB01:/srv/dev-disk-by-label-Barac2 /srv/333d269a-8fbc-4994-b2d0-af41e84909b2 fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=mfs,minfreespace=4G 0 0
The share is in config.xml:
Maybe it is not mounted on startup because some other mount takes too long?
I double-checked it again:
/sharedfolders/Union does not show any content from the command line (because it seems not to be mounted onto the actual UnionFS).
But the Samba share that uses this exact shared folder Union [on Union, Union/] works as expected: it shows content and is mounted when accessed from a client.
Maybe someone can at least confirm this.
I also had the wrong disk in the Union, which explains the parity-file error. Sorry for the confusion; it arose from the fact that I had set up the SnapRAID parity on the first disk, which was odd for me.
No, that was a mistake, sorry. Changed it to 4.x.
Except locally. Then I might use either the /srv/nfs/someshare path or the /srv/crazypath/someshare path.
That's my point. Maybe it is natural that on the server itself it is not possible to access the mergerfs filesystem via the /sharedfolders/Union path.
Well, you probably have a Samba share related to it, no? If so, you might check whether you are able to copy something via the command line to that share's mountpoint, or whether it is only possible to copy it to /srv/... That would be interesting.
Well, my system is working fine apart from this issue. But you might be right that I manually edited /etc/fstab at some point. It is already good to know that you think the mountpoint should normally work the way I expected it to.
I have a setup with SnapRAID and union filesystems, the latter being mergerfs by default as far as I know.
I have three disks in my SnapRAID, one being parity. With the two data disks I set up a union filesystem.
I set up a shared folder pointing to the union filesystem.
This results in an entry /sharedfolders/Union. However, if I copy something to this folder with Midnight Commander, it is not copied to the union filesystem but to the mountpoint on the system SSD. Copying over Samba does work, though. On the other hand, I have other entries in /sharedfolders which are also Samba shares, and those are real mountpoints which I can also use from the command line.
If I copy something to the /sharedfolders/Union folder, it is copied to the local system disk because nothing is mounted at this mountpoint. But if I access it via Samba from a client, it works as expected and the data is copied to the union filesystem.
If I want to use the union over the terminal I have to copy to /srv/333d269a-8fbc-4994-b2d0-af41e84909b2/Union. It then seems to work as expected, putting the files on the drive with the most free space.
On the other hand, I see the file snapraid.parity with a size of 10 TB in /srv/333d269a-8fbc-4994-b2d0-af41e84909b2 (the union mount), which is odd, because this parity file is located on the parity disk, which I did NOT include in the union filesystem. How did it get there? Does snapraid automatically put a link to it on every disk?
Can someone please explain this behaviour?
If I don't specify user home directories, which `bashrc` or `profile` is used when I log into the system via SSH as this user, and where is it stored?
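For context, my understanding on a Debian-based system like OMV (assuming bash is the login shell): without per-user dotfiles, a login shell only reads the system-wide files, which you can list to confirm they exist:

```shell
# With no ~/.bash_profile, ~/.bash_login or ~/.profile present, a bash
# login shell reads only /etc/profile; on Debian that file additionally
# sources /etc/bash.bashrc for interactive shells.
ls -l /etc/profile
```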
My system is running fine. I don't have the "killed" on line 16 as you do.
When I start the sync again, the high load comes back. The sync started at over 20 MB/s and is now at 17 or 18. Memory usage is at 30% (4 GB RAM). This indicates that the disk write process is the problem, right? But when I initially copied all my data to the data disk I had no problems, and 10 TB took just about 17 hours in total. Maybe the parity disk is broken. But if the sync is already at 75% after some days, it's not THAT bad...