Posts by bvrulez

    I changed from RAID6 to SnapRaid. I remember reading installation advice on RAID setups here that said "never set spindown" for HDDs when using autoshutdown. So I ran fine for some years having the system shut down after 30 minutes of being idle, instead of spinning the disks down and leaving the rest of the system powered on.


    Now I have changed my setup to SnapRaid and I wonder whether I have to change my spindown settings in Power Management. I decided to select "Minimum consumption WITH spindown" and set spindown to 10 minutes. The whole system shuts down after 60 minutes of being idle.


    I think this is better because SnapRaid will spin up only the one disk it needs when there is some access. I could even leave the whole thing on a little longer, or maybe even the whole day, because now the disks will spin down. With a RAID, all of the disks had to spin up whenever there was access, and that was clearly too much power consumption.


    Is this a good setup?
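
    As far as I know OMV sets the spindown timer via hdparm under the hood, so one way to double-check that a disk really spins down would be something like this (assuming /dev/sdb is one of the data disks; adjust to your devices):

    Code
    # report the current power state of the disk (active/idle vs. standby)
    hdparm -C /dev/sdb
    # set the standby (spindown) timeout by hand: 120 * 5 s = 600 s = 10 minutes
    hdparm -S 120 /dev/sdb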

    I added an older 3TB HDD to my SnapRaid and copied stuff there from my old RAID6. Then I ran a sync, which gave some I/O errors at the beginning but then ran fine for hours over the 3TB.


    After this was done I ran snapraid status, which gave:


    I now wonder what exactly an error means here and how snapraid even discovered those. If there was an I/O error on the initial sync, how can the parity be correct? Or did the I/O error simply result in an error being logged/synced instead of the real data?


    I don't care so much about those blocks of data, since they belong to older stuff anyway, but I would like to be able to interpret the message.


    On the other hand, the drive I inserted seems unable to successfully run a SMART test (fatal error), so I will swap it out anyway. Should I run snapraid -e fix on the errors before that? I am unsure whether this would make anything better or worse, since I am not sure the fix will result in good data. Or would a fix try to re-read the faulty blocks again?
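
    If I read the manual correctly, -e only touches the blocks marked as bad, so the sequence I would try is roughly this (just a sketch, not yet tested on this array):

    Code
    # list the blocks currently marked as bad
    snapraid status
    # try to fix only the blocks marked with an error during sync/scrub
    snapraid -e fix
    # re-read and verify just those blocks afterwards
    snapraid -e check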


    Thanks!

    For people who, like me, are searching for a reason why a shared folder is still referenced even though no service is using it:


    In my case I had an entry in the user submenu for the user home directory, specifying where to create those. The item was disabled but was still the reason for the reference.

    I solved the problem with a workaround that seems to be standard in newer versions of OMV than mine:


    Adding x-systemd.requires to the options for the union filesystem in the plugin window. For my union of two disks the options are:

    defaults,allow_other,use_ino,category.search=newest,x-systemd.requires=/srv/dev-disk-by-label-HDD1,x-systemd.requires=/srv/dev-disk-by-label-HDD2


    The only thing I now have to remember is to add this for each new disk in the union. I suggest putting a comment into /etc/fstab as a reminder.
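
    For illustration, the resulting mergerfs line in /etc/fstab then looks roughly like this (the mount point UUID is just a placeholder; adjust labels and mount point to your setup):

    Code
    # union of HDD1 and HDD2; x-systemd.requires makes systemd mount the branches first
    /srv/dev-disk-by-label-HDD1:/srv/dev-disk-by-label-HDD2 /srv/<union-uuid> fuse.mergerfs defaults,allow_other,use_ino,category.search=newest,x-systemd.requires=/srv/dev-disk-by-label-HDD1,x-systemd.requires=/srv/dev-disk-by-label-HDD2 0 0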

    I experienced this issue after creating a union filesystem today. In my case, the system mounted the FUSE mount point before the underlying disks were mounted. Immediately afterwards it attempted to mount the sharedfolders, which failed because no data was there yet. Only some time after the sharedfolders had failed did it mount the disks.


    I was able to solve this by adding x-systemd.requires to the options for the union filesystem in the plugin window. For my union of two disks the options are
    defaults,allow_other,use_ino,category.search=newest,x-systemd.requires=/srv/dev-disk-by-label-backup,x-systemd.requires=/srv/dev-disk-by-label-backup2
    Now they seem to be loading fine.

    I had the same problem, and this solution works fine and is less intrusive than all the other ones. The only thing I now have to remember is to add this for each new disk in the union. I suggest putting a comment into /etc/fstab as a reminder.

    gderf,


    After a shutdown triggered by autoshutdown, /sharedfolders/Union is again not mounted.


    My /etc/fstab looks like this:

    Code
    # >>> [openmediavault]
    /dev/disk/by-label/Barac /srv/dev-disk-by-label-Barac ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/Barac2 /srv/dev-disk-by-label-Barac2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/3TB01 /srv/dev-disk-by-label-3TB01 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/BenRAID6 /srv/dev-disk-by-label-BenRAID6 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    //192.168.100.90/hgst/HGST /srv/6e1d9c6a-b7fa-4008-9b27-bc1c5f07bf49 cifs credentials=/root/.cifscredentials-f103d6d5-714a-46ea-8638-d4729bc532a7,_netdev,iocharset=utf8,vers=2.0,nofail 0 0
    /srv/dev-disk-by-label-3TB01:/srv/dev-disk-by-label-Barac2 /srv/333d269a-8fbc-4994-b2d0-af41e84909b2 fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=mfs,minfreespace=4G 0 0
    # <<< [openmediavault]

    The share is in config.xml:


    Code
    <sharedfolder>
    <uuid>2ce306c6-18a4-42c5-8aa9-0a39dcbdcfe2</uuid>
    <name>Union</name>
    <comment></comment>
    <mntentref>0f4544d6-0edc-4c36-bd96-d493e9bc1f5d</mntentref>
    <reldirpath>Union/</reldirpath>
    <privileges></privileges>
    </sharedfolder>
    </shares>


    Maybe it is not mounted on startup because some other mount takes too long?
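
    One way to check whether the mount ordering at boot is the culprit (just a diagnostic idea) would be:

    Code
    # mount-related messages of the current boot, in chronological order
    journalctl -b | grep -iE 'mergerfs|mount'
    # any mount units that failed during boot
    systemctl list-units --type=mount --state=failed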

    I double-checked it again:


    /sharedfolders/Union does not show any content from the command line (because it seems not to be mounted to the actual union filesystem).


    But the Samba share that uses this exact shared folder Union [on Union, Union/] works as expected, shows content, and is mounted when accessed from a client.


    Maybe someone can at least confirm this.
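
    For what it is worth, this is how I would verify it, assuming OMV creates the /sharedfolders entries as bind mounts:

    Code
    # should show a bind mount if the shared folder is actually mounted; prints nothing otherwise
    findmnt /sharedfolders/Union
    # the union filesystem itself
    findmnt /srv/333d269a-8fbc-4994-b2d0-af41e84909b2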

    I have a setup with SnapRaid and union filesystems, the latter being mergerfs by default as far as I know.


    I have three disks in my SnapRaid, one being parity. With the two data disks I set up a union filesystem.


    I set up a shared folder pointing to the union filesystem.


    This results in an entry /sharedfolders/Union. However, if I copy something to this folder with Midnight Commander it is not copied to the union filesystem but into the mount point directory on the system SSD. It does work when copying over Samba, though. On the other hand, I have other entries in /sharedfolders which are also Samba shares, and those are mount points that I can also use on the command line.


    If I copy something to the /sharedfolders/Union folder it will be copied to the local system disk, because there is nothing mounted at this mount point. But if I access it via Samba from a client it works as expected, copying the files to the union filesystem.


    If I want to use the union from the terminal I have to copy to /srv/333d269a-8fbc-4994-b2d0-af41e84909b2/Union. That seems to work as expected, putting the files on the drive with the most free space.


    On the other hand, I see the file snapraid.parity with a size of 10TB in /srv/333d269a-8fbc-4994-b2d0-af41e84909b2 (the union mount), which is odd, because this parity file is located on the parity disk, which I did NOT include in the union filesystem. How did it get there? Does snapraid automatically put a link to it on every disk?


    Can someone please explain this behaviour?
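
    In case it helps to narrow this down, one could check on which physical disk the file actually lives and how the disks are assigned, assuming the OMV plugin writes the usual /etc/snapraid.conf:

    Code
    # show on which branch(es) a snapraid.parity file physically exists
    ls -lh /srv/dev-disk-by-label-*/snapraid.parity
    # show which disks are configured as parity, content and data
    grep -E '^(parity|content|data|disk)' /etc/snapraid.conf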

    When I start the sync again the high load comes back. The sync started at over 20 MB/s and is now at 17 or 18 MB/s. Memory usage is at 30% (of 4GB RAM). This indicates that the disk write process is the problem, right? But when I initially copied all my data to the data disk I had no problems, and 10TB took just 17 hours or so in total. Maybe the parity disk is broken. But if the sync is already at 75% after a few days, it's not THAT bad...

    I checked CPU usage and noticed high I/O load. I had this once with a failed drive, but since the drives are new I don't think that is the case here. So I restarted the array, and since then the high load is gone. I came across the SnapRaid manual and ran `snapraid status`, which reported that a sync is ongoing and 75% of my array is synced. I will check whether this value goes up while the load stays low. At the moment I don't see a `snapraid` process running in `top`, so I suspect the load is low because the sync is halted.
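
    To see whether it really is disk I/O and whether a sync is still running at all, I would probably watch it with something like this (iostat comes from the sysstat package):

    Code
    # confirm whether a snapraid process is still alive (the [s] excludes the grep itself)
    ps aux | grep '[s]napraid'
    # per-disk utilisation and wait times, refreshed every 5 seconds
    iostat -x 5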

    I also checked the system log and found nothing special. My disks are pretty full (over 90%), which generates a monit warning for both of them.

    I newly installed SnapRaid and noticed that I have a higher load average than before (when I only had a RAID6). I now wonder whether this is due to the initial synchronization of the two disks (one parity, one data, for now) or whether it is normal that SnapRaid uses 20% of my memory and about 10% of CPU when idle. I noticed this because my autoshutdown is not triggering any more due to the load average. I can of course raise the threshold from currently 40 to maybe 200, but I wonder if that would harm the initial sync. Then again, I think it might not have anything to do with syncing, because there is no HDD activity. So I wonder why SnapRaid is using this much CPU time.