Posts by bunducafe

    Hi folks,


    Good news: Notifications are working. Bad news: They're working "too well".


    While running a scheduled job with SnapRAID commands I found 60 notification emails this morning. One would be enough, to be honest, but it keeps sending them out. Has anybody run into this issue and knows how to solve it?


    First email at 5:01 am - output:


    Code
    Self test...
    The lock file '/srv/dev-disk-by-uuid-e745f412-2c30-48fb/snapraid.content.lock' is already locked!
    SnapRAID is already in use!


    59 more emails between 5:01 and 5:59 am with this output:


    Additionally: when running some scheduled jobs in the past I also got around 60 email notifications... one with the whole output of the command and 50-something stating that the lock file is already locked and SnapRAID is already in use.


    In the end I can delete them all manually, but I would expect to receive just one email with the output and nothing more ;) Any hints?
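

    In case it helps anyone else, this is the kind of guard I am thinking about wrapping around the scheduled job, so a second run simply exits instead of mailing the lock-file error again and again. It is only a sketch and assumes the job runs a plain shell script and that checking for a running snapraid process is good enough:


    Code
    #!/bin/sh
    # Sketch of a wrapper for the scheduled job: if another snapraid process
    # is still running, bail out quietly instead of producing yet another
    # "SnapRAID is already in use!" notification.
    if pidof snapraid >/dev/null; then
        echo "snapraid still running, skipping this run"
        exit 0
    fi
    snapraid touch && snapraid sync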

    Hi folks,


    is there any known issue with LUKS when letting the HDDs spin down after a certain time AND picking option 127 in the power management? I had picked a spindown time of 240 minutes AND the 127 option, which resulted in LUKS getting locked after the HDDs (only WD Reds) went into idle mode and powered down...


    Furthermore, I also have SnapRAID with mergerfs installed, but I did not find anything relevant in the logs, which is why I am asking here... Meanwhile I have removed the 127 option and everything works fine again after I wake the HDDs from idle.
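

    For reference, this is roughly how I checked and reset the APM value from the CLI afterwards (only a sketch, the device name is a placeholder for each of the WD Reds):


    Code
    # Show the current APM level; values below 128 allow the drive to spin
    # down on its own, 128-254 prevent that, 255 disables APM entirely.
    hdparm -B /dev/sda
    # Put it back to 128 so only the spindown timer powers the disk down.
    hdparm -B 128 /dev/sda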

    I typed it in by hand from the very first time, too. There is a space after the ";" in the first command, i.e. "snapraid touch; snapraid sync", correct?


    I set it up again; let's see what happens next time. Meanwhile I successfully finished it with a manual run.
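

    Just so we are talking about the same line, this is what I typed into the scheduled job (from memory, so treat it as a sketch):


    Code
    snapraid touch; snapraid sync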

    I quite like that approach; it is more or less the best use case for me as well. After setting up the scheduled tasks I am running into the following error, sent by email:


    /bin/sh: 1: Syntax error: Unterminated quoted string


    I am not sure why the cron job with the scrub command couldn't be executed due to a syntax error. What do I have to adjust, or where can I look? In the system logs I did not find anything under snapraid.


    If I do a manual run within the GUI, everything completes just fine.
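

    For what it's worth, this is the kind of one-liner I am now testing directly from the CLI before putting it back into the scheduled job, to rule out a stray quote (the scrub percentage and age are just the values I picked, nothing official):


    Code
    # Run the same chain cron would run; if this works from a shell, the
    # unterminated-quote problem is in how the scheduled job is quoted.
    /usr/bin/snapraid touch && /usr/bin/snapraid sync && /usr/bin/snapraid scrub -p 8 -o 10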

    While reading through some Mac-specific forums it seems that Apple has reworked its SMB implementation... and not for the better, unfortunately. So it seems that it is indeed a bug. I will try to adjust the version in the SMB config when I am home again, although I doubt that it is version related... Anyhow, I will keep you posted.
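

    These are the extra SMB options I plan to test (one at a time), collected from those Mac forums, so please treat them as suggestions rather than a verified fix:


    Code
    # Candidate additions for the extra options field of the SMB service/share
    server min protocol = SMB2
    vfs objects = catia fruit streams_xattr
    fruit:metadata = stream
    fruit:model = MacSamba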

    Hi there,


    Apologies right away, I have not dug deeper so far, but after upgrading my old iMac 2015 to Big Sur my Samba shares have become unusable. I can mount the share, but when opening a folder with 15 files and subfolders the beachball just spins and nothing loads at all... I can access the machine via SSH without issues, of course, and rsync from the iMac to the OMV machine. Only connecting via Samba does not work properly.


    Anyone with similar experiences? And if so, does anyone have a tweak or workaround for this OS?


    For now I am back to NFS which works just fine - and fast :)

    Why would you lose data? mergerfs works on top of other filesystems. Adding/changing/deleting pools makes no difference to the data on the underlying filesystems.

    Thank you. What I rather meant was whether I could switch to mergerfs folders and keep the same state as with unionfs.


    I know that I won't lose data, but I might have to rearrange the whole file and folder structure... and if I can just avoid that, I'd rather go this way ;)

    I don't know that there are any. Snapraid doesn't make a difference. Auto-unlock of LUKS might help. Maybe someday I will re-write the plugin to use systemd mount files but I don't want to do that and have it not work any better. LUKS needs to go away as an OMV option (I know LUKS works well - I use it at work a lot) in favor of a filesystem that offers encryption in one layer to avoid this double-layer problem.

    Well, it works too well, but maybe I should start the encryption process from the CLI.


    Last question: I have not dug into all the differences between UnionFS and the MergerfsFolder plugin. Would it make a difference to go the mergerfs-folder route instead of unionfs? And if so, can that be done equally without hassle, so that I don't lose any data and it stays accessible in the exact same state as with unionfs?
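

    Regarding starting the encryption from the CLI, this is the rough sequence I have in mind for an empty disk (device name is a placeholder and luksFormat is destructive, so only on a blank drive):


    Code
    # Create a LUKS container on the empty disk and open it under a mapper name
    cryptsetup luksFormat /dev/sdX
    cryptsetup luksOpen /dev/sdX data-crypt
    # Put a filesystem on the opened container so OMV can mount it afterwards
    mkfs.ext4 /dev/mapper/data-crypt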

    I ran into the exact same issue. Boot is flawless, but the nofail option prevents UnionFS from mounting the folders properly. I removed nofail from the UnionFS settings, saved everything, and the pool with the shared folders mounted as expected. After that I put nofail back in so I can boot normally again...


    Can this be done with an automated script after the decryption of the HDDs?
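

    Something like this is what I had in mind as the "automated script", run once right after unlocking the disks; just a sketch, assuming the pool and its branches are all defined in /etc/fstab (the pool path is a placeholder):


    Code
    #!/bin/sh
    # After the LUKS devices have been unlocked, try to mount everything from
    # fstab again, including the union pool that failed at boot time.
    mount -a
    # Quick check that the pool actually came up
    findmnt /srv/mergerfs/pool || echo "pool still not mounted"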

    I don't know that there are any. Snapraid doesn't make a difference. Auto-unlock of LUKS might help. Maybe someday I will re-write the plugin to use systemd mount files but I don't want to do that and have it not work any better. LUKS needs to go away as an OMV option (I know LUKS works well - I use it at work a lot) in favor of a filesystem that offers encryption in one layer to avoid this double-layer problem.

    Thank you for the feedback, and I totally agree about the encryption. I quite like LUKS (as it works fairly well) and I am rather reluctant to auto-unlock the encrypted hard disks, so I will try the mentioned solutions... it's no big deal either way if it works in the end. I don't reboot the machine constantly, but as I got stuck several times I just want to avoid the Helios going into recovery mode.

    Does anyone with LUKS, SnapRAID and UnionFS have a suitable workaround for the boot issues? Or is nofail and/or noauto simply the solution for now?


    I was fiddling around with my Helios64 as well and ran into that exact issue of not being able to boot anymore, but I did not clearly see that the combination of LUKS and UnionFS is probably causing this. As I had some important data to copy I have not switched my machine off for a week now, but I would rather have a reliable system in the end that boots smoothly (either from eMMC or SD).


    I have tried neither nofail nor noauto so far, but will give it a shot on the weekend.
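

    For context, this is roughly what I expect the relevant fstab entry to look like once nofail/noauto are in place (UUID, mapper name and mount point are placeholders, since OMV generates these lines itself):


    Code
    # Data disk inside a LUKS container: don't block the boot if it is still locked
    /dev/mapper/sda-crypt  /srv/dev-disk-by-uuid-xxxx  ext4  defaults,nofail,noauto  0  2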

    You don't have to use the CLI. You can set up the rsync job in the GUI of OMV. Just define shared folders for source and target and set up the job.

    OMV is there to make things simple ;-)

    That's true. And it should work for big folders as well, correct? I recently ran into problems copying one big shared folder: rsync just quit. Now I am manually copying the files via an SMB share to the new destination, and then I will set up rsync again with different folders.


    Just to clarify: rsync does the job without me having to be logged in to the OMV user interface, correct? Only if I run the job manually does the OMV GUI have to remain open?

    rsync is available from the GUI of OMV out of the box

    Indeed, but I just wonder whether it is reliable enough. I am currently trying to swap the hard drives in my two HC2s. I changed one and wanted to get all the data from the old drive A to the new drive B, both machines running OMV. The data is approx. 3.5 TB.


    I configured rsync, but with the scheduled job it only copied for an hour or so and then it stopped. If I execute the rsync job manually via the OMV GUI it runs, but once the computer goes into standby rsync seems to stop, too. Is this behaviour intentional? I doubt it.


    I am now wondering whether my problems occur because the data is too big (which would be ridiculous) or whether I have done something wrong in getting a whole disk copied from machine A to machine B ;)
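

    In the meantime I am thinking of simply running the copy by hand over SSH inside a screen session, so it survives me closing the browser or the machine going to sleep; paths and hostname below are placeholders:


    Code
    # Start a named screen session on the source HC2 and run rsync inside it
    screen -S diskcopy
    rsync -avh --progress /srv/dev-disk-by-uuid-xxxx/data/ root@hc2-new:/srv/dev-disk-by-uuid-yyyy/data/
    # Detach with Ctrl-A d, reattach later with: screen -r diskcopy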

    Thank you for the +1. It's still under warranty, so I will probably send it in right away in order to get it either reimbursed or exchanged for a new one.


    Meanwhile I also checked the disk with smartmontools and it does not show any errors... Isn't OMV also using smartctl to get all the SMART details out of the HDD? I am a bit confused now.
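

    This is what I ran on the CLI, in case it is useful to compare with what OMV shows (device name is a placeholder):


    Code
    # Full SMART report including the attribute table and error log
    smartctl -a /dev/sda
    # Optionally start a long self-test and read the result later with -a again
    smartctl -t long /dev/sda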

    I don't think bad sectors are actually repaired. But rather the drive will attempt to copy the data within them to spare sectors and prevent the bad sectors from being reused again in the future. It is for this reason that the drive will forever show that it has bad sectors and OMV will show this in the status.

    That means I could leave it as it is, correct? I am just a bit astonished that this message occurred so quickly... I will check if I can replace my SanDisk...

    If you spin down the drive, the HC2 will use less than 5 W. In Germany that is around 1€ per month. Let's say you can save around 1/3 by switching it off; that is around 0.3€ per month.


    I am very much into saving (electrical) energy, but in this case ....
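

    Doing the math to convince myself (assuming roughly 0.30 €/kWh): 5 W × 24 h × 30 days ≈ 3.6 kWh per month, i.e. a bit over 1 €, so a third of that really is only about 0.30 € of savings.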

    Okay, you've finally got me there... Then I'd rather not mess around with it any longer, leave it be, and only switch them off completely when really needed...