[HowTo] SnapRAID in OMV

  • auanasgheps


    Thank you so much for your response! I was able to run the script and this is what I got:


    Code
    [2021-09-12 03:00:01] snapraid-aio-script.sh: INFO: 'SnapRAID Script Job started'
    [2021-09-12 03:00:01] snapraid-aio-script.sh: INFO: 'Running SnapRAID version 11.5 '
    [2021-09-12 03:00:01] snapraid-aio-script.sh: INFO: 'SnapRAID Script version 3.1.DEV4'
    [2021-09-12 03:00:01] snapraid-aio-script.sh: INFO: 'Script configuration file found.'
    [2021-09-12 03:00:01] snapraid-aio-script.sh: INFO: 'Checking SnapRAID disks'
    [2021-09-12 03:00:01] snapraid-aio-script.sh: INFO: 'Checking if all parity and content files are present.'
    [2021-09-12 03:00:10] snapraid-aio-script.sh: WARN: 'Parity file (/srv/dev-disk-by-uuid-5b81f063-1e18-4988-b250-22b75df42ce3/snapraid.parity) not found!'
    [2021-09-12 03:00:10] snapraid-aio-script.sh: WARN: 'Please check the status of your disks! The script exits here due to missing file or disk.'


    I currently have no files on those disks. Is that the reason it failed? I am moving some files now from another location and should be able to test the script again afterwards. EDIT: I added the files and had the same result. Does the sync need to be run manually once before the script takes over?


Is it normal to not receive an email if the script fails? Also, is the email address in the config file supposed to include the quotation marks (e.g., "myemail@email.com") or be entered without them?


Do you know if there is a guide out there somewhere about what to do with SnapRAID from the OMV5 GUI in case of a disk failure? Since I am migrating data from another OMV installation running SnapRAID, I was thinking I could use that hardware to simulate a drive failure and see how the process would flow. This would help me be ready in case it ever happens in real life.

  • Does the sync need to be run manually once before the script takes over?

The script expects to find parity files on the parity disks. Therefore, if you're starting with fresh disks, do a manual sync first; this can be done via the SnapRAID section in the OMV GUI.
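
As a quick sketch, that first run from the shell looks roughly like this (assuming the snapraid binary is in your PATH and the OMV plugin has already written /etc/snapraid.conf):

Code
# Build the parity and content files for the first time.
# This can take many hours on a large array.
snapraid sync

# Afterwards, verify that the array looks healthy.
snapraid status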

    Since I always worked with my existing array I never thought about this. I will add this to the documentation, thanks for reporting.


    Is it normal to not receive an email if the script fails?

This script relies on the system's SMTP settings, so is SMTP configured in OMV?

Also, is the email address in the config file supposed to include the quotation marks (e.g., "myemail@email.com") or be entered without them?

Keep the quotation marks ("").
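
For example, a quoted value in the script's config file looks like this (the variable name below is only a placeholder; use the actual one from your config file):

Code
# Placeholder variable name, shown only to illustrate the quoting
EMAIL_ADDRESS="myemail@email.com"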

    Do you know if there is a guide out there somewhere about what to do with SnapRAID from the OMV5 GUI in case of a disk failure?

    I'm not sure but I would recommend reading the SnapRAID manual.
    Mostly it's a matter of running the "fix" command.
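
As a rough sketch only (the disk name is a placeholder and must match the name used in your snapraid.conf), fixing a single failed data disk is something like:

Code
# Restore the files that belonged to the failed disk and keep a log.
# "d1" is a placeholder; it must match the disk name in /etc/snapraid.conf.
snapraid -d d1 -l /root/snapraid-fix.log fix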


  • auanasgheps


    Thanks for the info! I think I may have to properly configure SMTP for the email aspect of it to work.


    Regarding simulating a hard drive failure, I am a bit confused. I started by disconnecting one drive that contains data and powering on the server. It shows as missing from the File Systems area and "n/a" under Branches in the Union Filesystems area. If I run a Status command under SnapRAID in OMV, no issues show up and the drive still shows when I run a Devices command. Now, if I run a Fix command, I get the following error:


    Code
    Reading data from missing file '/srv/dev-disk-by-label-sdc/Media/Movies/movie.avi' at offset 1696858112.
    Error writing file '/srv/dev-disk-by-label-sdc/Media/Movies/movie.avi'. No such file or directory.
    WARNING! Without a working data disk, it isn't possible to fix errors on it.
    Stopping at block 16499
    
        6474 errors
        6473 recovered errors
           1 UNRECOVERABLE errors
    DANGER! There are unrecoverable errors!


So it detects that a file is missing and starts writing it back, but it fails. Is this because I have not replaced the disk that was removed to simulate the failure with another working disk? If so, does the replacement disk need to be the same size as the previous one, or can it be smaller as long as the data being restored fits on it? Will SnapRAID rebuild the files only onto that disk, or can it rebuild them onto the other disks in the mergerfs setup?

There is a very clear procedure in the SnapRAID manual for what you are trying to do, but it seems you are not following it.

  • I have read the SnapRAID manual, but that does not mean someone without much experience (like me) will be able to understand all of the things that are mentioned there, especially when it is written from a CLI standpoint. Hence, I come to these forums for assistance before I mess up something and make my data unrecoverable.


You are very likely referring to the 4.4.1 "Reconfigure" step. For example, it states: "Change the SnapRAID configuration file to make the "data" or "parity" option of the failed disk to point to the place where you have enough empty space to recover the files."

The first issue is that I have no idea where to find the SnapRAID configuration file or how to modify it. Can this be done from the GUI?

The other question I have is about space. Yes, I know I need to find a location for the rebuilt files that has enough room for them. The problem is, how do I tell SnapRAID from the GUI to rebuild what used to be sdc into the space that is left on both sda and sdb, which are part of my original three content and data disks in SnapRAID (i.e., sda, sdb, and sdc)? I mention both sda and sdb because I may not have enough space for the data that was on sdc on either single drive.

Let's look at a different scenario and say there is enough space. How do I modify the SnapRAID configuration file, which currently points to sdc as a data/content disk, so that it now points to sda as the place to recreate that data? Should I just edit sdc under Drives and change the drive to sda? Would I end up with two sda drives in the SnapRAID configuration?


    I think these are valid questions and I have always appreciated the support I have received in these forums. Again, this is just an exercise to make sure I am able to handle an actual failure when it happens.


    Thanks in advance for your help!

First off, although this is rarely mentioned, some prior experience with the Linux command-line shell is going to be needed if you are going to work under the hood of a product like OMV. The OMV GUI cannot do everything for everybody. You probably don't want to hear this, but it is a fact of life around here.


The file you need to modify is /etc/snapraid.conf. Elevated (root) privileges are needed to modify it. Take notes so you can restore anything you change, since these changes are only needed temporarily. The easiest way to do this is to use the comment character (#) to comment out a line you wish to change and then type in a new line, changed as needed, directly below it. There are already many comments in the file, so just look, see, and understand.
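
A minimal sketch of those mechanics (the editor choice, disk name and paths here are just examples):

Code
# Open the config with root privileges; any console editor will do
sudo nano /etc/snapraid.conf

# Inside the file, the comment-out pattern looks like this:
#
#   #data d3 /srv/dev-disk-by-uuid-original-failed-disk/
#   data d3 /srv/dev-disk-by-uuid-recovery-disk/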


    If you are trying to recover an entire lost drive, then the space needed is another empty formatted and mounted drive of at least the same size as the lost drive. I do not believe it is possible to span a recovery across multiple drives.


    You do not make any changes to the configuration in the OMV GUI to perform the recovery of a failed drive. It's all done in the shell.


Also, SnapRAID as implemented in OMV does not use /dev/sdx device names in its configuration. It uses by-label or by-uuid specifiers. Look in the snapraid.conf file and see.


Then you should follow the example in section 4.4 of the manual.
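
A hedged outline of what that section walks through (double-check the exact commands against the manual before running anything; "d3" is a placeholder disk name):

Code
# 1. Point the failed disk's "data" line in /etc/snapraid.conf at an
#    empty, mounted location with enough space (see above).
# 2. Recover everything that was on that disk, keeping a log:
snapraid -d d3 -l /root/fix.log fix
# 3. Verify the recovered files without writing anything (audit only):
snapraid -d d3 -a check
# 4. Once satisfied, bring parity back in line with the current state:
snapraid sync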


There is a SnapRAID user forum you can look through or ask questions in, but do keep in mind that some prior Linux shell experience is assumed there as well.


    See: https://sourceforge.net/p/snapraid/discussion/1677233/

Thank you for the information, gderf! It ended up being very useful.


In a way I was "forced" to restore the full drive. I was initially going to do this and disconnected the drive to simulate a failure. However, I decided against it and just wanted to test out the "undelete" option on a few files. The thing is, when I reconnected the drive it showed up in OMV, but the file system was missing. I searched but could not find a way to tell OMV that this disk already had a file system and files on it. So I ended up wiping it and restoring from SnapRAID, which took a while but worked fine.


    Now, I have two questions:


    1. First, in the event this ever happens to me again, either by my own making or some failure, what would have been the proper steps to get my partition mounted again in OMV without having to format the drive? My data was there for sure and the drive was properly partitioned (I just unplugged and replugged it when the server was shut down), but I believe OMV lost the connection to the partition when the drive was assigned a new UUID upon reconnection.
2. What would be the process to restore the data to the space left on my other two HDDs? Right now I have five 8TB disks. Three of them are in mergerfs and are set up properly with SnapRAID. The other two are parity drives. With the way I have mergerfs set up, each data drive has about 3TB of data on it. If I remove one of them permanently, how do I restore that data to the available space on one of my other data HDDs (which should be sufficient for the 3TB)?

    Thank you!

Not enough information was provided to say much about why your disk turned up missing. Does the drive appear in the Disks list in OMV? If not, this is most likely a hardware problem and the Filesystems page will not show the partition. Also, filesystems mounted from the CLI will not appear in OMV's Filesystems page and will not appear in any of OMV's drop-down selection lists in various areas within OMV. Filesystem UUIDs are created when a partition is first formatted. These UUIDs do not change by merely unplugging and replugging a drive.
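
If it happens again, a few read-only commands in the shell will usually show whether the kernel still sees the partition and its filesystem UUID (nothing below writes to the disk):

Code
# Block devices with their partitions, filesystem types and UUIDs
lsblk -f

# Filesystem signatures as blkid sees them
blkid

# The by-uuid symlinks OMV mounts against; a missing link means
# the filesystem UUID is no longer detected
ls -l /dev/disk/by-uuid/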


The SnapRAID manual section 4.4.1 gives an example of how to recover a disk. In that example it shows the destination the data will be restored to, which is set by editing the snapraid.conf file as shown. You can set that destination to whatever you want. It could be the mountpoint of a newly added and formatted disk, or the mountpoint of some other already existing disk.
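
As a hedged example only (disk names and paths are invented), the temporary snapraid.conf change for the failed disk could look like this:

Code
# Failed disk's original entry, kept as a comment so it can be restored:
#data d3 /srv/dev-disk-by-uuid-failed-disk/
# Temporary destination: the mountpoint of some other already mounted
# disk (or a directory on it) with enough free space for the recovery.
data d3 /srv/dev-disk-by-uuid-existing-disk/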


    There is no linkage between mergerfs and SnapRaid.


Yes, the device appeared under Disks in OMV. When I ran blkid the hard drive was showing. When I ran lsblk the hard drive was showing but had no partition associated with it. It is as if it lost the partition. I am sure my data was there. Is there a way to recreate the reference to a partition in this type of situation? After the SnapRAID restore, the hard drive has been working fine, just as before. I am really not sure what happened here, since all I did was unplug and then later replug the drive while the server was powered off.


Next, I will try to restore a missing drive to the available space on my other disks and will post if I run into any issues.


    auanasgheps


    The SnapRAID script is working wonders! I understand that there is an issue with hd-idle when the SnapRAID script is used. I have hd-idle set up and it was working as expected, but when the SnapRAID script wakes up the disks, they fail to spin down again after the script is done and the allotted time has passed. EDIT: I have tested it several times and it is working fine. HDDs spin down after the SnapRAID script runs and the allotted hd-idle time runs out.


    Thanks everyone!

  • You can use hd-idle on its own.

The script could also use hd-idle to immediately spin down the drives after its operations, but this part doesn't work: the drives spin down but are re-enabled immediately afterwards. I still haven't managed to sort this issue :(
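
For anyone experimenting with this: as far as I know, hd-idle can also be invoked one-shot to spin a single drive down immediately (the device name is an example):

Code
# Spin down the given disk right now and exit, instead of running
# as a daemon
hd-idle -t sda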

  • You can use hd-idle on its own.

The script could also use hd-idle to immediately spin down the drives after its operations, but this part doesn't work: the drives spin down but are re-enabled immediately afterwards. I still haven't managed to sort this issue :(

Not sure if this will help or not, but this is what I did. Before even setting up SnapRAID I had installed hd-idle many months ago per your instructions (Thanks!!) and it was working fine. It was still working fine with SnapRAID set up. Once I began running the SnapRAID script, however, the drives were not spinning down.

The first thing I did was double-check the script config file; I had it set up to not spin down, so I obviously changed that. It did not change the outcome. The log file showed the disks being spun down, but they never did. All along, hd-idle had been running on the system.

Next, I checked the script file and realized that spindown method 1 was the one set active by default. I commented it out and set method 3 (hd-idle) as the active one. It started working after that without issues. I have tested it about 8 times already, with multiple reboots.

In summary: Method 1 (SnapRAID spindown) + hd-idle (previously set up and configured) = not working; Method 3 (hd-idle spindown) + hd-idle (previously set up and configured) = working!

Not sure if this will help or not, but this is what I did. Before even setting up SnapRAID I had installed hd-idle many months ago per your instructions (Thanks!!) and it was working fine. It was still working fine with SnapRAID set up. Once I began running the SnapRAID script, however, the drives were not spinning down.

The first thing I did was double-check the script config file; I had it set up to not spin down, so I obviously changed that. It did not change the outcome. The log file showed the disks being spun down, but they never did. All along, hd-idle had been running on the system.

Next, I checked the script file and realized that spindown method 1 was the one set active by default. I commented it out and set method 3 (hd-idle) as the active one. It started working after that without issues. I have tested it about 8 times already, with multiple reboots.

In summary: Method 1 (SnapRAID spindown) + hd-idle (previously set up and configured) = not working; Method 3 (hd-idle spindown) + hd-idle (previously set up and configured) = working!

Thanks for letting me know! That's great news! :) I have updated the script. Can you please try the latest version? I've also made some formatting improvements. Just replace the script file and use your current config file.


Thanks for letting me know! That's great news! :) I have updated the script. Can you please try the latest version? I've also made some formatting improvements. Just replace the script file and use your current config file.

Sorry this took a while, but it was a busy week for me. I've tested the new version of the script with my original config file and everything works great! The disks spin down as expected. Please let me know if you need additional assistance testing anything. Thanks again for your efforts on this script!

auanasgheps, is there a way to incorporate tmux into your AIO script so that it is possible to check the status of the sync?

Can you please elaborate? I don't know tmux.

My scripting capabilities are limited, but the code is on GitHub.

• Official post

    tmux is not the right way. Piping stdout and stderr to a log file would be the better way.


Can you please elaborate? I don't know tmux.

My scripting capabilities are limited, but the code is on GitHub.

I'm not well versed in Linux, so my request might be a stretch.

With tmux you can essentially resume from where you left off in the terminal. A good overview is provided here:

[External content: embedded YouTube video (www.youtube.com)]


Your SnapRAID script works and I run it overnight. The only issue is that sometimes when I wake up in the morning, I'm not sure how far along the script is. The only confirmation or status you get is once the script is complete.

• Official post

Will stdout allow you to check the progress of the script when it has been started by cron?

Yes. Anything you see on the screen is either stdout or stderr. If you send both of those to a log file, nothing will be printed to the screen. The output is sent to the file in real time. If you tail -f /var/log/name_of_log.log, you will see the same thing you would see if you rejoined a tmux session. The tmux session would be interactive, but a cron job should never be interactive.
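
A minimal sketch of that approach, assuming the script lives at /usr/sbin/snapraid-aio-script.sh (adjust the path and schedule to your setup):

Code
# Crontab entry: run nightly, append stdout and stderr to a log file
30 3 * * * /usr/sbin/snapraid-aio-script.sh >> /var/log/snapraid-aio.log 2>&1

# Watch the log live while the job is running
tail -f /var/log/snapraid-aio.log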

I'm not well versed in Linux, so my request might be a stretch.

With tmux you can essentially resume from where you left off in the terminal. A good overview is provided here.

tmux and screen do about the same thing. I often use screen when I start a long-running process, especially over a connection that isn't good. They are meant for people, not scripts.


  • Piping stdout and stderr to a log file would be the better way.

That's what is already happening with the AIO script. Thanks for explaining!


By the way, if you want to track when the job has started, completed successfully, or failed, I have already integrated support for Healthchecks.io. Please read the GitHub page for more info.
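
For context, Healthchecks.io works by pinging a per-check URL; stripped down it is just an HTTP request like the ones below (the check UUID is a placeholder from your own Healthchecks project), and the script's integration handles this for you:

Code
# Signal "job started"
curl -fsS -m 10 --retry 5 https://hc-ping.com/your-check-uuid/start
# Signal "job finished successfully"
curl -fsS -m 10 --retry 5 https://hc-ping.com/your-check-uuid
# Signal "job failed"
curl -fsS -m 10 --retry 5 https://hc-ping.com/your-check-uuid/fail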

