[HowTo] SnapRAID in OMV


    • jollyrogr wrote:

      Here's the script I was using.
      Thanks buddy, will test it tonight.

      If it works well, I guess we (actually you, if it's your work) could make a merge request to have it updated for everybody.

      EDIT: Looks like it's just an older version of the official one, no edits at all. It exactly matches this version of the script. Are you sure that's the one you were using?

      EDIT2:
      I am currently testing this script, which seems less spammy and a bit more structured, but it does not terminate correctly and keeps running even after it's done. I'm no good at bash, so I don't know how to fix it.
      The script comes from here; I only removed the 'wait' instructions, which otherwise would not run on Debian 10/OMV5 (as advised by the author).
      If you want to test it, also install python-markdown.
      /// OMV BUILD IN PROGRESS - MY NAS KILLER /// omv 5.x + omvextrasorg

      i3 8300 - ASRock H370M-ITX/ac - 8GB RAM - Sandisk Ultra Flair 32GB (OMV), 256GB NVME SSD (Docker), 3x4TB HDD (Data) - Node 304 - Be quiet! Pure Power 11 350W


    • thedarkness wrote:

      The script comes from here; I only removed the 'wait' instructions, which otherwise would not run on Debian 10/OMV5 (as advised by the author).
      If you want to test it, also install python-markdown.

      If you read the posts at the bottom of the page where the script is, you'll see there are two reports (one of them mine) of it hanging at the end of the run on: 'python -m markdown /tmp/snapRAID.out'

      Today it ran fine, but yesterday it hung.
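For what it's worth, one defensive workaround (my own sketch, not something from the posted script) is to wrap that conversion in coreutils `timeout`, so the report falls back to plain text instead of hanging the whole run:

```shell
#!/bin/sh
# Hypothetical workaround for the hanging report step: give the
# markdown conversion a deadline and fall back to the raw text.
# /tmp/snapRAID.out is the report file named in the posts above.
OUT=/tmp/snapRAID.out

if BODY=$(timeout 60 python -m markdown "$OUT" 2>/dev/null); then
    :   # conversion finished within 60 seconds; BODY holds the HTML
else
    BODY=$(cat "$OUT")   # conversion hung or failed; use plain text
fi
printf '%s\n' "$BODY"
```

Whether 60 seconds is the right limit depends on the size of the report; the point is only that the mail step can no longer block the job forever.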
      --
      Google is your friend and Bob's your uncle!

      RAID - Its ability to disappoint is inversely proportional to the user's understanding of it.

      ASRock Rack C2550D4I C0 Stepping - 16GB ECC - Silverstone DS380 + Silverstone DS380 DAS Box.
    • gderf wrote:

      thedarkness wrote:

      The script comes from here; I only removed the 'wait' instructions, which otherwise would not run on Debian 10/OMV5 (as advised by the author).
      If you want to test it, also install python-markdown.
      If you read the posts at the bottom of the page where the script is, you'll see there are two reports (one of them mine) of it hanging at the end of the run on: 'python -m markdown /tmp/snapRAID.out'

      Today it ran fine, but yesterday it hung.
      lol, I did, and I am the other user, the one with another dog in the picture.

      I have found a good script which is not spammy at all. It comes from here; these are the changes I made:

      - Adapted it for standard parity (the original script is written for split parity)
      - Integrated user sburke's changes to make it work with Debian 10 (the original script does not work on Debian 10)

      I've tested it in my OMV5 VM and it works fine.

      NOTES

      - You can configure sync rules, but by default it always forces a sync.
      - It can pause containers so they don't interfere with the run, and restart them when finished. I disabled this, but if you want to use the feature, switch MANAGE_SERVICES=0 to 1 and list all your containers in SERVICES=.
      - In OMV5 the script itself sends emails (you can specify your address in EMAIL_ADDRESS=). You may want to disable either this feature or the email from the scheduled job in OMV, so you don't get the report twice.
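For reference, those options sit in the script's config section as plain shell variables. This is only an illustrative excerpt: the variable names (EMAIL_ADDRESS, MANAGE_SERVICES, SERVICES) are the ones named above, but the values are made-up examples, not copied from the script.

```shell
# Illustrative config excerpt; values are examples, adjust to your setup.
EMAIL_ADDRESS="you@example.com"   # where the report email goes
MANAGE_SERVICES=1                 # 1 = pause containers during the run, 0 = leave them alone
SERVICES="portainer nextcloud"    # space-separated container names to pause (hypothetical list)
```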



      The output itself is quite nice. I've kept the whole output, since it isn't syncing any personal data.

      Source Code

      ##[COMPLETED] DIFF + SYNC + SCRUB Jobs (SnapRAID on **REDACTED**)
      SnapRAID Script Job started [Thu Jan 9 20:22:00 CET 2020]
      ----------------------------------------
      ##Preprocessing
      Testing that all parity files are present.
      All parity files found. Continuing...
      ----------------------------------------
      ##Processing
      ###SnapRAID TOUCH [Thu Jan 9 20:22:00 CET 2020]
      Checking for zero sub-second files.
      No zero sub-second timestamp files found.
      ###SnapRAID DIFF [Thu Jan 9 20:22:00 CET 2020]
      Loading state from /srv/dev-disk-by-label-DATI/snapraid.content...
      Comparing...
      update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/config.v2.json
      update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/hosts
      update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/hostname
      update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/resolv.conf
      update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83-json.log
      update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/resolv.conf.hash
      update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/hostconfig.json
      update docker-install/volumes/metadata.db
      update docker-install/network/files/local-kv.db
      update docker-install/buildkit/cache.db
      update docker/portainer/data/portainer.db
      update docker/portainer/data/config.json
      24046 equal
      0 added
      0 removed
      12 updated
      0 moved
      0 copied
      0 restored
      There are differences!
      DIFF finished [Thu Jan 9 20:22:01 CET 2020]
      **SUMMARY of changes - Added [0] - Deleted [0] - Moved [0] - Copied [0] - Updated [12]**
      There are deleted files. The number of deleted files, (0), is below the threshold of (50). SYNC Authorized.
      There are updated files. The number of updated files, (12), is below the threshold of (500). SYNC Authorized.
      ###SnapRAID SYNC [Thu Jan 9 20:22:01 CET 2020]
      Self test...
      Loading state from /srv/dev-disk-by-label-DATI/snapraid.content...
      Scanning disk disco-a...
      Using 14 MiB of memory for the file-system.
      Initializing...
      Resizing...
      Saving state to /srv/dev-disk-by-label-DATI/snapraid.content...
      Saving state to /srv/dev-disk-by-label-PARITY/snapraid.content...
      Verifying /srv/dev-disk-by-label-DATI/snapraid.content...
      Verifying /srv/dev-disk-by-label-PARITY/snapraid.content...
      Syncing...
      Using 16 MiB of memory for 32 cached blocks.
      disco-a 56% | **********************************
       parity  0% |
         raid  3% | *
         hash  0% |
        sched 39% | ***********************
         misc  0% |
                  |______________________________________________________________
                               wait time (total, less is better)
      SYNC_JOB--Everything OK
      Saving state to /srv/dev-disk-by-label-DATI/snapraid.content...
      Saving state to /srv/dev-disk-by-label-PARITY/snapraid.content...
      Verifying /srv/dev-disk-by-label-DATI/snapraid.content...
      Verifying /srv/dev-disk-by-label-PARITY/snapraid.content...
      SYNC finished [Thu Jan 9 20:22:04 CET 2020]
      ###SnapRAID SCRUB [Thu Jan 9 20:22:04 CET 2020]
      Self test...
      Loading state from /srv/dev-disk-by-label-DATI/snapraid.content...
      Using 13 MiB of memory for the file-system.
      Initializing...
      Scrubbing...
      Using 24 MiB of memory for 32 cached blocks.
      SCRUB_JOB--Nothing to do
      SCRUB finished [Thu Jan 9 20:22:04 CET 2020]
      ----------------------------------------
      ##Postprocessing
      SnapRAID SMART report:
         Temp  Power  Error  FP   Size
            C  OnDays Count       TB    Serial  Device    Disk
      -----------------------------------------------------------------------
            0  -      -      SSD  0.0   -       /dev/sda  disco-a
            0  -      -      SSD  0.0   -       /dev/sdb  parity
            0  -      -      SSD  0.0   -       /dev/sdc  -
            -  -      -      n/a  -     -       /dev/sr0  -
      The FP column is the estimated probability (in percentage) that the disk
      is going to fail in the next year.
      Probability that at least one disk is going to fail in the next year is 0%.
      Spinning down disks...
      Spindown...
      Spundown device '/dev/sdb' for disk 'parity' in 36 ms.
      Spundown device '/dev/sda' for disk 'disco-a' in 39 ms.
      All jobs ended. [Thu Jan 9 20:22:05 CET 2020]
      Email address is set. Sending email report to **REDACTED** [Thu Jan 9 20:22:05 CET 2020]
      ----------------------------------------
      ##Total time elapsed for SnapRAID: 0hrs 0min 5sec

    • crashtest wrote:

      The only reason I could come up with for using a two-disk SnapRAID would be bit rot protection. SnapRAID, with a simple filesystem like EXT4, will also work reasonably well with USB-connected drives.

      (Setting bit rot protection aside, which is a big deal in itself, creating a simple mirror with Rsync provides roughly the same benefits.)
      I have set up just this kind of backup strategy:
      • one mirrored backup using Rsync.
      • two disks set up in SnapRAID, one of them data, and the other parity. Both of them content.
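A two-disk snapraid.conf matching that layout would look roughly like this. The device labels and paths below are illustrative examples, not taken from my actual setup:

```
# Sketch of a 1-data + 1-parity snapraid.conf; labels are illustrative.
parity /srv/dev-disk-by-label-PARITY/snapraid.parity

# Content file on both disks, as described above
content /srv/dev-disk-by-label-DATA/snapraid.content
content /srv/dev-disk-by-label-PARITY/snapraid.content

# The single data disk
data d1 /srv/dev-disk-by-label-DATA/

# Example exclude rule
exclude /AppData/
```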
      It seemed logical to me, as I can easily fit what I am currently doing on one 8TB disk.
      Everything worked well. I have an automatic Rsync scheduled twice a week to the mirrored backup, and I have run a SnapRAID sync once. Everything went fine, except for two details (maybe three), so I am looking for some advice/clarification before I proceed into the unknown.
      1. My SMART notified me that my Rsync mirror had bad sectors. In the process of unreferencing/unmounting the disk for replacement I discovered that...
      2. I had inadvertently created an exclude rule (AppData) referencing the Rsync mirror disk instead of the SnapRAID Data disk.
      3. I have been adding a great number of photos to my data disk, as well as deleting duplicates. Since the bad sectors showed up I stopped the Rsync (waiting on a replacement disk), and I have stopped anything SnapRAID. Even though I have a full backup on a second machine, I want to wait to do anything until I get the new mirror backup disk installed and backed up.
      My main question is: how do I reset everything back to that initial SnapRAID sync? Is that necessary? Is it possible? Without shedding blood? I would like a little nudge toward the right way to proceed. Thanks.
      OMV 5 (current) - NanoPi M4: Nextcloud, Plex, & Heimdall - Acer Aspire T180: backup - Odroid XU4: Pi-Hole (DietPi) - Odroid HC2, Raspberry Pi 3B+, and HP dx2400: testing.
    • I checked my scheduled tasks, and the report I attached to this thread was the one I was using. I think I was using the script unmodified, but I can't remember because it has been so long since I used it.

      Looking at the script, all the output from the sync and scrub commands is directed to the $TMP_OUTPUT variable, which is included in the email. Including only partial command output in the email might be beyond my bash capabilities.
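In case it helps anyone later: one way to include only partial output is to keep the first and last chunk of the file and note how much was cut. This is my own sketch; $TMP_OUTPUT is the variable named above, but the windowing logic and the MAIL_BODY file name are invented for illustration.

```shell
#!/bin/sh
# Sketch: trim $TMP_OUTPUT down before mailing. Keeps the first and
# last 100 lines and notes how many lines were omitted in between.
TMP_OUTPUT=/tmp/snapraid.tmp
MAIL_BODY=/tmp/snapraid.mail    # hypothetical trimmed file to mail
MAX=200   # mail the file untrimmed if it has no more than this many lines

LINES=$(wc -l < "$TMP_OUTPUT")
if [ "$LINES" -gt "$MAX" ]; then
    {
        head -n 100 "$TMP_OUTPUT"
        printf '... [%s lines omitted] ...\n' "$((LINES - MAX))"
        tail -n 100 "$TMP_OUTPUT"
    } > "$MAIL_BODY"
else
    cp "$TMP_OUTPUT" "$MAIL_BODY"
fi
```

The mail command in the script would then send $MAIL_BODY instead of $TMP_OUTPUT.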
    • Agricola wrote:

      I had inadvertently created an exclude rule (AppData) referencing the Rsync mirror disk instead of the SnapRAID Data disk.
      You have me here - I'm not sure I understand what happened. Are files missing from your Rsync destination? (I suppose you want those files?)

      Agricola wrote:

      I have been adding a great number of photos to my data disk, as well as deleting duplicates. Since the bad sectors showed up I stopped the Rsync (waiting on a replacement disk), and I have stopped anything SnapRAID. Even though I have a full backup on a second machine, I want to wait to do anything until I get the new mirror backup disk installed and backed up.
      On your Rsync disks, I'm not sure whether you're talking about the source disk or the destination. If your source disk is still good, you're good to go. So, to speculate: was it your source disk that started to fail?

      It seems as if you're asking about restoring the source disk with SnapRAID. I haven't done a full disk restore before; I haven't had to. With multiple backups, after I do significant work I manually run a backup to ensure the new data is in at least two places. (If I forget, the automated processes take care of it.)

      Agricola wrote:

      My main question is how do I reset everything to that initial SnapRAID sync? Is that necessary? Is it possible? Without shedding blood? I would like a little nudge as to the right way to proceed.
      There's a recovery process in the SnapRAID manual, in section 4.4. The questions above only make me want to ask you more questions:
      - If you have at least one good backup, why would you want to do a SnapRAID restore?
      - Restored data would only be as current as the last SYNC operation. (You would know better than I whether there's a compelling reason to go back to that state of your data.) It would seem you'd be losing work in any case.

      If I was in your place:
      If I had at least one good, clean backup on another platform, and if there were a compelling reason, I might try a SnapRAID restore. In any case, my inclination would be to take the lowest-risk, safest path possible. That means I wouldn't do anything with, or try to reuse or fix, the disk with bad sectors before the replacement disk arrives.

      When you have your replacement disk and it's restored from your backup, even if the backup is a bit out of date, you could mount the failing disk and see what missing files (new work) can be rescued from it, using something like Midnight Commander.
    • I’m sorry. I guess I tried to cram too much in one post. Edit: Looking at what’s below, I’m afraid I’m about to do it again.

      I’m using three SATA disks but only two are in SnapRAID: one Data and the other Parity, both Content. Both of those are physically okay and SnapRAID appeared to run fine the two times I ran sync and scrub about a week ago. The third disk is a mirrored backup of the Data drive just mentioned, via Rsync, per your guide on p. 64. It is not part of the SnapRAID array.

      The third disk showed up with bad sectors about a week ago, so today I am going to swap it out for the new disk that just arrived and Rsync it from the first drive. All is well up to this point: no data is really in jeopardy. I even have a remote backup on another machine.

      When I tried to unmount the bad mirrored disk, it wouldn't unmount. I discovered that I had inadvertently created a SnapRAID rule pointing to that disk.

      What I am wondering about is how to start sync, scrub, etc. and what to expect from error output and what to do with it. I have corrected the exclusion rule (AppData) to point to the Data disk.

      Combine that with the fact that I have been adding, deleting, renaming, moving tons of photo files in my Data disk since I last ran a sync.

      Despite tons of reading up, both at SnapRAID and on this forum, I’m not sure of the steps to begin again sync, scrub, fix, etc.
    • Agricola wrote:

      When I tried to unmount the bad mirrored disk, it wouldn't unmount. I discovered that I had inadvertently created a SnapRAID rule pointing to that disk.
      The SnapRAID rule is probably why the Rsync disk wouldn't unmount. Clear the rule and it should unmount. If not, look in Filesystems to see whether the disk is "referenced".

      Agricola wrote:

      Combine that with the fact that I have been adding, deleting, renaming, moving tons of photo files in my Data disk since I last ran a sync.
      You're still good, as long as your SnapRAID data disk is OK. The SYNC updates the parity drive and the content file(s) only, and it becomes (in a sense) your new backup at the completion of the SYNC operation.

      If you're not doing a SYNC manually on a regular basis, you might think about automating a SYNC command to run once or twice a month. It can be done in Scheduled Tasks with something like the following command: snapraid touch; snapraid sync -l snapsync.log
      But note there are other housekeeping commands you should consider running before the next SYNC command.

      The order I run is:
      snapraid touch; snapraid sync -l snapsync.log

      snapraid -p 100 -o 13 scrub
      snapraid -e fix

      The first command does a touch, which fixes the annoying zero sub-second timestamp issue. It then runs a sync and directs output to a log named snapsync.log.

      A few days before the next SYNC operation, I run the scrub.
      Then the fix command is run one day after the scrub, to fix issues (bit rot, etc.) found by the scrub.
      In Scheduled Tasks, all of them are set up to send an e-mail of their output.
      (*And note that, even with the above, you can manually run a new SYNC operation after doing a lot of work that you want to ensure is backed up.*)

      That's how I do it. Others may have other ideas.
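Expressed as cron entries, that kind of schedule might look something like the following. The days and times below are only an example of the described order (sync, then scrub a few days before the next sync, then fix the day after the scrub), not an exact schedule:

```
# Example /etc/cron.d schedule; dates and times are illustrative.
# 1st of the month, 02:00: touch, then sync with a log
0 2 1 * * root snapraid touch; snapraid sync -l /root/snapsync.log
# 26th: scrub (-p 100 = full array, -o 13 = only blocks not scrubbed in the last 13 days)
0 2 26 * * root snapraid -p 100 -o 13 scrub
# 27th: fix the issues the scrub found
0 2 27 * * root snapraid -e fix
```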


    • jollyrogr wrote:

      Why bother using parity when you only have 1 data disk? Do you plan to add more disks in the future?
      SnapRAID is there to protect against data corruption, and the Rsync mirror is the true backup. I have 8TB disks and they are running at about 23% capacity. I will probably add disks some day.
    • jollyrogr wrote:

      I checked my scheduled tasks and the report I attached to this thread was the report I was using. I think I was using the script unmodified but didn't remember because it has been so long since I used it.

      Looking at the script, all the output from the sync and scrub commands is directed to the $TMP_OUTPUT variable, which is included in the email. Including only partial command output in the email might be beyond my bash capabilities.
      Thanks but don't worry, I found a better script in the meantime, you can get it in my previous posts. Works quite well.
    • @crashtest Thanks for the information, especially the order and explanation of the commands.

      Regarding not being able to unmount the failed destination drive: I knew it was a reference issue, but I "knew" nothing was referenced to it. Finally I picked up a stray comment somewhere about SnapRAID rules. Sure enough, I had set a rule to exclude the AppData folder on the SnapRAID Data disk but had mistakenly written the Rsync destination disk into the rule. The first sync I did showed a bunch of AppData lines in the output. I should have suspected something from the get-go, but I didn't have the experience to know what it meant.

      @jollyrogr At post 552 of this thread @crashtest stated a two-disk SnapRAID was possible, so I tried it. Here was my thinking:
      1. I want bit rot protection.
      2. I only have three 8TB disks.
      3. I’m not currently running short on space, and I didn’t want (or need) UnionFS to complicate the process.
      4. I don’t want to forfeit my full-disk mirror via Rsync.