MergerFS + Snapraid: balance data after adding a new disk

    • OMV 3.x
    • Resolved

      Hello,

      I might be in need of some help here!

      I just added a new 4TB disk to my current pool.

      Until now, I had:
      - 1x 4TB parity disk
      - 2x 4TB data disks
      - 1x SSD for the OS

      Using MergerFS & Snapraid, I'd like to use the mergerfs.balance tool to balance the data across the disks, as the other two disks are pretty much full.


      Problem is: I'm a bit confused about how to proceed.


      According to this thread, one way to install it would be through the command: wget raw.githubusercontent.com/trap…4c87/src/mergerfs.balance

      The thing is, I get an "error 400: bad request"... Anyway, I'm stuck at this point.


      Thanks in advance!


    • The download via wget worked for me. Double check that you have the URL correct.

      Source Code

      1. https://github.com/trapexit/mergerfs-tools/blob/8b507e5e392cb1a9a76b2958eb7ca87ed6dd4c87/src/mergerfs.balance
      2. or
      3. https://raw.githubusercontent.com/trapexit/mergerfs-tools/8b507e5e392cb1a9a76b2958eb7ca87ed6dd4c87/src/mergerfs.balance
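
      For what it's worth, a minimal sketch of the full sequence, assuming you want the script in /usr/local/bin and that your pool is mounted at /srv/mergerfs-pool (both paths are examples, adjust them to your setup):

      Source Code

      # download the raw script (second URL above), make it executable, then point it at the pool mountpoint
      wget -O /usr/local/bin/mergerfs.balance https://raw.githubusercontent.com/trapexit/mergerfs-tools/8b507e5e392cb1a9a76b2958eb7ca87ed6dd4c87/src/mergerfs.balance
      chmod +x /usr/local/bin/mergerfs.balance
      mergerfs.balance /srv/mergerfs-pool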
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • Thanks, that did the trick!

      As expected, after using the mergerfs.balance tool all the disks have pretty much the same amount of data.

      Except for the parity one. It's almost full... Any idea why? The data disks still have about a third of their space free.

      I did a sync in snapraid but that didn't help.
    • yayaya wrote:

      Except for the parity one. It's almost full... Any idea why?
      This is why snapraid tells you to use the largest disk in your pool for the parity disk. Your parity disk should not be in your mergerfs pool. If it is, mergerfs will put data on it and snapraid will put the parity file on it. Hence why it is fuller than the other disks.
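
      For illustration, this is roughly what that separation looks like in practice. The labels and mount paths below are made-up examples, not taken from your setup: the parity disk only appears in snapraid.conf, never in the mergerfs branch list.

      Source Code

      # /etc/snapraid.conf (excerpt) -- the parity file lives on a disk that is NOT part of the mergerfs pool
      parity /srv/dev-disk-by-label-Parity/snapraid.parity
      content /srv/dev-disk-by-label-Data1/snapraid.content
      content /srv/dev-disk-by-label-Data2/snapraid.content
      data d1 /srv/dev-disk-by-label-Data1
      data d2 /srv/dev-disk-by-label-Data2

      # /etc/fstab (excerpt) -- mergerfs pools only the data disks
      /srv/dev-disk-by-label-Data1:/srv/dev-disk-by-label-Data2 /srv/pool fuse.mergerfs defaults,allow_other 0 0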
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      Thanks. Here is what I get:

      :~# ls -al
      total 32
      drwx------ 3 root root 4096 Jan 24 2018 .
      drwxr-xr-x 26 root root 4096 Aug 29 14:44 ..
      -rw------- 1 root root 1145 Aug 31 19:21 .bash_history
      -rw-r--r-- 1 root root 570 Jan 31 2010 .bashrc
      -rw------- 1 root root 0 Jan 24 2018 dead.letter
      -rw-r--r-- 1 root root 268 Jan 10 2018 .inputrc
      -rw------- 1 root root 26 Aug 29 16:54 .nano_history
      -rw-r--r-- 1 root root 140 Nov 19 2007 .profile
      drwx------ 2 root root 4096 Jan 10 2018 .ssh
    • Ok, did a sync, nothing changed...


      $ sudo snapraid sync
      Self test...
      Loading state from /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
      Scanning disk SnapRaid1...
      Scanning disk SnapRaid2...
      Scanning disk SnapRaid3...
      Using 917 MiB of memory for the FileSystem.
      Initializing...
      Resizing...
      Saving state to /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
      Saving state to /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
      Saving state to /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
      Verifying /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
      Verifying /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
      Verifying /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
      Syncing...
      Using 32 MiB of memory for 32 blocks of IO cache.
      100% completed, 930 MB accessed in 0:00


      SnapRaid1 3% | *
      SnapRaid2 37% | *********************
      SnapRaid3 24% | **************
      parity 0% |
      raid 2% | *
      hash 2% | *
      sched 32% | ******************
      misc 0% |
      |___________________________________________________________
      wait time (total, less is better)


      Everything OK
      Saving state to /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
      Saving state to /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
      Saving state to /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
      Verifying /srv/dev-disk-by-label-SnapRaidDisk1/snapraid.content...
      Verifying /srv/dev-disk-by-label-SnapRaidDisk2/snapraid.content...
      Verifying /srv/dev-disk-by-label-SnapRaidDisk3/snapraid.content...
    • As before, do an ls -al to list the files on the parity drive. Then unmount the drive and run the ls -al again. Compare the two results. Don't forget to remount the drive.

      It's possible you have one or more other non-snapRaid files hiding beneath the mount. If this is not the case, you may have to delete the parity file and sync again.
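
      In shell terms the check would look roughly like this; the mount path below is a placeholder, use whatever /srv/dev-disk-by-label-... path your parity drive is actually mounted on:

      Source Code

      # list what is visible on the mounted parity filesystem
      ls -al /srv/dev-disk-by-label-Parity
      # unmount it and list the underlying directory -- anything still shown here
      # lives on the root filesystem, hidden beneath the mount point
      umount /srv/dev-disk-by-label-Parity
      ls -al /srv/dev-disk-by-label-Parity
      # don't forget to remount it
      mount /srv/dev-disk-by-label-Parity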
      You would have to remove the parity drive in the snapraid plugin first, then unmount it, then add it back into snapraid. Personally, I wouldn't bother doing it that way; I would unmount it in the shell, do the ls -al, then mount it again in the shell, and then verify that the OMV GUI shows it as mounted.
    • First off, thanks gderf for trying to point me in the right direction!


      I think this time I found the solution (see attached).

      In short, if you are in the same situation, this command seems to be the one: snapraid sync -R


      So basically, running 'ls -al' after unmounting and remounting the parity drive showed no change.

      I then decided to wipe the disk and rebuild the parity. That didn't fix anything either: SnapRAID immediately allocated the exact same amount of space on this drive when I ran the sync command.

      Then, once rebuilt, I did a test: a simple transfer of data to the NAS. While the data disks' available space decreased as expected, the parity disk's available space didn't change at all.

      In the end, I found a topic where someone made the following statement to another fellow who had the same issue: 'You had more data on the data disk in the past (in which case the parity space will be reused when you later add new data files)'


      The solution to get back to normal: snapraid sync -R

      SnapRAID immediately decreased the used space on my parity drive, and it is now syncing.

      I still have to wait 5 hours or so before making a final check, but things look promising!

      ---
      About the snapraid sync -R command, according to the manual:
      '
      -R, --force-realloc


      In "sync" forces a full reallocation of files and rebuild of the parity.
      This option can be used to completely reallocate all the files removing
      the fragmentation, but reusing the hashes present in the content file
      to validate data. Compared to -F, --force-full, this option reallocates
      all the parity not having data protection during the operation. This
      option can be used only with "sync".
      '
      ---
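
      If you want to check the effect on the parity drive yourself, something along these lines should show it (the parity file path is a guess, adjust it to your setup):

      Source Code

      # size of the parity file before the reallocation
      du -h /srv/dev-disk-by-label-Parity/snapraid.parity
      # force the full reallocation and parity rebuild
      sudo snapraid sync -R
      # size of the parity file afterwards
      du -h /srv/dev-disk-by-label-Parity/snapraid.parity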
      Images
      • backToNormal.PNG

