HOWTO: set up OMV6 with fresh hard drives to receive rsync from another machine

  • Hi there,



    I'm trying to put together best practices for someone coming new to OMV6 from other NAS software.
    In short, I will synchronize data from another machine to this OMV6 box.
    I hope the experienced users will let their light shine on this and help other beginners like me.


    I have 6 drives in total; 2 of them will serve as parity drives.

    • install plugin openmediavault-omvextrasorg 6.0.5
    • install plugin openmediavault-mergerfs 6.0.14
    • install plugin openmediavault-snapraid 6.0.3
    • install plugin openmediavault-sharerootfs 6.0-2
    • Storage/Disks: wipe all drives
    • Storage/File Systems: create a file system on each drive (/dev/sda1 --> /dev/sdf1)
    • Storage/mergerfs: create a name for the pool and choose the disks that span it (/dev/sda --> /dev/sdd)
    • Storage/Shared Folders: create a shared folder and use the pool as its file system
    • Services/Rsync: enable the rsync server
    • Services/Rsync/Server/Modules: create a module, give it a name, and point it at the shared folder on the pool (see the example command after this list)
    • Services/Snapraid/Drives: give each drive a name, e.g. /dev/sda --> disk1, /dev/sdb --> disk2, /dev/se --> parity, /dev/sdf --> parity1
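
    For reference, the push from the source machine into that rsync module would then look roughly like this (the user, IP address and module name here are just examples, adjust them to your own setup):

    rsync -av --progress /mnt/source-data/ someuser@192.168.1.x::pool/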


    What should be done when a disk fills up to the point where rsync fails? How do I redistribute the data?
    There are some other important settings which I don't fully understand.

    Which policy is best in the following situation:
    The original pool holds 12TB of data that needs to go into the OMV6 pool, and I want to keep the original directory structure.
    I want to fill each drive up to 90%.

    What is the purpose of the fstab option?


    So, what did I do wrong, or what could be done better?


    Kind regards and a happy new year to you all..
    Guy Forssman

    • Official Post

    What should be done when a disk fills up to the point where rsync fails? How do I redistribute the data?
    There are some other important settings which I don't fully understand.

    Which policy is best in the following situation:
    The original pool holds 12TB of data that needs to go into the OMV6 pool, and I want to keep the original directory structure.
    I want to fill each drive up to 90%.

    You can see here the different policies that you can configure in mergerfs.

    https://github.com/trapexit/mergerfs#policies

  • You can see here the different policies that you can configure in mergerfs.

    https://github.com/trapexit/mergerfs#policies

    Thanks for this link; it will clarify a lot for some users. However, for me it just makes things harder to understand.

    I just want to know which policy to choose so that one disk is filled after another, without overfilling any of them.

    I guess some people want their files scattered around, so a policy for that would be great too.


    I understand that anything with ep in its name is path-preserving; for the rest I'm a little lost.
    Some examples would clarify a lot for me.
    Kind regards,
    Guy

    • Official Post

    If you want to fill one disk after another, I suppose the eplfs policy will be the one you need. If this hasn't changed since I last used it, when the first disk is full you will have to manually create the folder on the next disk.

    Personally I think it is easier to let the data be spread evenly across the disks. In the event of a failure, rsync is very useful for managing the data even if it is scattered.

    The disk usage limit is configured independently of the policy; you can set whatever limit you want.


    Edit: correction, I think the most suitable policy would be epmfs. This will continue writing to the second disk when the first is full and the folder has been created on the second disk.
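
    Just to illustrate the idea (this is a hand-written example, the plugin normally generates the mount for you; the branch paths, mount point and the 50G value are placeholders), an epmfs pool with a minimum free space reserve would look roughly like this in fstab:

    /srv/dev-disk-by-uuid-aaaa:/srv/dev-disk-by-uuid-bbbb:/srv/dev-disk-by-uuid-cccc:/srv/dev-disk-by-uuid-dddd /srv/mergerfs/pool fuse.mergerfs defaults,allow_other,category.create=epmfs,minfreespace=50G 0 0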

  • Hi chente


    Thanks for the quick answer. Indeed, spreading the files can be an option for many people.
    However, in the event of a catastrophic failure where only the good disks remain, the files are scattered and thus hard to recover.

    This is the policy that was in place.



    What is the purpose of the fstab on this page?


    Rsync on the other machine stopped when trying to move a 50GB file:

    rsync: [receiver] write failed on "urbackup/DAPHendrickxUp/211230-1106_Image_C/Image_C_211230-1106.vhd" (in pool): No space left on device (28)

    As my policy is set to keep at least 4G free, I expected the file to be copied to the next drive.

    I have read an older thread about this problem where crashtest explains what happens.
    Is there a way to avoid this from the beginning, where mergerfs looks at the incoming file, sees that it's more than 4GB, and therefore puts it on the second drive?

    Since I have read numerous times here that in OMV one is supposed to use the GUI, how do I correct this full-disk problem once it has occurred?
    Shall I create a shared folder for each drive and then move files from drive 1 to drive 2? I know that I can use mc or even a file explorer, but that is not advised.

    Kind regards,
    Guy

  • Copying a file just streams bytes one after another, so the receiver has no clue how many bytes are to follow.

    So you can't avoid that situation when copying large files. That's the tradeoff for this storage policy.


    • Official Post

    However, in the event of a catastrophic failure where only the good disks remain, the files are scattered and thus hard to recover.

    Like I said before, rsync is very useful in this case. If you irretrievably lose a disk for any reason, you can continue to use your other data. To restore from a backup, for example, rsync will only copy the missing data. It does not matter which disk the data is on or whether it is scattered.
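
    For example, a restore run like this (the paths are just placeholders) only transfers what is missing or has changed on the pool, no matter which branch disk each file ends up on:

    rsync -av --progress /path/to/backup/ /srv/mergerfs/pool/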

    What is the purpose of the fstab on this page?

    To find out about this you should consult this thread. This plugin has undergone a complete rewrite in its migration to OMV6, and several changes since then.

    omv-extras plugins - porting progress to OMV 6.x

    Since I have read numerous times here that in OMV one is supposed to use the GUI, how do I correct this full-disk problem once it has occurred?
    Shall I create a shared folder for each drive and then move files from drive 1 to drive 2? I know that I can use mc or even a file explorer, but that is not advised.

    In general, all operations that can be done from the GUI should be done from the GUI. But it may happen that some things require going to the CLI, and this could be one of those cases. When one disk is full you will have to create the folder on the second disk manually, from the CLI. Then mergerfs will write to that folder.
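
    A minimal sketch of that manual step, assuming the second data disk is mounted at /srv/dev-disk-by-uuid-bbbb and the owner matches the folder on the first disk (adapt the path, folder name and owner to your setup):

    sudo mkdir -p /srv/dev-disk-by-uuid-bbbb/urbackup
    sudo chown someuser:users /srv/dev-disk-by-uuid-bbbb/urbackup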

  • Hi,

    I've been trying for several days.

    To find out about this you should consult this thread. This plugin has undergone a complete rewrite in its migration to OMV6, and several changes since then.

    omv-extras plugins - porting progress to OMV 6.x

    Looking here I see that the MergerFS plugin is ported, but I can't find the meaning of the fstab option.

    I changed the policy ..



    Rsync on the other machine...
    rsync -av --progress /mnt/QData/ guyf@192.168.1.247::pool



    The disk shouldn't be filled completely, but when I look in File Systems it fills up until it's full.

    There should be roughly 70GB of free disk space left.



    I created an empty directory structure with the same user/group ownership as on the full disks.
    Still it won't jump over to sdc1 or sdd1.
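
    For reference, checking each branch with something like this (the mount paths are placeholders) shows the free space per disk and whether the folder exists everywhere with the expected ownership:

    df -h /srv/dev-disk-by-uuid-*
    ls -ln /srv/dev-disk-by-uuid-*/urbackup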


    What am I doing wrong?



    Kind regards,
    Guy Forssman

    • Official Post

    I am a bit lost with this thread. I'm trying to gather information; please correct anything that is wrong.

    - You have an old server with information that you want to copy to a new server.

    - The new server runs OMV6, with 6 hard drives configured as 4 data disks (joined with mergerfs) and 2 parity disks for Snapraid.

    - You have created an rsync module in the shared folder that joins the 4 data disks with mergerfs.

    Questions:

    What operating system does the old server have?

    On the new server, is the data shared folder configured correctly? Can you see it from a client?


    Forget the fstab option. It is for setting the mounts directly in fstab; I think you probably don't need it. With the latest changes to this plugin it shouldn't be necessary.

    Forget Snapraid for the moment. You can configure it later if you want, they are only parity disks. This is completely independent of mergerfs.

    Maybe you are trying to do too many things at once.

  • Hi, thanks for the input.


    The old server is a Dell R520 running TrueNAS-core 12.0-u7.

    The new server is an HP ML310 G5 with OMV 6.


    Yes, I can see the pool (named "pool") from other clients.

    So I see this in PuTTY..


    First it complains about no space left, and then it continues anyway with other files.
    I checked, and indeed the files don't exist when rsync complains about space, and do exist when it doesn't complain.

  • Can you try to set the minimum free space larger than the largest files you have?


    I guess you have, let's say, 72GB of free space and you copy a file larger than that. So it starts to fill the drive until no more space is left, hence the error message.


    If it is only a one-off, you can try to fill the drive manually so that less than 70GB are left; then it should copy to the next drive.


    These are pure assumptions based on my understanding of how mergerfs works.
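
    As a rough sketch of that manual workaround (the path and size are just examples, and the filler file should be removed afterwards): reserve some space on the nearly full disk so that mergerfs prefers the next one, then run the rsync job again.

    sudo fallocate -l 60G /srv/dev-disk-by-uuid-aaaa/filler.tmp
    # ... run the rsync job again, then remove the filler:
    sudo rm /srv/dev-disk-by-uuid-aaaa/filler.tmp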


    • Official Post

    First it complains about no space left, and then it continues anyway with other files.
    I checked, and indeed the files don't exist when rsync complains about space, and do exist when it doesn't complain.

    This is strange, I don't know what to think. Maybe someone else has an idea.
