Posts by Forssux



    I didn't want to make you feel bad. I'm a beginner here, and of course my post makes more sense to me than to anybody else.

    When I read your post, this is indeed what I was meaning to say. For clarification: we can rename the long UUID to whatever we want.
    Some will call it c, d, e, etc., others will use sda, sdb, sdc, or Disk1, Disk2, or 1, 2; you get my drift.
    Strange that ZFS disks aren't recognised as having data and can be easily wiped. I assume you'll have the ZFS plugin.

    Kind regards,
    Guy Forssman

    Thanks for answering...

    I have run this command as guyf and as root, still no luck.
    I have now changed some settings in the GUI.
    At first only the user was correctly filled in, but the group showed a 1002 number.
    Then I unticked chroot and everything was fine.
    With mc I changed the owner and group of a file, picture.jpg, and made it 555, not writable by owner/group/other.

    Still, after rsync -av --progress /mnt/QData/Test-Truenas-Main root@192.168.1.7::guyf

    the owner/group of picture.jpg was the same as on the original file, and the file also had 0775 permissions, just like the original.
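
    As far as I understand, this is exactly what -a does: it expands to -rlptgoD, so every run re-applies the source's permissions, owner and group, overwriting my local changes. A sketch of a run that would leave the destination's ownership and modes alone (flags from the rsync man page; the paths simply repeat the command above):

        rsync -rltv --no-perms --no-owner --no-group --progress /mnt/QData/Test-Truenas-Main root@192.168.1.7::guyf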

    Thanks for helping



    OMV is using the non-predictable device files in the UI because they are shorter and more meaningful for the normal user. I don't think you'll be happy when there is something like this /dev/disk/by-id/scsi-SATA_ST3200XXXX2AS_5XWXXXR6 or /dev/disk/by-uuid/. Under the hood ONLY predictable device files are used.

    That is indeed almost unreadable.
    And indeed these device names can change on reboot.
    But OMV6 doesn't need to show the real /dev/sda1 in the filesystem section of the GUI.
    It's feasible that OMV6 assigns, for readability, an sda-style name to a disk like ad3ee177-777c-4ad3-8353-9562f85c0895.
    And since this is only a mapping, the user could rename the sda to whatever he wants.
    Internally OMV6 uses the UUID, so that never changes.
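
    A hedged aside: one way to get exactly this kind of stable, human-readable name today is a filesystem label (assuming ext4 and a filesystem currently at /dev/sda1; both names here are only examples):

        e2label /dev/sda1 Disk1        # give the filesystem a readable label
        ls -l /dev/disk/by-label/      # labels resolve to the right device on every boot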

    Hi there,

    So I would want the following system.

    System               | File System     | Sync
    TrueNAS Core 12.0-U7 | zfs             | live
    OpenMediaVault 6     | ext4 + MergerFS | every week
    2 large hard drives  | zfs             | every 2 weeks




    System               | Users                          | Groups
    TrueNAS Core 12.0-U7 | qnap, Eveline, guyf, Elizabeth | Truenas-Main, Eveline, Guy, Elizabeth
    OMV6                 | guyf                           | Truenas-Main


    Dataset/Directory        | Owner        | Access Groups RW | Access Users RW | Access Groups R | Access Users R
    pool/QDocumenten         | Truenas-Main | Truenas-Main     | Truenas-Main    | Truenas-Main    | every member of group
    pool/QGame               | Elizabeth    | Elizabeth        | Elizabeth       | Truenas-Main    | every member of group
    pool/QFoto               | guyf         | guyf             | guyf            | Truenas-Main    | every member of group
    pool/QFoto-not-published | guyf         | guyf             | guyf            | guyf, Eveline   | guyf, Eveline
    pool/guyf                | guyf         | guyf             | guyf            | guyf            | guyf



    When I do my rsync, I would like OMV6 to become a copy of the TrueNAS.
    It needs to inherit and apply permissions and timestamps, and of course delete extraneous files from the destination dirs.
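
    A minimal sketch of such a mirror run (host and module reused from my earlier command purely for illustration; -a carries permissions and timestamps, --delete removes extraneous destination files, and --dry-run previews it all safely):

        rsync -av --delete --dry-run --progress /mnt/QData/ root@192.168.1.7::guyf

    Dropping --dry-run then performs the real sync.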


    I tried via the gui but got this...



    Then I went to PuTTY and saw very strange behavior.






    These are my settings for the rsync module.
    I changed the user to guyf, and this didn't help either.




    What am I doing wrong?

    I know this is rsync between different kinds of NAS but that's the situation at home.



    Kind regards.
    Guy

    Thanks all for the very valuable input.

    Because I'm on OMV6 and had experimented a lot, I pulled all the drives and did a fresh install.

    Now of course OMV6 had no choice but to follow the order in which I inserted the drives.
    Not something you want to do every time, indeed.

    It's true these drive names can change on reboot, but the OMV6 GUI itself still uses them, and that's what showed up by default under Filesystems. I really wanted the names correct so I knew what's on each drive.

    And then, thanks to raulfg3, I found a little window on the upper right side, and this is the result.
    Thank you all...

    Hi There,

    I manually altered and prepared my hard disks. I even wrote /dev/sda, /dev/sdb, etc. on the labels of the hard drives.


    Then I put every HD in the machine with a gap of 10 seconds.
    The result is this...
    How can I change this labeling done by OMV6 and make it correspond with my preparations?

    The reason I ask is that I know what is on /dev/sda or /dev/sdg, etc.

    Even though sda and sdf were the last to be inserted into the PC, they aren't last in the list.
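
    A hedged way to see which physical disk ended up with which name, regardless of insertion order (the columns are just a useful selection):

        lsblk -o NAME,SIZE,LABEL,UUID,SERIAL   # match SERIAL to the sticker on each drive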

    Kind regards,
    Guy Forssman


    Hi, thanks for the input.


    Old server is Dell R520 on TrueNAS-core 12.0-u7

    New server is HP ML310 G5 with OMV 6


    Yes, I can see the pool named "pool" from other clients.

    So I see this in PuTTY...


    First it complains about no space left, and then it continues anyway with other files.
    I checked, and indeed the files don't exist on the destination when rsync complains about space, and do exist when it doesn't complain.

    Hi,

    I've been trying for several days.

    To find this out you should consult this thread. This plugin has undergone a complete rewrite in its migration to OMV6, and several changes so far.

    omv-extras plugins - porting progress to OMV 6.x

    Looking here I see that the MergerFS plugin has been ported, but I can't find the meaning of the fstab option.
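
    For context: a mergerfs pool is, in the end, one mount line, so I assume the option relates to the entry the plugin writes to /etc/fstab. A purely illustrative (not plugin-generated) entry would look like:

        /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs,minfreespace=4G  0  0

    The branch paths, mount point and options here are assumptions for the example.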

    I changed the policy ..



    Rsync on the other machine...
    rsync -av --progress /mnt/QData/ guyf@192.168.1.247::pool



    The disk shouldn't be filled completely, but when I look in Filesystems it fills the disk until it's full.

    There should be roughly 70 GB of free disk space.



    I created an empty directory structure with the same user/group owner as on the full disks.
    Still it won't jump over to sdc1 or sdd1.
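
    One way to verify which create policy the mounted pool is really using: mergerfs exposes its runtime settings as extended attributes on a hidden control file in the mount point (the mount path /srv/pool is an assumption):

        getfattr -n user.mergerfs.category.create /srv/pool/.mergerfs

    If this still reports a path-preserving (ep*) policy, that would explain why branches without the expected paths are skipped.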


    What am I doing wrong?



    Kind regards,
    Guy Forssman

    Hi chente


    Thanks for the quick answer. Indeed, spreading the files can be an option for many people.
    However, in the event of a catastrophic failure where only the good disks remain, the files are scattered and thus hard to recover.

    This is the policy that was in place.



    What is the purpose of the fstab option on this page?


    Rsync on the other machine stopped when trying to move a file of 50 GB.

    rsync: [receiver] write failed on "urbackup/DAPHendrickxUp/211230-1106_Image_C/Image_C_211230-1106.vhd" (in pool): No space left on device (28)

    As my policy is to use each disk until 4 GB of space is left, I expected it to be copied to the next drive.

    I have read about this problem in an older thread where crashtest explains what happens.
    Is there a way to avoid this from the beginning, where mergerfs looks at the incoming file, sees that it's more than 4 GB, and therefore puts it on the second drive?
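
    As far as I can tell, mergerfs cannot know a file's final size at creation time, but it has an option aimed at exactly this failure: moveonenospc. When a write hits "No space left on device", the in-progress file is moved to another branch and the write is retried. An illustrative fragment for the pool's option string (wherever the plugin lets you add options):

        moveonenospc=true   # on ENOSPC, migrate the file to another branch and keep writing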

    As I have read numerous times here that in OMV one is supposed to use the GUI, how do I correct this full-HD problem once it has occurred?
    Shall I create a shared folder for each drive and then move files from Drive1 to Drive2? I know that I can use mc or even a file explorer, but that is not advised.
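
    Outside the GUI, the mergerfs author also publishes a mergerfs-tools collection whose balance script moves files between branches until their usage roughly evens out. A sketch, with the pool's mount point assumed:

        mergerfs.balance /srv/pool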

    Kind regards,
    Guy

    You can see here the different policies that you can configure in mergerfs.

    https://github.com/trapexit/mergerfs#policies

    Thanks for this link, it will clarify a lot for some users. However, for me it just makes things harder to understand.

    I just want to know which policy to choose to fill one disk after another without overfilling any of them.

    I guess some want their files scattered around, so a policy for that would be great too.
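
    If I read the policy table right (my interpretation, not authoritative): ff, "first found", writes to the first branch that has room, so it fills one disk after another, and minfreespace keeps it from overfilling; mfs, "most free space", always picks the emptiest branch, so it scatters files. As option strings:

        category.create=ff,minfreespace=4G   # fill disk after disk, stopping 4G short of full
        category.create=mfs                  # spread files across all disks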


    I understand that anything with ep in its name is path-preserving; for the rest I'm a little lost.
    Some examples would clarify a lot for me.
    Kind regards,
    Guy

    Hi there,



    I'm trying to put together the best practices for coming to OMV6 fresh from other NAS programs.
    In short, I will synchronize data from another machine to this OMV6.
    I hope the experienced users will let their light shine upon this and help other beginners like me.


    I have 6 drives in total; 2 of them will serve as parity drives.

    • Install plugin openmediavault-omvextrasorg 6.0.5
    • Install plugin openmediavault-mergerfs 6.0.14
    • Install plugin openmediavault-snapraid 6.0.3
    • Install plugin openmediavault-sharerootfs 6.0-2
    • Storage/Disks: wipe all drives
    • Storage/Filesystems: create a filesystem on each drive, /dev/sda1 --> /dev/sdf1
    • Storage/MergerFS: create a name for the pool and choose the disks that span it, /dev/sda --> /dev/sdd
    • Storage/Shared Folders: make a shared folder and use the name of the pool as its filesystem
    • Enable rsync
    • Services/Rsync/Server/Modules: create a name for the rsync module that uses the name of the spanned pool
    • Services/SnapRAID/Drives: create a name per drive, like /dev/sda --> disk1, /dev/sdb --> disk2, /dev/sde --> parity, /dev/sdf --> parity1 (see the config sketch below)
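
    For reference, the plugin ultimately maintains a SnapRAID config; a minimal hand-written equivalent of the layout above might look like this (the /srv mount paths are assumptions based on how OMV mounts filesystems):

        # /etc/snapraid.conf (sketch)
        parity /srv/disk-e/snapraid.parity
        2-parity /srv/disk-f/snapraid.2-parity
        content /var/snapraid.content
        content /srv/disk-a/snapraid.content
        data disk1 /srv/disk-a
        data disk2 /srv/disk-b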


    What is to be done when a disk is filled to the point where rsync fails? How do I redistribute the data?
    There are some other important settings which I don't fully understand.

    Which policy do I best use in the following situation:
    The original pool has 12 TB of data and needs to get into the OMV6 pool; I want to keep the original directory structure.
    I want to fill each drive to 90%.
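
    As far as I can tell, minfreespace takes an absolute size rather than a percentage, so "fill to 90%" on a 4 TB drive means leaving about 400 GB free. An illustrative option string (epff assumed, since the directory structure should be preserved on each branch):

        category.create=epff,minfreespace=400G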

    What is the purpose of the fstab option?


    So what did I do wrong, or what can be done better?


    Kind regards and a happy new year to you all..
    Guy Forssman

    You could always look for an older "real" server like a Dell or Supermicro.
    I bought mine, an R520 with 80 GB of RAM and dual E5-2470s, this summer for a mere 400€. It has 4 1-Gb NICs too, so for keeping your data safe and playing with VMs, look for older servers. QNAP/Synology are just too expensive and lack the power.
    My situation is TrueNAS on the R520, OMV6 on an HP ML310 G5, and 2 separate disks kept away from the house in case of a fire.

    I started out using UnionFS and SnapRaid on OMV. Over time I realized that maintaining this system requires some dedication and is not very intuitive. In case of problems it does not seem like a simple system.

    Researching ZFS I found that it is very easy to use and practically maintenance-free. It has some disadvantages: it is less flexible in the initial configuration (disks of the same size, inability to remove or add disks to the pool). That aside, it seems easier to maintain. I don't really have to do anything. And also, if a disk breaks, I don't lose the data I'm working with at the moment; SnapRAID is not instantaneous. SnapRAID doesn't like databases; with ZFS I don't have that problem. ZFS allows me to compress the entire filesystem if I am interested in doing so.

    I like the simplicity so I reconfigured my drives, had 4 4TB drives, bought another one and mounted a 5x4TB RaidZ1 which I hope will last me a long time. I set it up with the OMV5 plugin, it was very straightforward. Although from CLI it should be very simple too. I trust this plugin will be ported to OMV6 with the same functionality. The rest of the disks I use as backup on another server with MergerFS, without SnapRaid.

    The MergerFS plugin now has the same functionality as the UnionFS plugin; otherwise I wouldn't have been able to deploy it on the OMV6 system I have running now as a backup of the TrueNAS.


    Of course ZFS can be recovered on other machines; whether my wife can and will do it remains to be seen.

    The overhead of ZFS is that a RAIDZ1 of 8*4 TB leaves you with less space for your files than a RAID5 of the same drives.
    And if you take the 80% fill rule into account, it gets even worse.
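
    Rough numbers for illustration (ignoring TB/TiB conversion and exact metadata figures): RAID5 over 8*4 TB gives (8-1)*4 = 28 TB usable; RAIDZ1 nominally also (8-1)*4 = 28 TB, minus a few percent of padding and metadata overhead, and the 80% rule then caps practical use at roughly 0.8 * 28 ≈ 22 TB.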


    Also imagine you have TrueNAS with Z1, and an OMV6 with MergerFS and SnapRAID with one parity disk, plus 2 loose disks (synced every month). A disaster occurs and you lose 2 disks on the TrueNAS: everything is gone then, and you need to recover from backup. This will take a long time for the whole pool.

    If the same occurred on the OMV6, the rebuild time would be shorter.


    Both have their merits. I hope that a wizard will eventually get into OMV that, upon first start, just asks some questions, prepares the disks, and makes the filesystems.


    Have a fine New Year.
    Guy

    @chente I stand corrected; there's one thing that still remains, and a new one.


    ZFS surely has more overhead compared to regular RAID. With the same hard drives I had less remaining space to use than on my QNAP.


    The wife factor... In case of the death of the home administrator, ZFS is far less well known than even ext4 and can give the remaining partner problems recovering the data.

    I would indeed stick to OMV6, as that can work with the oldest computers.
    I have TrueNAS running on a second-hand Dell R520 with 8 drives, 80 GB of ECC RAM, and 8*4 TB of storage.


    ZFS needs roughly 1 GB of RAM per TB of hard disk. ZFS uses a lot more disk space for itself. But ZFS has snapshots and self-healing.
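
    Worked out for my own box: 8*4 TB is about 32 TB raw, so by that rule of thumb roughly 32 GB of RAM, which the 80 GB in the R520 covers comfortably.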

    For older computers I would suggest OMV6 with MergerFS and SnapRAID.
    First put a filesystem on the disks, then make a pool/volume in MergerFS, and then set up SnapRAID on the individual drives.