Unionfilesystem Plugin

  • Of course, that's the problem. When I wrote the mkconf script I thought that rw was the default for every branch, but it turns out it's only the default for the first one.

    Quote

    Readable and writable branch. Set as default for the first branch. If the branch filesystem is mounted as readonly, you cannot set it 'rw'.


    Quote

    Readonly branch which has no whiteouts on it. Set as the default for all branches except the first one. Aufs never issues a write operation or a whiteout lookup operation to this branch.
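    In fstab terms, the difference looks roughly like this (only a sketch; the device paths, mount point, and the create=mfs policy are examples, not what mkconf writes):

    # Without explicit modes, only the first branch defaults to rw; the others fall back to ro:
    # none /media/pool aufs br:/media/disk1:/media/disk2:/media/disk3,create=mfs 0 0

    # Appending =rw to every branch makes all of them writable:
    none /media/pool aufs br:/media/disk1=rw:/media/disk2=rw:/media/disk3=rw,create=mfs 0 0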


    @ryecoaaron could you try the latest commit and package it if it works? :)


    • Official Post

    Manually adding rw to fstab for each branch did fix the problem. So, I'm sure the latest commit will work. Will test anyway.


    • Official Post

    @HK-47, the change doesn't append =rw to the last branch.


    Fixed on github. Trying the changes now.



    • Official Post

    Version 1.1.1 is in the repo and working on my system. Try it out and let us know.


    Reboot after making a new pool :)


  • Awesome! Everything works as it should. Thank you very much for the great and fast support. :thumbup:


    Should I mark the thread as solved? If so, I will edit the title a little to make it more specific. Or everybody can keep using this thread as a general thread about the unionfs plugin and their personal issues. I don't know what the right procedure is. :)

  • I have tried to set up the quota for each drive, but on the last one I got this error message:


    Quote

    Failed to execute command 'export LANG=C; omv-mkconf quota 2>&1':
    quotacheck: Scanning /dev/sdf1 [/media/113193d2-253b-4b2c-b1d0-94b129fcbddb] done
    quotacheck: Checked 4 directories and 4 files
    quotacheck: Cannot remount filesystem mounted on /media/b82f286d-1b5e-49ee-96eb-f4941a002424 read-only so counted values might not be right. Please stop all programs writing to filesystem or use -m flag to force checking.



    Does this relate to your statement, @ryecoaaron?

    the change doesn't append =rw to the last branch

    • Official Post

    Sorry, I thought I was changing aufs entries only and forgot to test. It works on my system now.


    • Official Post

    No, the =rw has nothing to do with quotas. That was just necessary to get aufs to write to the drive with the most free space.


    I guess quota needs the filesystem remounted, but it can't be remounted because aufs is using it.
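    If you only need the check to complete, quotacheck's own hint about the -m flag should work (a sketch, not something omv-mkconf does; the mount point is the one from your error message):

    # Force the check without remounting read-only (-m), verbosely (-v), for user (-u) and group (-g) quotas:
    quotacheck -mvug /media/b82f286d-1b5e-49ee-96eb-f4941a002424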


    It really sucks when you have aufs working and then it stops working. My backup added 300 GB worth of duplicate files to my pool. I'm running dupfiles on it, but it is slow with 6 TB of files :(
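    If anyone wants a generic alternative to dupfiles, fdupes pointed at the branch mount points does the same job (the paths here are just examples; review the output before deleting anything):

    # List duplicate files across two branches without deleting anything:
    fdupes -r /media/disk1 /media/disk2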


  • Hello there,


    I don't know if this thread is still active, but I'm missing one thing in the AUFS plugin: ACLs.


    I can only control who can access/read/write a shared folder on an AUFS pool when creating the shared folder (the administrator/user/group settings during creation). After that, I normally change permissions through the ACL button (for example, when I want a share that is readable by one user but not by some others). Since the ACL button is greyed out, I have no option to modify these settings.


    Will this feature be added later or is there a workaround?
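    Maybe setfacl on the underlying branch directory would work as a stop-gap? Untested, and the path and user name here are just examples:

    # Grant read-only access to one user on the branch directory (the filesystem must support ACLs):
    setfacl -m u:user3:rX /media/disk1/shared
    # Verify the resulting ACL:
    getfacl /media/disk1/shared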


    MHDDFS is not an option because of its poor performance, so I wanted to go the AUFS way...


    Best regards

  • Strange, for some reason this does not work for me. If I share a folder, the default settings are admin r/w and users r/w. All newly created users are in the users group. If I create a shared folder with admin r/w and group/other no access, and add r/w under privileges for a user named test, he cannot access the share or write files to it. Am I doing something wrong?

  • So the privileges are the Samba authorization, okay. But how can I share a folder for just one user if I have three of them? If I create a shared folder with default settings, the group "users" has read/write access, and every user is in the users group. What I want to have is


    shared folder "folder1" r/w access "user2"
    shared folder "folder2" r/w access "user2"
    shared folder "shared" r/w access "user1" and "user2" r/o access "user3"


    for example :)
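    From what I've read, the Privileges tab ends up writing smb.conf stanzas roughly like this (only a sketch; OMV generates these itself, and the share names and paths are just the ones from my example):

    [folder1]
        path = /media/pool/folder1
        valid users = user2
        write list = user2

    [shared]
        path = /media/pool/shared
        valid users = user1, user2, user3
        read list = user3
        write list = user1, user2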


    Thank you very much for your time.

  • Just tested some things and it looks fine, but one problem remains: User1 and User2 have r/w to dir1. User1 created a folder, and User2 should be able to delete it (or a file). This currently does not work. How do I achieve that?


    Just got it; I think it should be the "inherit permissions" button in the SMB/CIFS section. I'll test that after the migration copy job is done, some TB left :)
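    For reference, the option behind that button is Samba's inherit permissions; in smb.conf the share would look something like this (only a sketch with an example path, since OMV writes the actual stanza itself):

    [dir1]
        path = /media/pool/dir1
        writable = yes
        # New files and directories take their permissions from the parent directory,
        # so group members keep write (and thus delete) access to each other's files:
        inherit permissions = yes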


    Just wanna say: OpenMediaVault is awesome! You guys did a great job!

  • Hey,
    I just realized today that the AUFS and MHDDFS plug-ins were merged into ONE single combined plug-in.
    I went to update because my current MHDDFS setup keeps having issues and gets disconnected or something, and I have to reboot the server to fix the glitch (drives don't show properly under filesystems either).
    Anyway, I went to update to the new plug-in and tried to set up mhddfs, but realized it is different, which is no big deal... but I did see that it only branches entire drives instead of folders as I did previously.
    If this is the case, I will need to pull all my data off and set everything up again, as it is a pain to move the data out of the HD1, HD2, etc. folders that I was using as branches. If that is the case, let me know and I will do it.
    My problem with this is: if the entire drive is used as a branch, won't the snapraid content etc. files be duplicated on the pool? I don't know if this will cause problems...
    Maybe I'm doing it wrong, or maybe it's a glitch I found, but I can only get it to branch entire drives, not folders... and I found that instead of creating a bind share it creates a new "drive" virtually. A little documentation on this new plug-in would help, as I may be screwing everything up and am too big of a noob to fix it manually via the terminal; see the sketch below for what I mean by folder branches.
    Thanks for all that you do once again.
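    For what it's worth, mhddfs itself can pool subfolders when mounted by hand, even if the plugin only offers whole filesystems. An fstab line like this is what I mean (hypothetical paths, not what the plugin generates):

    # Pool the HD1/HD2 folders instead of the whole drives:
    mhddfs#/media/disk1/HD1,/media/disk2/HD2 /media/pool fuse defaults,allow_other 0 0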

  • One last question....
    I was considering jumping back to AUFS from MHDDFS, and I was wondering if the glitches I originally had with AUFS would be solved now that we are on wheezy and the plug-in is further developed.
    The glitches I had involved duplicate files, permission issues (differences between branches and the pool), and weird issues during a test snapraid recovery.
    Is now the time to try AUFS, or should I stick with MHDDFS? Stability is #1, but an increase in speed (away from FUSE) would be nice...
