Posts by uga

    OK, I found the issue, or at least half of it.


    The problem was that I was trying to delete a shared folder that referred to another shared folder.

    In the Shared Folders GUI, both folders were marked as not referenced.

    Simply put: deleting folder B before folder A worked OK and then allowed me to delete folder A.


    Now I can also create new folders, which was not possible before. This part I did not understand the reason for: it must somehow be related to the previous situation, but...?
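
    In case it helps anyone else, a quick way to spot this kind of nesting should be to list the mounts under /sharedfolders (that's the path on my box; adjust if yours differs):

    findmnt -R /sharedfolders
    # if one entry's SOURCE path sits inside another entry's SOURCE, the
    # "inner" folder most likely has to go before the "outer" one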

    Hello


    When I try to delete an unreferenced shared folder (the Referenced column says No, and in fact they are not used by rsync, SMB or anything else) I get this error


    Can someone help, please? Thanks in advance!




    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; systemctl restart 'sharedfolders-BU\x2dVarie.mount' 2>&1' with exit code '1': Assertion failed on job for sharedfolders-BU\x2dVarie.mount.

    Errore #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; systemctl restart 'sharedfolders-BU\x2dVarie.mount' 2>&1' with exit code '1': Assertion failed on job for sharedfolders-BU\x2dVarie.mount. in /usr/share/php/openmediavault/system/process.inc:182
    Stack trace:
    #0 /usr/share/php/openmediavault/system/systemctl.inc(86): OMV\System\Process->execute(Array, 1)
    #1 /usr/share/php/openmediavault/system/systemctl.inc(160): OMV\System\SystemCtl->exec('restart', NULL, false)
    #2 /usr/share/openmediavault/engined/module/sharedfolders.inc(66): OMV\System\SystemCtl->restart()
    #3 /usr/share/openmediavault/engined/rpc/config.inc(194): OMVModuleSharedfolders->startService()
    #4 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(565): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusPO...', '/tmp/bgoutput9R...')
    #8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #9 /usr/share/openmediavault/engined/rpc/config.inc(213): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #10 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
    #11 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #12 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #13 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #14 {main}
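
    If it helps, I can also post what systemd reports for that unit; I guess something along these lines should show it (unit name copied from the error above):

    systemctl status 'sharedfolders-BU\x2dVarie.mount'
    journalctl -u 'sharedfolders-BU\x2dVarie.mount' --no-pager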

    Thank you :)


    I see that for the BananaPi M2+ the only Buster image marked as "stable" is the desktop one.

    Will some of the desktop bloat be a problem?


    Alternatively, they offer Focal and Bionic "server" images marked as stable. Would those work?

    I just got this BananaPi M2+ (my first Banana...) and I thought I'd use it as a second OMV box for local backup purposes and little else.

    My main OMV is a v4 on an Odroid HC2 and it's working a treat.


    I could not find a ready-made OMV 4 or 5 image to burn for the BananaPi M2+; should I use one made for another platform?


    Thanks in advance for help :)

    A.

    Assuming you have two public fixed IP addresses, I'd probably go for an active-active cluster. It's just a matter of setting two MX records (one for each computer) and having two A records with the same name (e.g. imap.mydomain.com) pointing to your IPs. You'd need to keep the mailboxes synced between the SBCs, as emails could get delivered to either of them. There are many options to choose from: GlusterFS, LizardFS, XtreemFS, DFS over Samba, or maybe even an rsync cron job would do. Admittedly, many of them are not available for ARM.
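
    Once the records are in place, a quick sanity check could be something like this (domain and host names are just examples):

    dig +short MX mydomain.com       # should list one MX entry per box
    dig +short A imap.mydomain.com   # should list both public IPs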


    If both machines are going to be behind a single IP using NAT, and/or your IP address(es) is/are dynamic, then rent a VPS.


    Depending on the VPS provider, that alternative may not be such a secure one. Or am I worrying too much?


    I have "nothing to hide but my normal privacy". What I would like is a situation where my emails are not being used as a data pool to study my habits. How can I be sure the VPS provider doesn't share my data with anyone?

    In terms of supported mail server suites, I was looking at iRedMail as a first option. Apparently, it works on all platforms where Debian 9 or Ubuntu 18.04 work.
    I found no mention of ARM exceptions on their website.


    And yes, the HC2 seems a good choice, if a bit overkill.
    Indeed, I already own one, which I am using as an OMV NAS machine for general repository purposes. Adding a second would be fine.
    Its sole drawback (as a NAS) is its tendency to overheat a bit, but I presume my mail server workload would not stress it anyway, so...

    Hello everyone


    As this is off-topic in the sense of not being NAS-related: I am looking into configuring a domestic mail server (Postfix-centered, with all the "usual" complementary packages for IMAP, antispam, etc.).


    My questions are:
    - which SBC should I choose?
    - any suggestions in terms of Linux-based software packages, apart from the usual suspects?


    Sheer performance, let's face it, is not an issue in my case:
    - Very few users (5-10 max)
    - Very limited traffic (way less than 100 emails per day)
    - 1TB mail archive space will be A LOT


    Reliability matters much more.
    I want:
    - a stable platform
    - some redundancy (not necessarily an active-active cluster; something I can bring up when the main one fails, even remotely - an RTO of 30 minutes and an RPO of 1 hour would be OK)


    Thanks in advance for any idea :)


    Ciao
    A.

    I smell something nasty here.


    Let me see if I'm right: you are fine as long as your server box runs only from its SD card and/or its system disk. When you connect other HDDs (via USB, maybe?) the system becomes unstable, if usable at all. Is that your case?


    If so, forget software for now. The first thing I would check is your power supply.
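
    A quick thing to look at (assuming your board logs this kind of event at all) is the kernel log right after the drives are attached, e.g.:

    dmesg | grep -iE 'under.?volt|over.?current|usb.*reset'
    # undervoltage / over-current / USB reset messages appearing when the
    # disks spin up usually point at the PSU rather than at the software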

    The fact that you are asking tells me that you are probably using an ARM device. If you aren't, the official Plex repo is enabled by omv-extras and you should see updates when they are available. That said, you should use Docker either way, since the plugin is going away - Installation and Setup Videos - Beginning, Intermediate and Advanced


    FWIW: I've been using the Docker container on OMV on an ARM box as per the above-referenced instructions, and it works OK.
    With an active PlexPass it also takes care of PMS version updates for you.
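
    For reference, my setup boils down to more or less this; it follows the usual linuxserver.io recipe, and the paths, IDs and timezone below are just placeholders:

    docker run -d --name=plex \
      --net=host \
      -e PUID=1000 -e PGID=100 \
      -e TZ=Europe/Rome \
      -e VERSION=latest \
      -v /srv/dev-disk-by-label-data/appdata/plex:/config \
      -v /srv/dev-disk-by-label-data/media:/data \
      --restart unless-stopped \
      linuxserver/plex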


    Very recently, though, I switched over to Emby for a number of (to me) good reasons. But that's another story :-)

    Yes, Duplicati is checking the target, which is OneDrive, so access is relatively slow. But this should have no impact on the originating server's resources. It should just "take longer" than it would if targeting a local server.


    Duplicati runs from within a container and (of course) the container data (like all Docker data) are on the HDD, not on the SD card.

    Mmm. From what you write I suspect I do have some investigating to do on my OMV, but I do not know where to dig.


    Network topology is totally straightforward in my case. Only one main switch: on one port I have the EdgeRouter, on two other ports I have two WAPs, and other ports serve 3-4 ARM servers. All clients (PCs, tablets) connect via Wi-Fi.


    No, I don't use snapshot mode (basically because I don't know how to). I just dumbly "push" from one folder to the rsync server daemon. That said, I'm quite positive that only what has changed gets sent anyway, which is very little on a daily basis.
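
    To be concrete, each job boils down to something like this (host and module names are made up here):

    rsync -av --delete /srv/dev-disk-by-label-data/share/ rsync://backupbox/share/
    # -a preserves permissions and times, --delete mirrors removals;
    # only files that changed since the last run actually get transferred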


    Duplicati also sends only changed blocks (not even whole files).


    All tasks of course run very early in the morning, as you mentioned (5 AM or so). On the one hand that's OK, since no one is using the system anyway, so who cares if it crawls for 20 minutes or so (all jobs included - yes, each one runs for something like 4 min, 8 min, 3 min... given the very few changes to be accounted for).
    But on the other hand this means that those tasks alone are able to flatten the server, as indeed nothing else is happening when they run, which, again, seems "odd" to me.

    So in a nutshell the answer to my question is "yes". Alright, I'll live with it.


    @Adoby
    Yes, I'll launch Duplicati on the backup box to offload some work.
    I had already thought about it, together, maybe, with "pulling" rsync from the remote machine instead of "pushing" into it from OMV.
    Yes, I'm using a "normal" filesystem on OMV (EXT4, unencrypted indeed).
    And yes, I'm using a GbE switch. All involved servers have wired Ethernet connections to it. The main router is an EdgeRouter X, which elsewhere has proved powerful enough to support my internet bandwidth (980-something Mbit/s download, 390-something upload - the latter being the one involved in the Duplicati backup ops).

    Hello all


    I back up data from my OMV 4 in two ways:
    1 - via rsync push to a LAN-based rsync daemon server. The work is carried out by a number of rsync jobs defined and managed via OMV's rsync plugin.
    2 - via Duplicati (installed as a container), sending encrypted backups to a OneDrive account.


    It all works, but not satisfactorily. The problem is that both job types seem to "suck all hardware resources dry" while they run.
    In practical terms, what happens is that while either Duplicati or rsync is doing its stuff:
    - it's impossible to log into OMV's GUI
    - it's also impossible to SSH in (a timeout occurs)


    The underlying hardware is an Odroid HC2.
    The flashmemory plugin is indeed up and running.


    Is this to be considered "normal" (meaning, is this something one has to expect from an ARM-based platform, etc.) or is there something I need to check in my configuration?


    Thanks for any help


    A.

    Linux seems to be pretty good at it and OMV is Linux.

    Of course :)
    I actually meant something a bit more elaborate but I realize I did not convey it. Let me explain better.
    When I am in front of elaborate systems like OMV I prefer acting "from above them", instead of "from underneath".
    My fear is that fiddling with routing, networking, etc. "at the Debian level" ("underneath OMV") might create problems "at the upper floor".
    Is that an unfounded fear? I would be happy if so! :)




    Pretty sure the container host has to be on the subnet for the container to be on a different subnet.


    I'm not sure I understood your response.
    Are you saying that for a container to sit on 10.20.x.x, the OMV host underneath it must also sit on 10.20.x.x?
    If that is true, then Docker, or OMV's implementation thereof, is not suitable for what I want, which is having various "containers" (be they Docker containers, or VMs, or whatever), each on a different subnet, all on just one piece of hardware. (I know 100% this can be done via VMs; I don't know about "VMs hosted by OMV".)
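
    Just to make concrete what I'm after, I was imagining something along the lines of Docker's macvlan driver (values below are purely hypothetical, and I have no idea yet whether this plays nicely with OMV):

    docker network create -d macvlan \
      --subnet=10.20.0.0/24 --gateway=10.20.0.1 \
      -o parent=eth0 net1020
    docker run -d --network net1020 --ip 10.20.0.50 some/image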

    I have no experience with a DDNS provider, but I would suggest doing a "stupid test" first: get your current public IP address, grab your smartphone (i.e., connect from another network), and check whether ports 80 and 443 are indeed landing on the Nextcloud machine.
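
    Something as dumb as this, run from a machine on that other network (the IP below is just a placeholder for your public one), already tells you a lot:

    curl -I http://203.0.113.10/
    curl -kI https://203.0.113.10/
    # any HTTP response means the forward works; a timeout means it does not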


    When I set up the same configuration some days ago (with a static IP), I spent three hours yelling before realising that my router's port forwarding configuration was NOT as "easy to set up" as it had appeared it would be.