Posts by molnart

    I wasn't really able to resolve my issue, except by taking care of the order in which I start and shut down my VMs.

    however, I have a different problem:

    When adding a new folder to my shared folders, sometimes this folder cannot be seen on the client. The folder is visible on the OMV instance under /export, but not on the client. I have checked the folder and file permissions, but they match the settings of other folders that are visible. Any idea what could be wrong here?
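    For what it's worth, a sketch of the checks I know of, comparing what the server exports with what the client sees (the server address is a placeholder):

    ```sh
    # On the OMV box: list what is currently exported, then force a re-export
    exportfs -v
    exportfs -ra

    # On the client: ask the server what it is offering (replace <omv-ip>)
    showmount -e <omv-ip>
    ```

    If the folder shows up in exportfs -v but not on the client, a remount (or restarting autofs) on the client side would be the next thing to try.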

    Recently I have moved OMV to a Proxmox VM and now I am trying to move the 20 Docker containers running on OMV to a separate VM I have already set up.

    A lot of containers depend on data provided by OMV (Plex, Transmission, etc.) and I am thinking about the best way to share it with my Docker VM. I have set up the NFS shares and used autofs on the Docker VM to mount them easily. However, my problems start when the OMV VM is down for some reason (maintenance, etc.). In this scenario my Docker VM behaves unreliably: e.g. I am not even able to shut it down properly, as it tries to unmount the (at that moment) non-existent shares for ages (I gave up after approx. 30 minutes).
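    One idea I am considering: mounting the shares "soft" with short timeouts, so NFS calls fail instead of hanging forever when OMV is down. A sketch of an autofs map with those options (the hostname "omv" and the paths are placeholders):

    ```
    # /etc/auto.nfs -- example autofs map; "omv" and the paths are placeholders
    # soft + short timeo/retrans make NFS calls error out instead of hanging
    media  -fstype=nfs4,soft,timeo=30,retrans=2  omv:/export/media
    ```

    The trade-off is that soft mounts can surface I/O errors to applications if the server disappears mid-transfer.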

    Any ideas or best practices?

    It's surprisingly easy. I have used the omv-backup plugin previously to regularly back up my OMV installation via the fsarchiver method.

    So basically what I did:

    1) Installed Proxmox on a new drive

    2) Created a virtual machine for OMV with a blank 64 GB HDD and passed through the drive containing the omv-backup to it

    3) Booted SystemRescueCD on the OMV VM

    4) Used this guide to restore the fsarchiver backup to the new disk

    5) Passed through the original disks to the OMV VM

    6) Booted up the OMV VM and set up networking via omv-firstaid

    Everything is working, but I am getting an mdadm error during boot like the one described here: RE: mdadm: no arrays found in config file or automatically. I haven't looked very deeply into that for now.
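    From what I have read, that warning usually just means the initramfs carries an empty mdadm.conf. If no md arrays are actually in use, something like this should silence it (untested on my box so far):

    ```sh
    # append the current (possibly empty) array list to mdadm.conf,
    # then rebuild the initramfs so the warning goes away at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    ```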

    Probably you are past that step already, but I did the very same thing over the weekend, and mergerFS and SnapRAID work without any issues. However, there is one thing to consider: it seems to be possible to run OMV in an LXC container instead of a VM, utilizing the host kernel with much less CPU overhead. Currently I am looking further into how to convert the existing VM into an LXC.

    Lately I am getting an increasing number of errors from my UPS, flooding my inbox with notifications like "Communications with the UPS eaton3s cannot be established", "Communications with the UPS eaton3s are lost", "status failed (1) -- Init SSL without certificate database" and so on. I have even added pollinterval=15 to my configuration file, but it did not really help; the number of errors is increasing constantly, and sometimes the UPS even disappears from lsusb (which makes me think this is a hardware issue).

    Lately I got fed up, removed the omv-nut plugin and apt purged all the nut* packages, but my console is still regularly interrupted by messages like the one below:

    Broadcast message from nut@omv.localhost (somewhere) (Tue Jun 23 22:30:06
    UPS eaton3s is unavailable

    Any idea why this is happening?
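    These broadcasts should come from upsmon, so I assume something NUT-related survived the purge. What I plan to check (the exact service name may differ per setup):

    ```sh
    # is a NUT monitor still running somewhere?
    systemctl status nut-monitor.service
    ps aux | grep -i upsmon

    # leftover monit config still watching the UPS?
    grep -ri nut /etc/monit/ 2>/dev/null
    ```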


    I have configured, via the WebUI, omv-backup and snapraid-diff to run automatically once per week. However, OMV scheduled both to run on Sunday at 00:00:01 (or so). This is a bit annoying, because both jobs running at the same time slow down the system, and it is also the time when I am most likely to be doing something that needs CPU from my server (e.g. watching a movie via Plex). So I would prefer to run these jobs at different times: snapraid-diff every Wednesday at 3:00 AM and omv-backup every Friday at 3:00 AM. I could not see how to set this via the WebUI. Could you provide any help? Thanks
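    For context, what I am after would look like this in plain cron syntax (the command paths are just my guess at what OMV generates, and I understand OMV may overwrite hand-edited cron files on the next configuration change):

    ```
    # m  h  dom mon dow   command
    # snapraid-diff: every Wednesday at 03:00
    0 3 * * 3   /usr/sbin/omv-snapraid-diff
    # omv-backup: every Friday at 03:00
    0 3 * * 5   /usr/sbin/omv-backup
    ```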

    Hi, I have an idea that I would need your opinions on:

    My network setup is the following: I have an ONT from my ISP providing 600/60 connectivity. Behind this ONT there is my own router handling all the routing, firewall rules, etc. The ONT is set to DMZ mode towards my router (I know bridge mode would be better, but it's complicated with the ISP's tech support). There is also a gigabit switch behind the router, where all my devices are connected.

    Now my issue is that my router is only a megabit one, but due to the large amount of custom config in it, I am reluctant to replace it. Essentially my internal network is a gigabit one, but my internet gateway is bottlenecked by the router. Not a big deal; I don't have much use for that ISP speed anyhow.

    Here's my idea:

    eth0 of my OMV server will connect through my router, as it does right now. eth1 of the OMV server would be connected to the ONT, getting the full WAN connectivity. Now I would somehow expose Transmission through this network to the net, probably through macvlan. But I need my other Docker containers to be able to communicate with Transmission, while keeping them isolated from the WAN.
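    As far as I understand Docker networking, the shape of it would be something like this (subnet, gateway and the network/container names are placeholders; the addresses are from the documentation range):

    ```sh
    # macvlan network on the WAN-facing NIC (addresses are placeholders)
    docker network create -d macvlan \
      --subnet=192.0.2.0/24 --gateway=192.0.2.1 \
      -o parent=eth1 wan_net

    # internal-only bridge, no route to the outside
    docker network create --internal lan_net

    # transmission sits on both; the other containers only join lan_net
    docker network connect wan_net transmission
    docker network connect lan_net transmission
    ```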

    The big question is: is that scenario even possible? Based on my limited Docker knowledge I have some doubts.

    Another option would be to play around with VLANs on my switch, but that would be a discussion for a different subforum.

    Your proposed solution does not seem to work. I took the SSD with openmediavault installed from my old server and put it in place of the ODD drive. The SSD booted fine, but OMV did not boot fully due to the missing disks.

    From maintenance mode I did a grub-install /dev/sdc && update-grub2 (I tried it without the update-grub2 part as well, just in case, as some guides elsewhere on the net did not mention it).

    But when adding a drive into the bays, the system does not boot anymore. I can see in the boot messages that it first tries to boot from the USB DriveKey, then from the hard drive (so my BIOS boot order is correct), but it fails.

    Any ideas what else I can try?

    Edit: turns out I'm just plain stupid. I forgot to set the bootable flag for the USB drive :rolleyes: Works now.
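    For anyone finding this later, setting the flag is a one-liner (the device name is a placeholder, adjust to your stick):

    ```sh
    # mark partition 1 on the USB stick as bootable
    parted /dev/sdX set 1 boot on
    ```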

    Hi, I need some help migrating my OMV setup.

    So far I have used a Gen7 MicroServer, but I outgrew its CPU power due to playing around with Docker and running several containers, so I got myself a used Gen8 MicroServer, quickly swapped the drives and was hoping everything would run as before... WRONG!

    My Gen7 setup is 1 SSD boot drive for the OMV system & Docker, 3 data drives and 1 drive for SnapRAID.

    So there are several issues:

    1) The Gen8 needs a workaround in order to boot from the ODD SATA port, so I temporarily removed the SnapRAID backup disk and put the SSD in its place until I figure out the SSD booting. (I am planning to use this approach…m-ssd-install-on-odd-bay/ but first I need to tinker with the power connector.)

    2) The Gen8 has a RAID controller and it seems I need to initialize the disks in the controller before doing anything. From what I read here, configuring each disk as a single RAID 0 volume should keep my data intact. By adding the disks one by one I should be on the safe side, adding the SnapRAID disk last, once I see everything is working.

    3) Another option seems to be to move the /boot folder of my OMV installation on the SSD to a microSD or USB device and boot from there, leaving the SATA controller in AHCI mode. Not really sure how to do that though, but I guess Google is my friend.

    I am slightly leaning toward option 3, but there may be problems with OMV rewriting the fstab with each configuration change (I guess I need to edit it to point to the correct boot and system drives?)

    What is your experience/recommendation in this regard?

    EDIT: Also, here's a post claiming that Ubuntu is able to pass the unconfigured SATA drives through to the OS with only the SSD configured as RAID 0. Can anyone confirm that this works for Debian/OMV as well?
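    In case I go with option 3, my rough understanding of the steps (device names are placeholders, and this is untested on my hardware):

    ```sh
    # 1) format the USB stick and copy the existing /boot onto it
    mkfs.ext4 /dev/sdX1
    mount /dev/sdX1 /mnt
    cp -a /boot/. /mnt/

    # 2) add an fstab entry mounting the stick at /boot (use UUID= from blkid)

    # 3) install GRUB on the stick and regenerate its config
    grub-install --boot-directory=/mnt /dev/sdX
    update-grub
    ```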

    to finally align with the Debian release scheme!

    I think OMV has always been aligned with the Debian release scheme; each OMV version came out with a new Debian release.

    But when Debian+OMV are finally released we have the perspective on having an up-to-date base

    Debian has never been up to date and never will be. The up-to-date Linux distro is Arch.

    Edit: "UPS service disabled but running" seems confusing too...

    Seems to be running normally:

    $ systemctl status monit
    ● monit.service - LSB: service and resource monitoring daemon
    Loaded: loaded (/etc/init.d/monit; generated)
    Active: active (exited) since Fri 2020-04-17 20:58:16 CEST; 5 days ago
    Docs: man:systemd-sysv-generator(8)
    Process: 27475 ExecReload=/etc/init.d/monit reload (code=exited, status=0/SUCCESS)

    See below. The last two commands don't say much, as I cannot enable the nut service due to the above error.

    I am trying to change the SMTP settings in the Notification section for sending emails, and when trying to apply the settings I get this error, which seems unrelated to email notifications.

    Any idea what could be wrong?

    All the plugins are removed as per post #38; apt list --installed does not show them, but config.xml still has all the old references, including letsencrypt and so on. I'm getting up the courage to touch config.xml manually :)
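    Before touching it, I will at least back it up and see what is still in there. xmlstarlet (a separate package) can list the remaining service sections without editing anything:

    ```sh
    # back up first -- OMV regenerates service configs from this file
    cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak

    # list the element paths still present under services (read-only)
    xmlstarlet el /etc/openmediavault/config.xml | grep '^config/services' | sort -u
    ```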