After OMV7 -> OMV8: "Service mdmonitor is already enabled, and is dead"

  • Upgrading from 7 to 8 was smooth, and overall OMV is such a great package.
    However, after rebooting and logging in again, there was a yellow banner "Pending configuration changes" stating that the following modules would be updated: mdadm, monit, nut, postfix, smartmontools, and zfszed.


    Analyzing the error message that followed, it appeared that the mdadm update failed (see the attachment Clipboard.txt further down) because the restart_mdmonitor_service state reported the service as "dead". Configuring and saving the notification settings did not help.


    Curiously enough, the first line in the attachment mentions omv-salt, which I believed to be associated with OMV 7.x?
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color mdadm 2>&1' with exit code '1': <hostname>:
    Finally, I ran the fix7to8upgrade script following some leads from a blog (I was unable to find an official pointer), but this did not change the situation.

    It would be great if someone could find the time to advise me on what may be causing this error, because I am unable to get past this yellow banner.
    --theo


    omv-salt applies to omv8 too.


    What is the output of:


    dpkg -l | grep openme

    sudo omv-salt deploy run mdadm

    omv 8.0.10-2 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.0.5 | compose 8.1.3 | cterm 8.0 | borgbackup 8.1.3 | cputemp 8.0 | mergerfs 8.0 | scripts 8.0.1 | writecache 8.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thank you for responding so swiftly! To get past the yellow banner I uninstalled the MD plugin, but reinstalling it failed. Running the second command made the error reappear.


    root@hp-elite:~# dpkg -l | grep openme

    ii openmediavault 8.0.9-1 all openmediavault - The open network attached storage solution

    ii openmediavault-autoshutdown 8.0 all OpenMediaVault AutoShutdown Plugin

    ii openmediavault-backup 8.0.1 all backup plugin for OpenMediaVault.

    ii openmediavault-compose 8.1.2 all OpenMediaVault compose plugin

    ii openmediavault-cputemp 8.0 all cpu temperature plugin for openmediavault

    ii openmediavault-cterm 8.0 all openmediavault container exec terminal plugin

    ii openmediavault-hosts 8.0-3 all openmediavault hosts plugin

    ii openmediavault-kernel 8.0.6 all kernel package

    ii openmediavault-keyring 1.0.2-2 all GnuPG archive keys of the openmediavault archive

    ii openmediavault-md 8.0.2-1 all openmediavault Linux MD (Software RAID) plugin

    ii openmediavault-nut 8.0-7 all openmediavault Network UPS Tools (NUT) plugin

    ii openmediavault-omvextrasorg 8.0.2 all OMV-Extras.org Package Repositories for OpenMediaVault

    ii openmediavault-onedrive 8.0-5 all openmediavault OneDrive plugin

    ii openmediavault-salt 8.0 amd64 Extra Python packages required by Salt on openmediavault

    ii openmediavault-sharerootfs 8.0-1 all openmediavault share root filesystem plugin

    ii openmediavault-wol 8.0 all OpenMediaVault WOL plugin

    ii openmediavault-zfs 8.0.3 amd64 OpenMediaVault plugin for ZFS

    root@hp-elite:~# ^C

    root@hp-elite:~# sudo omv-salt deploy run mdadm

    HP-ELITE.lan:

    ----------

    ID: remove_cron_daily_mdadm

    Function: file.absent

    Name: /etc/cron.daily/mdadm

    Result: True

    Comment: File /etc/cron.daily/mdadm is not present

    Started: 17:25:35.326097

    Duration: 0.597 ms

    Changes:

    ----------

    ID: divert_cron_daily_mdadm

    Function: omv_dpkg.divert_add

    Name: /etc/cron.daily/mdadm

    Result: True

    Comment: Leaving 'local diversion of /etc/cron.daily/mdadm to /tmp/_etc_cron.daily_mdadm'

    Started: 17:25:35.327124

    Duration: 9.579 ms

    Changes:

    ----------

    ID: configure_default_mdadm

    Function: file.managed

    Name: /etc/default/mdadm

    Result: True

    Comment: File /etc/default/mdadm is in the correct state

    Started: 17:25:35.336873

    Duration: 11.364 ms

    Changes:

    ----------

    ID: configure_mdadm_conf

    Function: file.managed

    Name: /etc/mdadm/mdadm.conf

    Result: True

    Comment: File /etc/mdadm/mdadm.conf is in the correct state

    Started: 17:25:35.348356

    Duration: 9.073 ms

    Changes:

    ----------

    ID: mdadm_save_config

    Function: cmd.run

    Name: mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    Result: True

    Comment: Command "mdadm --detail --scan >> /etc/mdadm/mdadm.conf" run

    Started: 17:25:35.358086

    Duration: 4.786 ms

    Changes:

    ----------

    pid:

    25044

    retcode:

    0

    stderr:

    stdout:

    ----------

    ID: restart_mdmonitor_service

    Function: service.running

    Name: mdmonitor

    Result: False

    Comment: Service mdmonitor is already enabled, and is dead

    Started: 17:25:35.377191

    Duration: 213.554 ms

    Changes:


    Summary for HP-ELITE.lan

    ------------

    Succeeded: 5 (changed=1)

    Failed: 1

    ------------

    Total states run: 6

    Total run time: 248.953 ms

    [ERROR ] Service mdmonitor is already enabled, and is dead

    root@hp-elite:~#

  • votdev

    Added the Label Upgrade 7.x -> 8.x
  • votdev

    Added the Label OMV 8.x

    What is the output of: journalctl -u mdmonitor.service -n 200


  • The output is as follows:

    root@hp-elite:~# journalctl -u mdmonitor.service -n 200

    Feb 04 15:56:06 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 15:56:06 hp-elite mdadm[2498730]: mdadm: No array with redundancy detected, stopping

    Feb 04 15:56:06 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 15:56:06 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 15:56:06 hp-elite mdadm[2498749]: mdadm: No array with redundancy detected, stopping

    Feb 04 15:56:06 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    -- Boot 9907253642da4c1b98620f100e0a3527 --

    Feb 04 16:10:33 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:10:33 hp-elite mdadm[5137]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:10:33 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:21:26 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:21:26 hp-elite mdadm[39652]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:21:26 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:27:25 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:27:25 hp-elite mdadm[46764]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:27:25 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:27:25 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:27:25 hp-elite mdadm[46780]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:27:25 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:29:18 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:29:18 hp-elite mdadm[49361]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:29:18 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:31:51 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:31:51 hp-elite mdadm[51030]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:31:51 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:32:18 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:32:18 hp-elite mdadm[51312]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:32:18 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:36:34 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:36:34 hp-elite mdadm[53687]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:36:34 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 16:38:42 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 16:38:42 hp-elite mdadm[54769]: mdadm: No array with redundancy detected, stopping

    Feb 04 16:38:42 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.


    root@hp-elite:~#


  • PS: To be clear, there is no multi-disk array present on this host. The MD plugin was installed just in case an array (with two HDDs) might be moved here from another host. But MD did not cause any issues, that is, until after upgrading from 7 to 8.


    To be clear there is no multidisk array present on this host.

    That shouldn't change anything about what I am asking you for.


    What is the output of:


    sudo systemctl status mdmonitor.service

    sudo systemctl start mdmonitor.service

    sudo systemctl status mdmonitor.service


  • Great to hear. Here is the output for all three commands:

    root@hp-elite:~# sudo systemctl status mdmonitor.service

    ○ mdmonitor.service - MD array monitor

    Loaded: loaded (/usr/lib/systemd/system/mdmonitor.service; static)

    Active: inactive (dead)

    Docs: man:mdadm(8)


    Feb 04 17:08:18 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 17:18:42 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 17:18:42 hp-elite mdadm[21257]: mdadm: No array with redundancy detected, stopping

    Feb 04 17:18:42 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 17:25:35 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 17:25:35 hp-elite mdadm[25056]: mdadm: No array with redundancy detected, stopping

    Feb 04 17:25:35 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 17:54:44 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 17:54:44 hp-elite mdadm[38799]: mdadm: No array with redundancy detected, stopping

    Feb 04 17:54:44 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    root@hp-elite:~#

    root@hp-elite:~# sudo systemctl start mdmonitor.service

    root@hp-elite:~# sudo systemctl status mdmonitor.service

    ○ mdmonitor.service - MD array monitor

    Loaded: loaded (/usr/lib/systemd/system/mdmonitor.service; static)

    Active: inactive (dead)

    Docs: man:mdadm(8)


    Feb 04 17:18:42 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 17:25:35 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 17:25:35 hp-elite mdadm[25056]: mdadm: No array with redundancy detected, stopping

    Feb 04 17:25:35 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 17:54:44 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 17:54:44 hp-elite mdadm[38799]: mdadm: No array with redundancy detected, stopping

    Feb 04 17:54:44 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    Feb 04 18:22:58 hp-elite systemd[1]: Started mdmonitor.service - MD array monitor.

    Feb 04 18:22:58 hp-elite mdadm[63002]: mdadm: No array with redundancy detected, stopping

    Feb 04 18:22:58 hp-elite systemd[1]: mdmonitor.service: Deactivated successfully.

    root@hp-elite:~#




    It seems like mdmonitor doesn't stay running when you have no arrays. But it exits with a 0 return code when starting, restarting, or enabling on my dev system. All I can suggest is to uninstall the plugin until you need it, unless Volker has an idea how to deal with the odd service behavior.
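    If anyone wants to confirm this on their own box, a quick check along these lines shows the unit starting cleanly (exit 0) and then exiting. This is just a sketch to run on the OMV host itself (as root or with sudo); it is guarded so it only probes systemd where systemd is actually running:

    ```shell
    #!/bin/sh
    # Sketch: starting mdmonitor returns 0 even though the unit exits right away
    # when no redundant array exists. Only probes systemd where available.
    if command -v systemctl >/dev/null 2>&1 && [ -d /run/systemd/system ]; then
        systemctl start mdmonitor.service
        rc=$?
        state=$(systemctl is-active mdmonitor.service)
        echo "start returned: $rc, state: $state"
        result="probed"
    else
        echo "no systemd here; run this on the OMV host"
        result="skipped"
    fi
    ```

    On a host with no redundant arrays, the state printed right after the start should be "inactive", matching the journal entries above.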


  • Uninstalling the MD plugin is a perfectly acceptable solution at this stage, although it would be more satisfying to know what caused this.

    So I uninstalled the MD plugin, which produced an error message. Perhaps it is significant, maybe not.
    In any event, with MD uninstalled (I refreshed the plugins view to make sure of that) and unable to cause further trouble, all the other modules listed under "Pending configuration changes" were updated.

    Thank you for your support.


    Failed to read from socket: Connection reset by peer


    OMV\Rpc\Exception: Failed to read from socket: Connection reset by peer in /usr/share/php/openmediavault/rpc/rpc.inc:172

    Stack trace:

    #0 /usr/share/php/openmediavault/rpc/proxy/json.inc(94): OMV\Rpc\Rpc::call()

    #1 /var/www/openmediavault/rpc.php(45): OMV\Rpc\Proxy\Json->handle()

    #2 {main}


    although it would be more satisfying to know what caused this.

    From what I am seeing, saltstack is enabling and starting the unit but it exits immediately. When saltstack checks to see if the unit is running right after starting, it throws an error. I don't think this is the fault of saltstack. I think it is weird behavior from the mdmonitor service.
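    In other words, the failure is just the start-then-verify ordering. A toy model of that check (plain shell, not OMV's or saltstack's actual code — the function names are made up) reproduces the reported result:

    ```shell
    #!/bin/sh
    # Toy model (hypothetical names): the start call returns 0, but by the time
    # the state of the unit is verified, it has already run and exited.
    start_unit() { return 0; }          # like "systemctl start": succeeds, exit 0
    unit_state() { echo "inactive"; }   # mdmonitor exited: no redundant arrays
    start_unit
    [ "$(unit_state)" = "active" ] && result="True" || result="False"
    echo "Result: $result"
    ```

    The start succeeds, the verification sees "inactive", and the state is reported as Failed — exactly the "already enabled, and is dead" message in the deploy output above.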

    So I uinstalled the MD plugin which rendered an error message. Perhaps it is significant, maybe not.

    It isn't. omv-engined restarts when installing or uninstalling a plugin. It can sometimes catch an update in flight. It doesn't hurt anything.


  • votdev

    Added the Label resolved
