Posts by ptruman

    Resurrecting zombie-thread :)

    I was starting to look into Salt, but I can't easily see how to set up new states, as (to me at least?) omv-salt isn't plain 'salt'?

    I've got around this for now by updating /etc/default/openmediavault and adding


    and then

    omv-salt stage run prepare

    It would be good to have the SSH Challenge option in the GUI :)

    I can now boot. However, I see an error at boot about /dev/md0 still being in use (and about /sys/module/md_mod/parameters/new_array), but it's now booting AND mounting my data volume. I suspect this is down to my kludge rather than a "proper" fix, so I don't deem it stable (or at least not without an explanation).

    I'm rather sceptical about what has changed where :\

    cat messages.1 | grep "Jun 11" | grep pve
    Jun 11 07:48:51 openmediavault cron-apt: pve-headers-5.4.119-1-pve pve-kernel-5.4.119-1-pve
    Jun 11 07:48:51 openmediavault cron-apt: libwebp6 libwebpmux3 pve-headers-5.4 pve-kernel-5.4

    So I think apt or initramfs (after the pve kernel download - the last was Jun 11th...) did something

    Forcing the initramfs rebuild as suggested broke the older installed kernels, so I couldn't (easily) recover - until I applied the /etc/mdadm/mdadm.conf fix, which is manual and a bit worrying, given that the file states (at the top):

    # This file is auto-generated by openmediavault (
    # WARNING: Do not edit this file, your changes will get lost.

    So what broke my /etc/mdadm/mdadm.conf file? How do I get this back? From what I can see in the OMV saltstack files, all OMV is doing is:

    - name: "mdadm --detail --scan >> /etc/mdadm/mdadm.conf"

    So what caused it to default back to /dev/md0 when I clearly already have a /dev/md0? After all, /dev/md/debian:0 is a symlink to /dev/md0.

    Should I change /etc/mdadm/mdadm.conf to be:

    ARRAY /dev/md/debian:0 metadata=1.2 name=debian:0 UUID=90dece3a:d04b3040:2c4fa91c:9c57ccb2
    ARRAY /dev/md/debian:1 metadata=1.2 name=debian:1 UUID=e4673560:6b1f8d7a:be5a9a98:fcb7ccc9
    ARRAY /dev/md/OMVDataRAID metadata=1.2 name=OMVDataRAID UUID=bbfef1bf:cd1f8aab:2421bb14:c4cc3028

    I can't see what is clashing with /dev/md0... :\

    If I (or OMV) run a --detail --scan now, I/it will get the same output back into the file - but I can't see why it changed in the first place.


    Do I need to redo the update-initramfs -k all -u again?
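    For anyone following along, here's a sketch of the check I'd want to do before re-running the rebuild (the paths are the stock Debian/OMV ones, and the "compare then rebuild" ordering is my assumption, not OMV's own procedure):

    ```shell
    # Compare the ARRAY lines mdadm would generate now with what's in the file
    mdadm --detail --scan | sort > /tmp/mdadm-scan.conf
    grep '^ARRAY' /etc/mdadm/mdadm.conf | sort > /tmp/mdadm-file.conf
    if diff -u /tmp/mdadm-file.conf /tmp/mdadm-scan.conf; then
        echo "conf matches scan - safer to rebuild initramfs"
        update-initramfs -k all -u
    else
        echo "conf and scan disagree - fix /etc/mdadm/mdadm.conf first"
    fi
    ```

    That way the rebuild only bakes the config into the initramfs images once the file and the live scan agree.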

    Small update - /etc/mdadm/mdadm.conf listed the drives thus:

    ARRAY /dev/md/debian:0 metadata=1.2 name=debian:0 UUID=90dece3a:d04b3040:2c4fa91c:9c57ccb2

    ARRAY /dev/md/debian:1 metadata=1.2 name=debian:1 UUID=e4673560:6b1f8d7a:be5a9a98:fcb7ccc9

    ARRAY /dev/md0 metadata=1.2 name=OMVDataRAID UUID=bbfef1bf:cd1f8aab:2421bb14:c4cc3028

    Given /dev/md/debian:0 symlinks to /dev/md0 - the last line is never going to work....
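    A quick way to confirm that symlink relationship (device names taken from my setup above):

    ```shell
    # Resolve the named link to its real node - on my box this is /dev/md0
    readlink -f /dev/md/debian:0
    # And list every name -> node mapping udev has created
    ls -l /dev/md/
    ```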

    So, I changed the last line to:

    ARRAY /dev/md127 metadata=1.2 name=OMVDataRAID UUID=bbfef1bf:cd1f8aab:2421bb14:c4cc3028

    and mdadm --assemble --scan worked

    I could then mount /dev/md127 and things reappeared.

    Trying reboot

    Right, major issue now

    I could previously reboot to an old kernel. Now that I've done the update-initramfs -k all -u step (as suggested), NONE of the kernels load my OMV data drive.... :\

    If I try mdadm --assemble --scan I get

    mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
    mdadm: /dev/md0 is already in use

    If I cat /sys/module/md_mod/parameters/new_array I get permission denied

    it is u+w only (no read bit)

    If I set u+r on new_array and cat it, I get "Operation not permitted" (and I then set u-r on it again)
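    For reference, I believe that's expected rather than a fault: new_array looks like a write-only sysfs knob that mdadm pokes to ask the kernel to create an array, so reads fail even as root. You can inspect its mode without changing any bits:

    ```shell
    # Show the attribute's permissions without touching them
    stat -c '%A %n' /sys/module/md_mod/parameters/new_array
    # Typically shows --w------- (write-only by design)
    ```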


    All I have (after initramfs) is:

    update-initramfs: Generating /boot/initrd.img-5.4.119-1-pve

    update-initramfs: Generating /boot/initrd.img-5.4.114-1-pve

    update-initramfs: Generating /boot/initrd.img-5.4.106-1-pve

    update-initramfs: Generating /boot/initrd.img-5.4.103-1-pve

    update-initramfs: Generating /boot/initrd.img-5.4.101-1-pve

    Interestingly, I've also noticed that my "many kernels" have been reduced back to 5 (I had another thread about them racking up)

    So something in the OMV-Extras/reboot IS tidying them up....

    It's softraid, and I have UPS

    But from what I can see in the default nut config, the shutdown command in /etc/nut/upsmon.conf was /sbin/shutdown

    whereas shutdown ACTUALLY lives in /usr/sbin/shutdown

    If that's the case, that could affect all OMV users? :)

    So it didn't shutdown...
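    A hedged sketch of how I'd verify and fix that (SHUTDOWNCMD is the standard upsmon.conf directive; the exact line and arguments in your file may differ):

    ```shell
    # Find where shutdown really lives on this system
    command -v shutdown || readlink -f /sbin/shutdown
    # Check what upsmon is currently configured to run
    grep '^SHUTDOWNCMD' /etc/nut/upsmon.conf
    # If it points at a non-existent /sbin/shutdown, change it to e.g.:
    # SHUTDOWNCMD "/usr/sbin/shutdown -h +0"
    ```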

    Should I run the sudo update-initramfs -k all -u under the new (failed) kernel, or from the working one (and then reboot)?


    Due to a powercut my system rebooted.

    When it did, I had no Docker containers, and when I checked, my OMV share drive was missing.

    I have rebooted back to use 5.4.106-1-pve #1 SMP PVE 5.4.106-1 and it works. I used that as a panic default, but I can see in logs I was under 5.4.114-1 before.

    However, under 5.4.119-1-pve, when I boot, it fails to mount the OMV data partition.

    mdadm --assemble --scan then reports /dev/md0 is already in use

    Clues? (also to warn others!)

    the output of

    cat /etc/default/openmediavault | grep TEAM

    By running the other commands, I got no error.

    I commented out the TeamViewer line in sources.list.d/omvextras.list, and now it's running OK.

    But that isn't a good solution :-(

    Came here this morning to see why it was broken, and found the teamviewer repo error....

    Wondered why, as I don't have it enabled - and then found it was being reported as a repo signing error, but the repo is actually returning 403 to everyone who tries to connect.

    I presume they've broken it nicely, so I've temporarily commented it out, and (amusingly) have nothing to update.

    Hopefully it's back in 24 hours or so...
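    If anyone wants to script the temporary disable, something like this should work (the filename comes from the post above; the match pattern is my assumption):

    ```shell
    # Comment out the TeamViewer repo line, keeping a backup of the file
    sed -i.bak '/teamviewer/s/^deb/# deb/' /etc/apt/sources.list.d/omvextras.list
    apt-get update
    ```

    Reversing it later is just a matter of restoring the .bak file (or un-commenting the line) once the repo is fixed.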

    I've got the firmware image also (just omitted it as it wasn't headers/kernel).

    They all show up on the grub menu - but that's where I noticed - after the crash I had, the boot menu was "large" :)

    root@openmediavault:/home/root# apt-get --purge autoremove
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Well, it's certainly adding them - I didn't know if it had a limit (I'll soon find out if it hits 11). I know votdev has commented before that it's normally an owner's responsibility to remove them, in case stuff breaks :)

    apt-get --simulate autoremove shows nothing to remove.

    dpkg -l | grep pve shows I have:

    5.4.103-1 is currently active, pending a reboot to switch to 5.4.106-1

    Slightly off topic question, but does the extras functionality remove the older proxmox kernels?

    (or is this a manual task?)

    I've now got 10 proxmox/pve kernels and 10 recovery images showing in the dropdown/boot screen.

    (also had a random crash yesterday, with maxed out IO-WAIT - not sure what occurred/where - but it rebooted to the latest (yesterday) kernel and then downloaded another one today :)
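    To see what has accumulated and prune it by hand (the package-name pattern is assumed from the dpkg output elsewhere in this thread; never purge the running kernel):

    ```shell
    # List installed pve kernel/header packages, oldest first
    dpkg -l | awk '/^ii/ && $2 ~ /^pve-(kernel|headers)-5\./ {print $2}' | sort -V
    # The one you must keep:
    uname -r
    # Then remove a specific old one, e.g.:
    # apt-get purge pve-kernel-5.4.101-1-pve pve-headers-5.4.101-1-pve
    ```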

    Thanks (and I see similar from that poster and another now) - I did search for these packages before I posted and got no results (mind you, I got an error when I submitted the post too, so...)

    I shall leave it to do its thing.

    I have a salient tale of "not paying attention" before on a previous OMV install when I installed ntpd and it broke *all the things*. Stuff lives in Docker now!


    dpkg -l |awk '/^ii/ && $3 ~ /bpo/ {print $2}' 

    ...will show what is installed from backports, which (in my case) is: