Posts by dsm1212

    Ok, I think I misunderstood what was being printed by "mdadm --detail /dev/md127". It says something like name: warehouse21:127, but when I take the array down and make the change it then shows warehouse21:md127. So the only thing wrong on my system was the name in mdadm.conf, and that got fixed up with your first suggestion (updating the config XML). Thanks.

    I'm wary of instructing you to edit config.xml, even though votdev suggested it, as it's not normal practice. But…


    In /etc/openmediavault/config.xml, look for the mntent entries; there should be one for each mount point. If any of them use dev-by-id instead of dev-by-uuid you'll want to change them. As shown above, I changed the fsname field to use the UUID. Don't change the dir field, because that would change the disk mount point. The UUID to use is retrieved from the shell with "blkid /dev/md/2" (or whatever md device you are fixing). I then ran "omv-salt stage run all", which is heavy-handed, but I couldn't tell which step would fix mdadm.conf (maybe just "omv-salt stage run fstab" followed by "omv-salt stage run mdadm"?). The run all command regenerates all the managed config files from what is configured in OMV. After this I rebooted. df shows the mount point now using the UUID instead of the name, and if I cat /etc/mdadm/mdadm.conf the names are gone, so the error is gone.
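    The fsname swap described above can be sketched like this. This is a minimal illustration on a sample config fragment: the file path, XML snippet, and UUID value are examples, not taken from a real system (on a real box the UUID comes from blkid and the regeneration step is omv-salt).

```shell
# Sample mntent fragment; values here are illustrative only.
conf=/tmp/mntent_demo.xml
cat > "$conf" <<'EOF'
<mntent>
  <fsname>/dev/disk/by-id/md-name-warehouse21-2</fsname>
  <dir>/vol/dev-disk-by-id-md-name-warehouse21-2</dir>
</mntent>
EOF

# On a real system the UUID would come from:
#   uuid=$(blkid -s UUID -o value /dev/md/2)
uuid="92f79319-8123-4cf6-b14e-79d5c5c017fc"   # example value

# Swap only the <fsname> field; <dir> stays put so the mount point is unchanged.
sed -i "s|<fsname>.*</fsname>|<fsname>/dev/disk/by-uuid/${uuid}</fsname>|" "$conf"
cat "$conf"

# Afterwards, regenerate the managed config files (heavy-handed but thorough):
#   omv-salt stage run all
```

    On the real config.xml you would of course edit the existing entry rather than generate one, but the shape of the change is the same.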


    But my remaining question is that the raid volume still has a non-POSIX name. Does that not matter, or should I fix it? The names have been removed, so I think I should just run "omv-salt deploy run initramfs" and then reboot again to get the arrays rebuilt. Since the names were removed from mdadm.conf I'm not sure what their new name will be.


    steve

    Rebooted fine, md command errors are gone. Do I need to bother to trigger the rebuild to change the name?


    # mdadm --detail /dev/md/2
    /dev/md/2:
    Version : 1.2
    Creation Time : Mon Jan 10 20:18:14 2022
    Raid Level : raid1
    Array Size : 204425344 (194.96 GiB 209.33 GB)
    Used Dev Size : 204425344 (194.96 GiB 209.33 GB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent

    Intent Bitmap : Internal

    Update Time : Sat Dec 27 22:52:32 2025
    State : clean
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    Consistency Policy : bitmap

    Name : warehouse21:2 (local to host warehouse21)
    UUID : ad55e1a4:e435382c:ecedb5f0:26762e64
    Events : 105434

    Number Major Minor RaidDevice State
    0 8 22 0 active sync /dev/sdb6
    2 8 54 1 active sync /dev/sdd6

    I changed the fsname line in the mntent entry (see below). I got the UUID (the one starting with 92) from "blkid /dev/md/2".


    I ran "omv-salt stage run all" because I couldn't see how to regenerate just mdadm.conf. fstab looks ok. mdadm.conf looks ok too, but it doesn't have the names anymore at all. Is that expected?


    Since this is independent of the mdadm name change, maybe I should reboot first with just this change if it looks ok.


    Or do I even need to bother with the initramfs update to trigger the reassembly? Things seem to be working with the current name on the assembled array, and removing the names from mdadm.conf stopped the messages.


    thanks

    steve


    <mntent>
      <uuid>a6ff651d-83d9-403f-96e8-ebd4a1aa5f6c</uuid>
      <fsname>/dev/disk/by-uuid/92f79319-8123-4cf6-b14e-79d5c5c017fc</fsname>
      <dir>/vol/dev-disk-by-id-md-name-warehouse21-2</dir>
      <type>btrfs</type>
      <opts>defaults,nofail,ssd</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
      <comment/>
      <usagewarnthreshold>85</usagewarnthreshold>
    </mntent>


    mdadm.conf now with no names:


    # definitions of existing MD arrays
    ARRAY /dev/md/127 metadata=1.2 UUID=038af736:eba0dd95:89b6824a:5c7b92d2
    ARRAY /dev/md/2 metadata=1.2 UUID=ad55e1a4:e435382c:ecedb5f0:26762e64
    ARRAY /dev/md/0 metadata=1.2 UUID=96e1785b:70856227:5534c7a0:4888fda0

    It looks like POSIX allows a hyphen, so I'm inclined to change the colon to a hyphen.


    But I'm a bit confused on the steps to fully fix this. I think this is what I'm reading:


    1. Edit mdadm.conf to make the names compliant (change : to -).
    2. Invoke "update-initramfs -u".
    3. Edit /etc/fstab and change any id/name-based mounts to use the UUID.
    4. Cross fingers and reboot.
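    Steps 1 and 2 can be sketched like this. The mdadm.conf lines below are samples just to show the edit (the sed pattern only touches the name= field, so the colons inside the UUIDs are left alone):

```shell
# Demo mdadm.conf; the array names and UUIDs here are examples.
mdconf=/tmp/mdadm_demo.conf
cat > "$mdconf" <<'EOF'
ARRAY /dev/md/2 metadata=1.2 name=warehouse21:2 UUID=ad55e1a4:e435382c:ecedb5f0:26762e64
ARRAY /dev/md/127 metadata=1.2 name=warehouse21:127 UUID=038af736:eba0dd95:89b6824a:5c7b92d2
EOF

# Step 1: colon -> hyphen, but only inside the name= field.
sed -i 's/name=\([^: ]*\):/name=\1-/' "$mdconf"
cat "$mdconf"

# Step 2 (on the real /etc/mdadm/mdadm.conf, as root), so the copy
# baked into the initramfs matches the edited file:
#   update-initramfs -u
```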


    For step 3 I only have one entry using by-id, and it was put there by openmediavault (at least it has ">>> openmediavault" comments around it). Is it ok to change this to a UUID, or will OMV put it back to a possibly wrong name at some point? Do I have to update some OMV config if I fix this name?
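    For step 3, the shape of the change is roughly this. The fstab entry, device path, and UUID below are examples, and note the caveat above: since OMV generates this block, the durable fix is to change the entry in config.xml and let OMV rewrite fstab, rather than hand-editing.

```shell
# Sample OMV-style fstab entry; values are illustrative only.
fstab=/tmp/fstab_demo
cat > "$fstab" <<'EOF'
# >>> [openmediavault]
/dev/disk/by-id/md-name-warehouse21-2 /vol/dev-disk-by-id-md-name-warehouse21-2 btrfs defaults,nofail,ssd 0 2
# <<< [openmediavault]
EOF

# On a real system: uuid=$(blkid -s UUID -o value /dev/md/2)
uuid="92f79319-8123-4cf6-b14e-79d5c5c017fc"   # example value

# Swap the device spec (first field) only; the mount point stays the same.
sed -i "s|^/dev/disk/by-id/[^ ]*|/dev/disk/by-uuid/${uuid}|" "$fstab"
cat "$fstab"
```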


    thanks

    I see I've run into the POSIX non-compliant name issue (anacron error).


    /etc/cron.daily/openmediavault-mdadm:
    mdadm: NewArray event detected on md device /dev/md2
    mdadm: Value "warehouse21:2" cannot be set as devname. Reason: Not POSIX compatible. Value ignored.
    mdadm: Value "warehouse21:127" cannot be set as devname. Reason: Not POSIX compatible. Value ignored.


    I can edit mdadm.conf to remove the colons, and that mostly makes the md command errors go away, but is this going to mess with my dev names? I have not rebooted yet. The mount point for one of these is "dev-disk-by-id-md-name-<hostname>-2". I believe that was generated somewhere by changing :2 to -2. Hopefully that is baked in once generated?


    Anyhow, after removing the colons the md commands don't error, but if I list the array it still has the non-compliant name. This seems to be a bear to fix, because the arrays have to be reassembled, which means a rescue disk (awkward for a headless NAS). I'm surprised I couldn't find a topic here about this. Suggestions on the right steps to take?

    The selection highlighting is still not working. As I described above:


    If the Open UI comes up in day mode, then when I open the Host Shell (or any container shell), the highlighting isn't shown; the selection itself still works though.


    If I started in night mode and then switch to day mode it works.


    If I start in day mode it doesn't work; if I then switch to night mode it works; if I then switch back to day mode it doesn't work again.


    Something isn't getting initialized right when starting in day mode.

    Ok, so this is totally repeatable. If I last used the cterm "Host shell" in night mode, then when I select "Open UI" it opens in night mode and selection is good; if I then switch to day mode selection also works. But if I was in day mode last, so that "Open UI" starts in day mode, then when I open "Host Shell" the selection is not visible. It is selecting, but it's not visually indicated.

    When I connect to the "host" in the container list, I get this output:


    mesg: cannot open /dev/pts/4: Permission denied


    I believe this has to do with logging in as root and then using su to a specified user. I think the answer is that the su should use "su -P", but I'm not sure.


    Also, maybe more important :-), text highlighting doesn't work in "day mode". When I tried to copy the mesg line above it showed no selection; in night mode it shows inverted.


    steve

    I use swag and don't have this issue. You should check the log/nginx folder to see if the logs tell you anything, and also check the swag docker log output. Otherwise you will have to look through your config. You must have set up some proxy configs? Maybe one of them is causing the issue. Maybe start fresh, make sure it's ok, and then add changes back in one at a time.
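    A quick way to scan the logs mentioned above. The container name and log path are assumptions from a typical swag setup (adjust to your install), and the sample log here is fabricated just to show the filter:

```shell
# Real-system commands (container name and path assumed):
#   docker logs --tail 100 swag
#   tail -n 100 /path/to/appdata/swag/log/nginx/error.log

# Filtering a sample nginx error log for error entries:
log=/tmp/nginx_error_demo.log
cat > "$log" <<'EOF'
2025/12/27 10:00:01 [error] 12#12: *3 connect() failed (111: Connection refused) while connecting to upstream
2025/12/27 10:00:05 [notice] 12#12: signal process started
EOF
grep '\[error\]' "$log"
```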

    So my original interface is a bonded link, and oddly that page lets me clear the dns field and save, so I have a way to remove the dup for now. But it seems odd that the simple interface page (which has the mac address above the dns field) won't let me save the dns list blank.