Change mountpoint from dev-disk-by-id to dev-disk-by-label

  • Hello all,


I am combining two old OMV systems into a new box. For simplicity I've labelled the main drives (using e2label) as disk[1-8].


Under /srv/ I have two drives that haven't mounted with labels. This is the output of lsblk:

    How can I change sdc1 and sdd1 to mount at dev-disk-by-label-disk[6-7] like the rest?
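For reference, a quick way to see which partitions carry labels and where they are mounted, using standard util-linux tools (just a sketch; adjust the device names to your system):

lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT

sudo blkid /dev/sdc1 /dev/sdd1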


    Thanks for reading.


• OK, answering my own question... I'd be grateful if someone with more Linux wisdom could check this over, though!


I went into /srv and created the mountpoint directories (note the full /srv/ paths; with a bare leading slash the directories would end up in / instead):

sudo mkdir /srv/dev-disk-by-label-disk6

sudo mkdir /srv/dev-disk-by-label-disk7


    Then

    sudo mount /dev/sdc1 /srv/dev-disk-by-label-disk6

    sudo mount /dev/sdd1 /srv/dev-disk-by-label-disk7
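
To double-check that the mounts landed where expected, findmnt can verify each mountpoint (just a sketch, not part of the original steps):

findmnt /srv/dev-disk-by-label-disk6

findmnt /srv/dev-disk-by-label-disk7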


    Then

I ran sudo nano /etc/fstab to edit fstab and replaced

    dev-disk-by-id-ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T1134175-part1

    dev-disk-by-id-ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T1152903-part1

    with

    dev-disk-by-label-disk6

    dev-disk-by-label-disk7
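
For reference, a resulting fstab entry would look roughly like this (hypothetical; the mount options are illustrative, keep whatever options OMV originally wrote):

/dev/disk/by-label/disk6 /srv/dev-disk-by-label-disk6 ext4 defaults,nofail 0 2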


    Rebooted, all seemed fine.


Finally, I went into /srv again and checked that the old mountpoints were empty:

    dev-disk-by-id-ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T1134175-part1

    dev-disk-by-id-ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T1152903-part1


They were, so I deleted them.
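
If anyone repeats this, rmdir is safer than rm -r for removing the old mountpoints, since it refuses to delete a non-empty directory:

sudo rmdir /srv/dev-disk-by-id-ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T1134175-part1

sudo rmdir /srv/dev-disk-by-id-ata-WDC_WD30EFRX-68AX9N0_WD-WCC1T1152903-part1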


• It is likely that your manually defined mountpoints inside the openmediavault stanza within fstab will not be permanent and will be lost the next time OMV processes a configuration change.
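
For context, OMV keeps its managed entries between comment markers in fstab, roughly like this (check your own fstab; the exact markers may differ by version):

# >>> [openmediavault]

# ... OMV-managed mount entries live here and are regenerated on configuration changes ...

# <<< [openmediavault]

Anything hand-edited inside that block can be overwritten.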


Look in OMV | Storage | File Systems, enable the mount point column so it is visible, and see what is shown there.


You may also find that drives mounted this way are unavailable in OMV's drop-down selection lists used when creating shares, etc.


  • Thanks gderf,


So far so good: I enabled that mountpoint column (very useful, I never realised it was there) and those drives are using disk-by-label.


Once I was sure this was working as intended, I rebooted and recreated my union/fuse mountpoints with the OMV UI for disks 5-8.


I will be creating shares using the union mountpoint rather than the disks, so I think I'm OK.


• NothingNowhere added the label "solved".
  • Hello,


Sorry for warming up this "old" thread, but I had more or less the same situation with changing mount points this weekend, and it cost me several hours to figure out what happened.


I'm using LUKS and LVM to manage disk space. Earlier, all the LVM disks (BTRFS) were mounted as "disk-by-label-", and I used these mount points for the Docker config environment. After a clean shutdown (I had to move the server), the RAID (mdadm) came up degraded read-only (it turned out to be a loose SATA cable).

After putting in the LUKS password, the LVM disks were mounted automatically (that had never happened before), and only one LVM disk (out of 8) was mounted as "disk-by-id...". Too bad it was the LVM disk with the Docker container configs, so all containers were f****d up and couldn't find their config files anymore.

To fix this (after I found out it was a different mount point and not a broken RAID), I had to edit all Docker stacks, change "disk-by-label..." to "disk-by-id..." for the config folders, update the stack, and restart it. Then it worked again.

Really strange that this happens...

• Creating symlinks might have worked, saving you the trouble of changing all those Docker configs:



    /srv/dev-disk-by-label-.......... -> /srv/dev-disk-by-id-..........
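
A hypothetical sketch, substituting your real label and device ID for the placeholders:

sudo ln -s /srv/dev-disk-by-id-ata-EXAMPLE-SERIAL-part1 /srv/dev-disk-by-label-disk6

The containers then keep using the by-label path while the data actually lives under the by-id mountpoint.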


• Official post

Is there a reason why new btrfs volumes now mount with dev-disk-by-id-ata- instead of dev-disk-by-label-?

    Yes


There are several threads if you search. But some enclosure chipsets were causing issues with ext4 and disk labels, so ext4 partitions (newly created ones) will only mount by ID.


As said, symlinks are the easy way to get around this, and frankly this is one reason I went to symlinks exclusively.

• gderf: Yes, I agree. It would work with symlinks, for sure! I considered that after I changed the config files. :/


But my question is why just one LVM disk changed its mount point (all on the RAID are BTRFS) while the rest are still "disk-by-label-". Wouldn't it make sense for all disks to change to "disk-by-id-" if there were a background process causing this change?


KM0201: Thanks for the hint! I will search the forum and maybe find some more information about this behavior.

  • Yes


There are several threads if you search. But some enclosure chipsets were causing issues with ext4 and disk labels, so ext4 partitions (newly created ones) will only mount by ID.


As said, symlinks are the easy way to get around this, and frankly this is one reason I went to symlinks exclusively.

Thanks for your feedback. But all I could find was that BTRFS volumes should still mount by label. They don't, so I was wondering if I missed something.

• Official post

Thanks for your feedback. But all I could find was that BTRFS volumes should still mount by label. They don't, so I was wondering if I missed something.

As far as I know, the problem only affected ext4, and btrfs should be OK, but who knows if that changed. Sorry, I confused your post with one of the earlier ones and did not realize you were using btrfs.
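
If it helps with debugging, btrfs labels can be checked directly with btrfs-progs; with no new label given, this prints the current one (replace /dev/sdX1 with your device or its mountpoint):

sudo btrfs filesystem label /dev/sdX1

If a label shows up but the volume still mounts by ID, the fstab entry OMV generated is the place to look rather than the filesystem itself.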
