OMV remembers 'drive letters' during installation and creates ghost file systems afterwards

  • Hi everyone,


    yesterday I took the step of migrating my home server from OMV4 to OMV5.


    I wanted to start over with a clean system, so I decided to wipe the system disk. First I shut down the old system, then detached all drives except the system SSD and went through the setup process.
    After setup and installing some add-ons I reattached my drives.


    To my relief, my old 4-disk RAID5 (/dev/md127) and my second SSD (/dev/sde) were recognized immediately. So I mounted them, reconfigured my shares, and now everything works as before.


    But one thing makes me feel very uncomfortable: it seems like one of my RAID HDDs is also appearing as a missing file system (/dev/sdb1).



    I have no idea what went wrong here. In fact there shouldn't be any partitions on /dev/sdb at all, because on my old OMV4 I created the RAID across the whole disks /dev/sda - /dev/sdd, without any partitions.
    Running fdisk -l also doesn't show any partitions on /dev/sdb.
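    For reference, a couple of read-only checks like these (device names as above; nothing is written) should show whether the kernel itself sees any partition on that disk:

    Code
    # list block devices and partitions exactly as the kernel sees them
    lsblk /dev/sdb
    # low-level probe for on-disk signatures, read-only
    blkid -p /dev/sdb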



    Here is my current OMV5 File Systems view



    and here, for comparison, an old screenshot from my previous OMV4 system



    As you can see, the 'device' of my RAID5 also changed from /dev/md0 to /dev/md127 (and my swap partition is listed under File Systems now - why?), but that shouldn't be the trigger for this strange behavior.
    I also tested what happens when I start OMV5 with the RAID5 detached: without the RAID drives the mysterious /dev/sdb1 entry vanishes as well.



    Does anyone have an idea what the problem is here?


    Thanks for your help!


    /edit: Oh, sorry. I must have just been blind. :thumbup:
    Could some mod please move this thread to the RAID section? Thanks!

  • Maybe answering this list of questions will help you for a start and could already clear a few things up: Degraded or missing raid array questions

    @cabrio_leo Thanks for the link. I'll go and gather that information.


    1) cat /proc/mdstat:

    Code
    root@myserver:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active raid5 sda[0] sdd[3] sdc[2] sdb[1]
          11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          [>....................]  check =  0.0% (1056320/3906887168) finish=369.7min speed=176053K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>


    2) blkid:

    Code
    root@myserver:~# blkid
    /dev/sda: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="7e3040a4-847d-74a5-f88f-5143f8c1e1ee" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/md127: LABEL="daten" UUID="93f6618c-6119-4c35-953b-67b0f175db1a" TYPE="ext4"
    /dev/sdb: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="ff046fc5-6b2d-56fb-7c80-d4a392c7fd24" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sdc: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="fd82acbc-3708-cd65-6cf9-86293f1838bb" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sdd: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="53b1fd9f-c3c6-ce77-294f-78ba5951a55e" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sde1: LABEL="ssd" UUID="923948de-d9a1-4262-b08a-b913fedd8e15" TYPE="ext4" PARTUUID="235f252d-fbb3-4102-9382-ea562d1c7e8b"
    /dev/sdf1: UUID="ea9995f6-9084-43b3-90d2-17bcb592ed50" TYPE="ext4" PARTUUID="1885578f-01"
    /dev/sdf5: UUID="0353bed2-8aa8-4524-b817-b9173f3b55a0" TYPE="swap" PARTUUID="1885578f-05"


    3) fdisk -l | grep "Disk "


    4) cat /etc/mdadm/mdadm.conf


    5) mdadm --detail --scan --verbose

    Code
    root@myserver:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/myserver:daten level=raid5 num-devices=4 metadata=1.2 name=myserver:daten UUID=c22bdeb3:7fddf2eb:641b3e42:597f1d04
       devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd


    6) Array details page from OMV5 RAID Management tab says:


    This all looks good to me, but the 'ghost device' /dev/sdb1 is still present and marked as missing.
    Any ideas? :whistling:
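    For the record, the per-device detail that the RAID Management tab shows can also be pulled on the CLI; this is just a standard read-only mdadm query, nothing OMV-specific:

    Code
    # read-only: array state, layout and the state of each member device
    mdadm --detail /dev/md127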


    @votdev I don't have a GitHub account, but this might be an OMV GUI bug. Could you please have a look at it? I'm willing to provide more terminal output if needed.


    /edit: The only notable thing is that mdadm.conf seems to be rather empty, but I have no idea whether that is still the right place to look. ( @ryecoaaron 's posting was meant for OMV 1.0.) Did I perhaps miss some important step in my RAID migration?
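    In case the array definition really is missing from mdadm.conf: the generic Debian way to persist it would look roughly like the sketch below. OMV normally maintains this file itself, so treat this only as a sketch of the manual route.

    Code
    # keep a backup of the current config first
    cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
    # append the currently assembled array definition
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # rebuild the initramfs so the array is assembled early at boot
    update-initramfs -u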

    • Official post

    As you can see, the 'device' of my RAID5 also changed from /dev/md0 to /dev/md127

    Nothing to worry about, it's assigned a new raid reference.


    and my SWAP partition is listed under File Systems now

    It's the same on my test machine.


    It seems like one of my RAID HDDs is also appearing as a missing file system (/dev/sdb1)

    That is odd and it must be a first :)


    There are two things you could do:


    1. SSH into OMV and run wipefs -n /dev/sdb. This will not wipe the drive, but it will give you information on the file systems on it; it may be possible to remove whatever is causing this.


    2. Remove /dev/sdb from the array, wipe it as if it were a new drive and re-add it; all of that can be done from the GUI.
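    For completeness, the CLI equivalent of option 2 would be roughly the sketch below. This is only a sketch; the GUI route is the supported one, the array runs degraded during the rebuild, and the wipefs -a step is destructive for /dev/sdb, so make sure the other three disks are healthy first.

    Code
    # mark the member as failed and remove it from the array
    mdadm /dev/md127 --fail /dev/sdb --remove /dev/sdb
    # wipe all raid / file system signatures from the disk (destructive!)
    wipefs -a /dev/sdb
    # add it back; mdadm rebuilds onto it as if it were a new drive
    mdadm /dev/md127 --add /dev/sdb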

  • @geaves Thanks for your reply!

    Nothing to worry about, it's assigned a new raid reference.

    I also don't think so, but I wanted to mention it because it's different from before.


    It's the same on my test machine.

    Good to know, although I still don't think displaying swap under File Systems makes sense, because a swap partition by definition doesn't contain an actual file system.


    That is odd and it must be a first :)
    There are two things you could do:


    1. SSH into OMV and run wipefs -n /dev/sdb. This will not wipe the drive, but it will give you information on the file systems on it; it may be possible to remove whatever is causing this.


    2. Remove /dev/sdb from the array, wipe it as if it were a new drive and re-add it; all of that can be done from the GUI.


    First I have to say that one reason for not deleting the RAID and starting over with empty disks is that it is largely filled with 'medium'-important data, and right now I don't have the capacity for a full backup. (My really important stuff is of course properly backed up twice.) Losing those terabytes of data would still be a shame, but re-acquiring it would probably cost about the same 200 € a hard disk of that size would cost. So let's say I'd like to minimize the risk of losing those 200 euros. ;)


    1. I had a look at the wipefs man page, and the -n option seems to be ambiguous. 8|

    Quote

    -n, --noheadings Do not print a header line.

    Quote

    -n, --no-act Causes everything to be done except for the write() call.

    Just to be sure: You meant --no-act, right?


    Anyway, the risk of there being any 'old' file system signatures on my second drive is near zero. I bought all four drives manufacturer-sealed (last year or so) and didn't do any experiments before building the RAID5 array in the OMV4 GUI.



    2. I'd keep degrading the array as a last resort, to minimize the risk of data loss.



    The interesting part is: I didn't manage to display the 'ghost partition' in the terminal at all. So where does the File Systems tab of OMV5 get its data from?
    @votdev Could you (or someone else who knows the code) please point this out? Thanks a lot!

  • @Mr Smile it looks as if your last post was in the mod queue, as it wasn't there when I replied.

    Arrrg, mod queue again. :cursing:
    I just wanted to add a correction ... 8|


    /edit: @votdev automatically putting previously published posts back into the mod queue because a few tiny changes were added is annoying for us users and also produces unnecessary work for the mods. Please consider disabling this automatism!

  • Ok, you had a post I was reading and it just disappeared :)


    But to clarify: wipefs -n will do nothing other than display the partition and file system information on that drive.


    Here is what wipefs --no-act /dev/sdb said:

    Code
    root@myserver:~# wipefs --no-act /dev/sdb
    DEVICE OFFSET TYPE              UUID                                 LABEL
    sdb    0x1000 linux_raid_member c22bdeb3-7fdd-f2eb-641b-3e42597f1d04 myserver:daten


    The output for /dev/sda, /dev/sdc and /dev/sdd looks EXACTLY the same (except for the device name)!


    So what to do? I still wasn't able to find any signs of a /dev/sdb1 partition/file system in the terminal ... ?(
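    For reference, a few more read-only places where the kernel would expose a real sdb1 if it existed; if these all come back empty, the entry only exists in OMV's own database:

    Code
    # partitions the kernel currently knows about
    grep sdb /proc/partitions
    # device nodes udev has created for sdb
    ls -l /dev/sdb*
    # tree view including file system labels and UUIDs
    lsblk -f /dev/sdb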

    • Official post

    So what to do? I still wasn't able to find any signs of a /dev/sdb1 partition/file system in the terminal

    Don't know. Does OMV5 have/use omv-firstaid from the CLI? If it does, you could try option 10 -> Clear web control panel cache.


    BTW, --no-act is the same as -n :) What I was hoping was that it would display something spurious.
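    For anyone following along, that cache reset would be run from an SSH session on the host; the menu numbering may differ between versions, so pick the entry by its label:

    Code
    # interactive first-aid menu shipped with openmediavault
    omv-firstaid
    # then choose the "Clear web control panel cache" entry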

  • Thanks. Unfortunately clearing the cache didn't help either. :(


    And regarding the option: the wipefs man page also says that --noheadings is the same as -n.
    It's the first time I've consciously noticed such an ambiguity. So does using -n give me a 50% chance of either option? :huh::D


    But I found a real trace of what's going on here under System Information / Report:



    ================================================================================
    = Static information about the file systems
    ================================================================================
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdb1 during installation
    UUID=ea9995f6-9084-43b3-90d2-17bcb592ed50 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdb5 during installation
    UUID=0353bed2-8aa8-4524-b817-b9173f3b55a0 none swap sw 0 0
    /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/ssd /srv/dev-disk-by-label-ssd ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/daten /srv/dev-disk-by-label-daten ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    # <<< [openmediavault]



    It says that the root file system was on /dev/sdb1 during installation, and that is correct! When I installed OMV5 I had only two drives connected to the system:
    my small data SSD (labelled 'ssd') as /dev/sda and the tiny system SSD I installed OMV on (/dev/sdb).
    After setup I attached the RAID drives and they became /dev/sda - /dev/sdd, so the other two drives were pushed to e and f.
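    This also explains why the system still boots fine: fstab refers to the UUIDs, not the device names, so the reshuffling is harmless for mounting. A quick read-only way to check where a UUID lives now:

    Code
    # resolve the root UUID from fstab to its current device node
    blkid -U ea9995f6-9084-43b3-90d2-17bcb592ed50
    # or simply show which device is currently mounted as /
    findmnt -n -o SOURCE /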


    The reason for having the system drive on the least prominent letter is that it is an NVMe SSD living on an expansion card.


    The question is: how can I safely make OMV5 forget this? In the File Systems tab there is a Delete button, but I'm a bit concerned that deleting the phantom /dev/sdb1 could somehow break my real /dev/sdb.
    But when I boot up without the RAID again, the ghost /dev/sdb1 entry is not there. So what should I do now?


    /edit: I detached my RAID drives for the installation because many people here recommend that. Is there anything against reinstalling OMV with ALL 6 drives connected and just being very careful to pick the right drive for root?
    That way OMV would remember the right 'drive letter' for root from the start.


    @votdev No matter how I end up solving this problem for myself: this behavior is definitely a bug that should be fixed somehow (maybe in OMV4 as well)!

  • /edit: @votdev automatically putting previously published posts back into the mod queue because a few tiny changes were added is annoying for us users and also produces unnecessary work for the mods. Please consider disabling this automatism!

    That's probably caused by the spam filter of the forum software and has been brought up many times here in the forum. Apparently it can't be changed, though. So far I have noticed two causes that can trigger it: long posts, and editing the text again right after posting. Although the latter often works fine, too.


    • Official post

    So what to do now?

    This could be above my pay grade :) but sdb1 should not be displayed, because of the # (it is only a comment). I've looked at my own fstab on 4 and 5, and they show the same information as yours.


    You could search /etc/openmediavault/config.xml for mntent and see if there is anything in there that points to sdb1. The alternative is just to delete it, as it's a partition, not a drive, and OMV's RAID uses whole drives, not partitions on the drive.
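    Building on that, a read-only way to search the config from the shell before deleting anything in the GUI (plain grep, nothing OMV-specific):

    Code
    # any mount entry in OMV's database still referencing sdb1?
    grep -n -i "sdb1" /etc/openmediavault/config.xml
    # or dump the <mntent> blocks for inspection
    grep -n -A 10 "<mntent>" /etc/openmediavault/config.xml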
