Filesystem Mountpoint Missing

  • Hi all.


    Looking for some help... maybe I made a noob mistake, but one of my filesystems is showing up as Missing in the Filesystems section of the web interface. If I use the df command over SSH, it's missing there too.


    The filesystem is made up of 3 drives in a RAID array. All three drives are showing up as present in the web interface.


    I don't have the option to mount the filesystem in the web interface.


    The missing file system is called /srv/dev-disk-by-label-Storage.


    Not sure if it's relevant, but in the syslog, which I've pasted here, it does say:


    Lookup for '/srv/dev-disk-by-label-Storage' filesystem failed -- not found in /proc/self/mounts


    Here are the contents of /proc/self/mounts.


    And here are the results of omv-salt deploy run fstab.
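
    (For anyone wanting to repeat the check, a quick way to see whether the kernel has that mountpoint at all would be something like the following; this is just a sketch using standard tools, not commands from the original post.)

    Code
    # print mount details if the path is mounted; exits non-zero if it is not
    findmnt /srv/dev-disk-by-label-Storage
    # or search the kernel's mount table for the label directly
    grep dev-disk-by-label-Storage /proc/self/mounts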


    Can anyone help?

  • KM0201

    Approved the thread.
  • Code
    root@omv-server:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : inactive sdd[1](S) sdc[2](S) sdb[0](S)
          17581174536 blocks super 1.2
           
    unused devices: <none>
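
    (For context: the (S) after each member in that output means the devices were picked up as spares only, which is why the array shows as inactive. One way to look at the member superblocks directly, as a sketch rather than anything actually run in this thread, would be:)

    Code
    # print each member's md superblock so array state and event counts can be compared
    mdadm --examine /dev/sdb /dev/sdc /dev/sdd
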
    Code
    root@omv-server:~# fdisk -l | grep "Disk "
    Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
    Disk model: WDC WD6001FSYZ-0
    Disk /dev/sda: 119.2 GiB, 128035676160 bytes, 250069680 sectors
    Disk model: LITEONIT LCS-128
    Disk identifier: 0x9685734b
    Disk /dev/sdd: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
    Disk model: WDC WD6001FSYZ-0
    Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
    Disk model: WDC WD6001FSYZ-0
    Code
    root@omv-server:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 num-devices=3 metadata=1.2 name=pool-ipv6-pd-omv:Storage UUID=7686503f:2d11c7ae:48f9f41b:39240c72
       devices=/dev/sdb,/dev/sdc,/dev/sdd

    Post the type and quantity of drives being used as well.


    3 hard drives for storage (RAID): WD 6TB 3.5" Re 7200 RPM SATA III 128 MB Cache Bulk/OEM Enterprise Hard Drive (WD6001FSYZ)


    1 SSD for the OS (one partition) and for storing downloads before moving them to the RAID (second partition)


    Post what happened for the array to stop working? Reboot? Power loss?


    I was trying to free up disk space on the OS partition as I couldn't log into the web interface. Here are the commands I ran.

  • This -> rootfs' space usage 86.3% matches resource limit [space usage > 85.0%] is also a problem and is probably linked to Docker. Also, what hardware are you using?

    Thanks, yes, something happened a few weeks ago where I had a power cut and my OS drive filled up almost overnight. I was at about 50%, but then it jumped up to 100%. Using docker prune commands I got it down to 86.3% so I could log in, but then the issues with the RAID occurred.
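
    (For reference, the usual Docker cleanup commands are along these lines; the exact commands run here weren't posted, so this is only a sketch.)

    Code
    # remove stopped containers, unused networks, dangling images and build cache
    docker system prune
    # more aggressive: also remove all unused images and unused volumes
    docker system prune --all --volumes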


    I am using a Dell Precision workstation with an i5 processor and 4 GB RAM.


    3 hard drives for storage (RAID): WD 6TB 3.5" Re 7200 RPM SATA III 128 MB Cache Bulk/OEM Enterprise Hard Drive (WD6001FSYZ)


    1 SSD for the OS (one partition) and for storing downloads before moving them to the RAID (second partition)

    • Official Post

    I am using a Dell Precision workstation with an i5 processor and 4 GB RAM.

    :thumbup:


    The RAID is fine; it's just displaying as inactive. Post the output from blkid and mdadm --detail /dev/md127.


    Was this a clean install of OMV5 or was it an upgrade from 4?

  • :thumbup:


    The RAID is fine; it's just displaying as inactive. Post the output from blkid and mdadm --detail /dev/md127.


    Was this a clean install of OMV5 or was it an upgrade from 4?

    Code
    root@omv-server:~# blkid
    /dev/sdb: UUID="7686503f-2d11-c7ae-48f9-f41b39240c72" UUID_SUB="177c655c-90be-6e57-305d-a6e1477e4c24" LABEL="pool-ipv6-pd-omv:Storage" TYPE="linux_raid_member"
    /dev/sda1: UUID="b470e674-d414-4ae1-b471-7c40a7b544a1" TYPE="ext4" PARTUUID="9685734b-01"
    /dev/sda3: UUID="3a9d6220-5d48-4efd-9b36-a0996bb194dd" TYPE="ext4" PARTUUID="9685734b-03"
    /dev/sda5: UUID="b78cf05b-c6c7-4a13-b2e3-1ceb42b3689d" TYPE="swap" PARTUUID="9685734b-05"
    /dev/sdd: UUID="7686503f-2d11-c7ae-48f9-f41b39240c72" UUID_SUB="a675c434-10e8-4ed7-015d-c6e3341c19b7" LABEL="pool-ipv6-pd-omv:Storage" TYPE="linux_raid_member"
    /dev/sdc: UUID="7686503f-2d11-c7ae-48f9-f41b39240c72" UUID_SUB="648cee8c-3f1e-fe3e-ff96-36149a8eae9d" LABEL="pool-ipv6-pd-omv:Storage" TYPE="linux_raid_member"

    Sorry, I ran blkid in my last message, but forgot to post it.


    This was a clean install.

    • Official Post

    This was a clean install.

    Interesting, as there is no reference to the RAID in mdadm.conf. Let's get it running first, then sort that out.


    mdadm --stop /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcd]


    That should bring the RAID back up; you'll have to wait for the rebuild.
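
    (A quick sanity check after the assemble, assuming it succeeds, is just:)

    Code
    # confirm the array is listed as active and all three members are back
    cat /proc/mdstat
    mdadm --detail /dev/md127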

  • I ran this, and in Filesystems in the web interface I saw the new filesystem, /dev/md127. It was showing as normal.

    If I do a reboot, it goes back to the old storage filesystem.


    I ran the commands again, and md127 shows as online again. I have the option to mount, but when I do that I get the below:


    Thanks for your ongoing help and patience!


    • Official Post

    I ran this, and in Filesystems in the web interface I saw the new filesystem, /dev/md127. It was showing as normal.

    If I do a reboot, it goes back to the old storage filesystem.

    :cursing: Why reboot? Run those two commands and report back any errors. As I said in my previous post, mdadm.conf contains no reference to the RAID and it should.


    This -> mdadm: added /dev/sdd to /dev/md127 as 1 (possibly out of date) needs correcting, and now there's an issue trying to mount after the reboot due to this -> cannot mount; probably corrupted filesystem on /dev/md127


    Do you have a backup of the data?

  • I rebooted because I am an idiot and didn't understand what I was doing :(


    I hadn't seen the error, and when it showed as online in Filesystems, I thought everything was OK. 100% my fault. Sorry geaves... it's hard to help someone who shoots themselves in the foot.


    I don't easily have a backup of the data, but as a last resort I can get one; it would just take time.

  • What's the output of cat /proc/mdstat and mdadm --detail /dev/md127?


    TBH I'm not sure how to proceed with this, due to the out-of-date error and the filesystem error; I'm not sure what to fix first.

    Code
    root@omv-server:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active (auto-read-only) raid5 sdb[0] sdc[2]
          11720782848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
          bitmap: 18/44 pages [72KB], 65536KB chunk
    
    unused devices: <none>

    Whatever you can do to help is great. At this point I am assuming the data is gone, so you can't do any more damage than I have, and I won't be blaming you for anything.

    • Official Post

    At this point I am assuming the data is gone

    Not necessarily, we live in hope :) Just do not reboot, do not pass go, no get-out-of-jail-free card :)


    See now we have another issue -> md127 : active (auto-read-only) raid5 sdb[0] sdc[2]; it appears that sdd has been removed.


    mdadm --readwrite /dev/md127 then post the output again of cat /proc/mdstat

  • Not necessarily, we live in hope :) Just do not reboot, do not pass go, no get-out-of-jail-free card :)


    See now we have another issue -> md127 : active (auto-read-only) raid5 sdb[0] sdc[2]; it appears that sdd has been removed.


    mdadm --readwrite /dev/md127 then post the output again of cat /proc/mdstat

    Code
    root@omv-server:~# mdadm --readwrite /dev/md127
    root@omv-server:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active raid5 sdb[0] sdc[2]
          11720782848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
          bitmap: 18/44 pages [72KB], 65536KB chunk
    
    unused devices: <none>

    I must not reboot

    I must not reboot

    I must not reboot

    I must not reboot

    I must not reboot

    I must not reboot

    I must not reboot

    I must not reboot

    I must not reboot

    I must not reboot

    • Official Post

    :P OK, the RAID is now active with two out of the three drives.


    Storage -> Disks: select the drive that's been removed (sdd), click Wipe on the menu, choose Short -> OK. That will wipe the drive and make it available. When finished:


    Raid Management -> select the RAID -> Recover on the menu. The dialog should display the wiped drive; select it -> OK. That should add the drive back to the RAID, and it should display as rebuilding. WAIT until it's completed in the GUI, then post cat /proc/mdstat, blkid, mdadm --detail /dev/md127, cat /etc/mdadm/mdadm.conf and cat /etc/fstab.
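
    (For anyone who prefers the shell, a rough command-line equivalent of those GUI steps, assuming the missing member really is /dev/sdd, would be the sketch below; the OMV GUI remains the recommended route.)

    Code
    # clear old signatures from the removed disk (destroys data on that disk only)
    wipefs --all /dev/sdd
    # add it back into the array; md starts rebuilding onto it automatically
    mdadm --add /dev/md127 /dev/sdd
    # watch the rebuild progress
    watch cat /proc/mdstat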

  • :P OK, the RAID is now active with two out of the three drives.


    Storage -> Disks: select the drive that's been removed (sdd), click Wipe on the menu, choose Short -> OK. That will wipe the drive and make it available. When finished:


    Raid Management -> select the RAID -> Recover on the menu. The dialog should display the wiped drive; select it -> OK. That should add the drive back to the RAID, and it should display as rebuilding. WAIT until it's completed in the GUI, then post cat /proc/mdstat, blkid, mdadm --detail /dev/md127, cat /etc/mdadm/mdadm.conf and cat /etc/fstab.

    OK, rebuilding now, but when I click the Apply button at the top of the window, it gives me an error (below). I rebooted a few times to try and fix it :P


    Obviously I did nothing except paste the error here. The rebuild percentage is slowly going up.


    Do I need to do anything?


    • Official Post

    Do I need to do anything?

    :/ The error could be attributed to the fact that it hasn't completed, as the Apply is running update-initramfs, which you would do after the RAID has finished rebuilding.

    The rebuild percentage is slowly going up

    That's difficult to quantify as the drives are 6 TB; it should display the time remaining.
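
    (Regarding that Apply step: on a plain Debian system it boils down to something like the sketch below, recording the array in mdadm.conf and rebuilding the initramfs; in OMV the Apply button should take care of this itself once the rebuild has finished.)

    Code
    # record the assembled array in mdadm.conf so it is found at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # rebuild the initramfs so the boot environment picks up the new config
    update-initramfs -u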

  • :/ The error could be attributed to the fact that it hasn't completed, as the Apply is running update-initramfs, which you would do after the RAID has finished rebuilding.

    That's difficult to quantify as the drives are 6 TB; it should display the time remaining.

    Ya, it's saying about 700 minutes left. So when that's done, I apply the changes?
