"Missing" RAID filesystem

  • I've misunderstood your previous question. Take a look:


  • Is it advisable to downgrade to an earlier kernel, as the issue started with 4.13?

    This should work around the problem, according to the bug reports that took a deeper look. But of course it would be better if the problem were identified and fixed in recent kernel versions, so the fix can be backported to 4.13 later.
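
    For anyone who wants to try the downgrade, a rough sketch for Debian-based OMV; the 4.9 package name below is only an example and depends on what your mirror actually carries:

    # List the kernels currently installed
    dpkg --list 'linux-image*'

    # Install an older kernel series, e.g. Debian's 4.9 (example package name)
    apt-get install linux-image-4.9.0-8-amd64

    # In /etc/default/grub set GRUB_DEFAULT=saved and GRUB_SAVEDEFAULT=true,
    # so the kernel you pick in the boot menu sticks across reboots, then:
    update-grub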


    Threads/reports to watch:

    Besides that, I've no idea whether a workaround in OMV is possible (not relying on blkid output; though if I understood the bug reports correctly, the problem is more severe than just a cosmetic issue with one tool).
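
    A quick cross-check anyone can run: mdadm reads the md superblock itself and should still see the array even when blkid comes back empty (the device name is an example):

    # Low-level filesystem probe; on affected kernels this may print nothing
    blkid -p /dev/md0

    # Queries the RAID metadata directly, unaffected by the probing regression
    mdadm --detail /dev/md0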

  • Another victim of WD20EFRX and kernel 4.14 here. My RAID1 has become invisible in the File Systems menu.


    Edit: I've downgraded the kernel to 4.9. I'm still having the issue.

  • Just upgraded last night from a stable 3.x build (running for a LONG time with no issues) to 4.1.0.1.


    As others have stated, my md0 array is not mounted or recognized.


    It comprises four WDC-WD30EFRX-68E drives.


    Output of udevadm info --query=property --name=/dev/md0:


    DEVLINKS=/dev/disk/by-id/md-uuid-de9ec887:074a453e:f6ad3e13:8e8d12a6 /dev/disk/by-id/md-name-OMV2:HULK
    DEVNAME=/dev/md0
    DEVPATH=/devices/virtual/block/md0
    DEVTYPE=disk
    MAJOR=9
    MD_DEVICES=4
    MD_DEVICE_sdb_DEV=/dev/sdb
    MD_DEVICE_sdb_ROLE=0
    MD_DEVICE_sdc_DEV=/dev/sdc
    MD_DEVICE_sdc_ROLE=1
    MD_DEVICE_sdd_DEV=/dev/sdd
    MD_DEVICE_sdd_ROLE=2
    MD_DEVICE_sde_DEV=/dev/sde
    MD_DEVICE_sde_ROLE=3
    MD_LEVEL=raid5
    MD_METADATA=1.2
    MD_NAME=OMV2:HULK
    MD_UUID=de9ec887:074a453e:f6ad3e13:8e8d12a6
    MINOR=0
    SUBSYSTEM=block
    SYSTEMD_WANTS=mdmonitor.service
    TAGS=:systemd:
    USEC_INITIALIZED=6271534
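
    Worth noting: the output above contains no ID_FS_* keys (ID_FS_TYPE, ID_FS_UUID, ...), which is what OMV would normally read to detect the filesystem, and that fits the blkid probing failure discussed earlier. Two independent, read-only ways to peek at what is actually on the device (commands are suggestions):

    # Identify the filesystem from the first blocks of the device
    file -s /dev/md0

    # List on-disk signatures without modifying anything
    wipefs /dev/md0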



    Output of lsblk:


    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 37.3G 0 disk
    |-sda1 8:1 0 35.7G 0 part /
    |-sda2 8:2 0 1K 0 part
    `-sda5 8:5 0 1.6G 0 part [SWAP]
    sdb 8:16 0 2.7T 0 disk
    `-md0 9:0 0 8.2T 0 raid5
    sdc 8:32 0 2.7T 0 disk
    `-md0 9:0 0 8.2T 0 raid5
    sdd 8:48 0 2.7T 0 disk
    `-md0 9:0 0 8.2T 0 raid5
    sde 8:64 0 2.7T 0 disk
    `-md0 9:0 0 8.2T 0 raid5



    Has anyone come up with a fix for this?


    The information appears to still be there, and I do not want to risk losing everything. Any thoughts?


    Results from "fsck /dev/md0":
    fsck from util-linux 2.29.2
    e2fsck 1.43.4 (31-Jan-2017)
    HULK: clean, 448820/274702336 files, 554503719/2197601280 blocks
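
    Since e2fsck reports the filesystem clean, one low-risk way to confirm the data is intact before changing anything is a read-only mount; a minimal sketch (the mount point is an example, and nothing here writes to the array):

    mkdir -p /mnt/hulk
    mount -o ro /dev/md0 /mnt/hulk
    ls /mnt/hulk
    umount /mnt/hulk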


    TIA


    George.

  • If I reinstalled 3.x from scratch, do you think it would re-recognize the old RAID array (md0)? Or does anyone have any thoughts on how to fix this? Really getting desperate to get my files back.


    Thanks


    George

    • Official post

    You have a backup, right? I would try a reinstall with the drives in place. Be sure to let the install reboot on its own; don't force it. It seems there are scripts running that, if they don't finish, can cause random problems. If the array doesn't show up on its own afterwards, see the sketch below.
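
    A manual re-scan of the RAID metadata on the member disks looks roughly like this (a generic mdadm sketch, not OMV-specific):

    # Look for existing arrays on the member disks
    mdadm --examine --scan

    # Assemble whatever was found
    mdadm --assemble --scan

    # Optionally persist the result so it assembles at boot
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u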


    If you don't have space to do a backup and it is RAID 1, you should be able to do the install with only one drive in the machine. Once it is up and working, you can add the other drive back; the rough sequence is sketched below.
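
    For that single-drive route, roughly (device names are examples; this assumes a two-disk mirror):

    # Start the mirror degraded, with only one member present
    mdadm --assemble --run /dev/md0 /dev/sdb

    # Later, add the second disk back; mdadm resyncs it automatically
    mdadm --manage /dev/md0 --add /dev/sdc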


    Good luck

  • It is RAID 5 with 4 drives. I do have a backup (I have this synced to another NAS for most of the files, but not all; I know, bad planning on my part).


    I reinstalled with version 3.x. The array was still there; I re-created the SMB shares, added them as shared folders, and can access everything.


    Thanks
