File system suddenly disappeared

  • Guys, suddenly while streaming a movie everything stopped and the file system has disappeared. The HDDs are still showing in the DISKS section and under DEVICES...

    But they are not showing under SOFTWARE RAID, even when trying to CREATE.

    In File Systems it shows as MISSING.

    Any help would be appreciated to understand what is happening and how to solve it please.

    OMV6 6.4.7-1

    Code
    Nov 15 21:51:26 omv6 monit[1235]: 'mountpoint_srv_dev-disk-by-uuid-3E3ECF883ECF37A3' status failed (1) -- /srv/dev-disk-by-uuid-3E3ECF883ECF37A3 is not a mountpoint
    Nov 15 21:51:26 omv6 monit[1235]: 'mountpoint_srv_dev-disk-by-uuid-3E3ECF883ECF37A3' status failed (1) -- /srv/dev-disk-by-uuid-3E3ECF883ECF37A3 is not a mountpoint
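    (For reference, whether that path is actually mounted can be checked directly from the CLI; a minimal sketch, using the same UUID path as in the log above:)

    Code
    # exit code 0 means the path is an active mountpoint
    mountpoint /srv/dev-disk-by-uuid-3E3ECF883ECF37A3
    # or show the mount details, if any
    findmnt /srv/dev-disk-by-uuid-3E3ECF883ECF37A3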



    I have managed to unmount the drives that were showing as missing... but when trying to mount them again they do not show in the mount window. However, if I choose to CREATE a new file system, all 3x HDDs show up 😞. Have I lost all my data then? I guess it will wipe everything if I choose to CREATE a new file system.

    How can 3x HDDs lose their file system at the same time? Could the RAID card be the problem?


    Thank you

  • How can 3x HDDs lose their file system at the same time? Could the RAID card be the problem?

    What do you mean by RAID?

    The drives have different FS on them: ext4 and NTFS (bad, but yeah...).


    Post each output inside CODE boxes (not pictures) of:

    blkid

    lsblk

    cat /etc/fstab

    sudo omv-showkey fstab

    sudo omv-showkey mntent

    mount -a

  • It's been RAID5 for the past 2 years, with 3x HDDs with an ext4 file system. Suddenly everything went wrong while watching a movie using EMBY. I was surprised to see NTFS in the File Systems window... like WHATTTT?

    As requested... thanx

    Code
    root@omv6:~# blkid
    /dev/sdd1: UUID="0CAB-3F74" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="40269dac-cd9c-4cc5-b2b2-162d4cde2a03"
    /dev/sdd2: UUID="08741bf8-2513-4bbd-b6bb-ed02780efe40" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0c824677-61af-4e75-b777-55744d9904f9"
    /dev/sdd3: UUID="fb7d6bb7-0d3e-4cb2-b18a-0c350a82070e" TYPE="swap" PARTUUID="dd77bc96-405d-438f-8927-6f54f7df4181"
    /dev/sdc: UUID="d5ebc81d-a902-7e4a-54da-4f64ecfefb6d" UUID_SUB="426e5355-fce1-6172-7aa2-3c447450809e" LABEL="openmediavault.local:myraid" TYPE="linux_raid_member"
    /dev/sda: UUID="d5ebc81d-a902-7e4a-54da-4f64ecfefb6d" UUID_SUB="57cd1ef8-85b1-054a-777c-7b9f251045d6" LABEL="openmediavault.local:myraid" TYPE="linux_raid_member"
    /dev/sdb1: PARTUUID="04836865-407d-4f6c-943e-762d5969fd14"
    root@omv6:~#
  • You copy/pasted the commands wrong.


    Just type lsblk on the CLI, with no line break.


    Same with mount -a
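    i.e. each command on its own line at the prompt, nothing else:

    Code
    root@omv6:~# lsblk
    root@omv6:~# mount -a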

    • Official Post

    could the RAID card be the problem?

    It could be. Please give us hardware details: where are the hard drives connected?

  • Sorry... learning.

    mount -a does nothing.

    You copy/pasted the commands wrong.


    Just type lsblk on the CLI, with no line break.


    Same with mount -a

    Code
    root@omv6:~# mount -a
    root@omv6:~#
  • Meguinness

    According to the outputs, one of the drives (sdb) is simply not showing as a Linux RAID member, since it's showing a partition.

    sda && sdc are OK (as far as RAID identification goes).


    As chente said, hardware details are needed, and also: how was the RAID created? Neither showkey command nor fstab shows any reference to it.
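    (As a starting point for sdb specifically, something like the following would show whether any RAID metadata survives on it; a sketch, not part of the original request:)

    Code
    # check sdb itself for an mdadm superblock
    mdadm --examine /dev/sdb
    # and list its partition table
    fdisk -l /dev/sdb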


    Since this is RAID, I'll ask geaves for some input, but you can start by posting (in CODE boxes) the outputs of:

    • cat /proc/mdstat
    • fdisk -l | grep "Disk "
    • cat /etc/mdadm/mdadm.conf
    • mdadm --detail --scan --verbose


    Degraded or missing raid array questions - RAID - openmediavault

  • Thanx... as requested.

    The RAID was created using the RAID setup in OMV6 on a different system, but with the same RAID card. I then built a new NAS, moved the HDDs across and mounted them, and the RAID was automatically done, if I remember correctly. I only had an HDD failing 6 months ago and replaced it with a new HDD by formatting it to ext4... then mounted it and added it to the RAID.

    The 3x drives are attached to a Fujitsu D2607 IT P20 RAID card. I added a spare HDD to check if I was able to create a file system and then mount it... and it worked OK, so I believe the card is working.
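    (For context: adding a replacement disk to an existing mdadm array is normally done against the raw device rather than a freshly formatted one; a minimal sketch, where /dev/sdX is an illustrative device name:)

    Code
    # illustrative only: /dev/sdX stands for the replacement disk
    mdadm --manage /dev/md127 --add /dev/sdX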

    Code
    root@omv6:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sda[3](S) sdc[1](S)
          11720784048 blocks super 1.2
    
    unused devices: <none>
    Code
    root@omv6:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 num-devices=2 metadata=1.2 name=openmediavault.local:myraid UUID=d5ebc81d:a9027e4a:54da4f64:ecfefb6d
       devices=/dev/sda,/dev/sdc
    root@omv6:~#
  • Meguinness

    Don't do anything else until someone (geaves, sorry for constantly poking you) gives you other instructions.


    I've reached the limit of my knowledge on this matter.

    • Official Post

    The array is inactive. Run:


    mdadm --stop /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[ac]


    This should rebuild the array in a clean/degraded state. What's odd here is that blkid does not see /dev/sdb, fstab shows no reference to an array, and it appears some of the output is missing.


    Initially it will resync the array; when it's finished, reboot and come back.
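    The resync progress can be followed with something like:

    Code
    watch cat /proc/mdstat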

  • Hi, I have entered those commands as you requested, geaves... no luck. What next? Thank you.

    Code
    root@omv6:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    root@omv6:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[ac]
    mdadm: looking for devices for /dev/md127
    mdadm: Cannot assemble mbr metadata on /dev/sda
    mdadm: /dev/sda has no superblock - assembly aborted
    root@omv6:~#
    • Official Post

    Nothing; the array appears to be toast. So, assuming the 3x 6TB drives are/were part of the array, post the output of the following for each drive:


    mdadm --examine /dev/sd? (replace the ? with each drive's reference, e.g. mdadm --examine /dev/sda, etc.)
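    (To cover the three data drives in one pass, a small shell sketch:)

    Code
    for d in a b c; do mdadm --examine /dev/sd$d; done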

  • Nothing; the array appears to be toast. So, assuming the 3x 6TB drives are/were part of the array, post the output of the following for each drive:


    mdadm --examine /dev/sd? (replace the ? with each drive's reference, e.g. mdadm --examine /dev/sda, etc.)

    Here you go, as requested. thanx geaves

    • Official Post

    This still makes no sense!! The --examine output says there should be 4 devices in that RAID5, and you've run the --examine on /dev/sdb and /dev/sdd twice!!


    Do not shut down or restart the machine; the drive references can change. Post the output again of the following:


    fdisk -l | grep "Disk "


    blkid


    If the system fails to locate at least 3 of those drives and mdadm fails to rebuild the array with 3 of the 4 drives, the array is toast. RAID5 allows for ONE drive failure only.

  • geaves

    I have 3x 6TB HDDs for data and 1x SSD for the OS.

    I only replaced 1x HDD after it failed 4 months ago and added it into the RAID5. thanx

    • Official Post

    OK, I don't know why you ran mdadm --examine again, but let me clarify:


    This line -> Raid Devices : 4 from mdadm --examine means the array is looking for/wanting 4 drives.


    blkid can only find 2


    fdisk and your image can find 3x 6TB, so unless those drives can be assembled without error in a clean/degraded state, there's nothing to be done.
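    (The relevant line can be pulled straight out of the --examine output, e.g.:)

    Code
    mdadm --examine /dev/sda | grep "Raid Devices"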


    post the output of mdadm --detail /dev/md127

  • here it is

    • Offizieller Beitrag

    That's the output I expected; there's nothing to be done. The output from --examine wants 4 devices, and you only have 2 that mdadm recognises as being part of an array. The array could have been assembled with 3 out of 4 drives, as RAID5 allows one drive failure.
