Power Outage and Now File system is Missing

  • Hello,


    I had a power outage and now my file system shows as missing. I have backups of the irreplaceable data, but I don't want to go through the hassle of rebuilding all of the dockers if I don't have to. Is there something I can do to mount it or otherwise get it back?


    root@doghouse:/# cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : inactive sda[3](S) sdc[4](S) sdg[5](S) sdh[6](S) sde[0](S) sdd[1](S) sdf[2](S)

    95705763328 blocks super 1.2


    unused devices: <none>

    root@doghouse:/# blkid

    /dev/sdb1: UUID="F5CC-C001" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="2f73bb84-b234-4114-b730-fad1f9be1d7f"

    /dev/sdb2: UUID="5631b6e1-db65-4d46-a8d0-1ed14af3b997" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="2922ecc3-f1d7-4b85-a41f-2e10751e20e0"

    /dev/sdb3: UUID="22e0f47b-39f8-457e-b9ac-eefe41fd6cd9" TYPE="swap" PARTUUID="80f925dc-a85f-4825-bd5b-42df66c717a1"

    /dev/sdf: UUID="48006a0c-b8db-73b6-78e2-d6fd9d123d27" UUID_SUB="62e144e4-ac09-fde3-2db1-2c10da13644e" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sde: UUID="48006a0c-b8db-73b6-78e2-d6fd9d123d27" UUID_SUB="08aee2a4-2459-b7cb-af4c-c1f87dfbc53f" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sda: UUID="48006a0c-b8db-73b6-78e2-d6fd9d123d27" UUID_SUB="8ee512b9-bd7d-b6b0-da58-a016323279b8" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdc: UUID="48006a0c-b8db-73b6-78e2-d6fd9d123d27" UUID_SUB="11627149-214e-8faf-9c0f-8fdc94218b47" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdd: UUID="48006a0c-b8db-73b6-78e2-d6fd9d123d27" UUID_SUB="0ae74afc-69b0-6967-7bbb-f60f22d53861" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdh: UUID="48006a0c-b8db-73b6-78e2-d6fd9d123d27" UUID_SUB="2b255334-6d60-3a7b-dc39-d4f1fde63918" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdg: UUID="48006a0c-b8db-73b6-78e2-d6fd9d123d27" UUID_SUB="d0fff65c-cce2-0149-bf3e-56be3d04eced" LABEL="openmediavault:0" TYPE="linux_raid_member"

    root@doghouse:/# fdisk -l | grep "Disk "

    Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: WDC WD1001FALS-4

    Disk identifier: E5D6E650-EF6A-4075-8099-1516496F692F

    Disk /dev/sdf: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors

    Disk model: ST14000DM001-2JC

    Disk /dev/sde: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors

    Disk model: ST14000DM001-2JC

    Disk /dev/sda: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors

    Disk model: ST14000DM001-2JC

    Disk /dev/sdc: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors

    Disk model: ST14000DM001-2JC

    Disk /dev/sdd: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors

    Disk model: ST14000DM001-2JC

    Disk /dev/sdh: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors

    Disk model: WDC WUH721414AL

    Disk /dev/sdg: 12.73 TiB, 14000519643136 bytes, 27344764928 sectors

    Disk model: WDC WUH721414AL

    root@doghouse:/# cat /etc/mdadm/mdadm.conf

    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions


    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>


    # definitions of existing MD arrays

    ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=48006a0c:b8db73b6:78e2d6fd:9d123d27

    root@doghouse:/# mdadm --detail --scan --verbose

    INACTIVE-ARRAY /dev/md0 num-devices=7 metadata=1.2 name=openmediavault:0 UUID=48006a0c:b8db73b6:78e2d6fd:9d123d27

    devices=/dev/sda,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh

    root@doghouse:/# mount md0

    mount: md0: can't find in /etc/fstab.

    root@doghouse:/# fsck

    fsck from util-linux 2.36.1

    e2fsck 1.46.6 (1-Feb-2023)

    /dev/sdb2 is mounted.

    e2fsck: Cannot continue, aborting.



    root@doghouse:/# fsck md0

    fsck from util-linux 2.36.1

    Usage: fsck.ext4 [-panyrcdfktvDFV] [-b superblock] [-B blocksize]

    [-l|-L bad_blocks_file] [-C fd] [-j external_journal]

    [-E extended-options] [-z undo_file] device


    Emergency help:

    -p Automatic repair (no questions)

    -n Make no changes to the filesystem

    -y Assume "yes" to all questions

    -c Check for bad blocks and add them to the badblock list

    -f Force checking even if filesystem is marked clean

    -v Be verbose

    -b superblock Use alternative superblock

    -B blocksize Force blocksize when looking for superblock

    -j external_journal Set location of the external journal

    -l bad_blocks_file Add to badblocks list

    -L bad_blocks_file Set badblocks list

    -z undo_file Create an undo file

    root@doghouse:/#



  • Code
    root@doghouse:/# mdadm --stop /dev/md0
    mdadm: stopped /dev/md0
    root@doghouse:/# mdadm --assemble --force --verbose /dev/md0 /dev/sd[bc]
    mdadm: looking for devices for /dev/md0
    mdadm: Cannot assemble mbr metadata on /dev/sdb
    mdadm: /dev/sdb has no superblock - assembly aborted
    root@doghouse:/#

    sdb is my "system drive" and not part of the RAID that the file system is supposed to be on.

    I'm confused and barely know what I am doing. It has been running for over a year and things have been great; I don't know what went wrong here.


    Should my command look like this, since my drives are the ones with those letters in RAID md0?

    Code
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[acghedf]
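
    (As a general mdadm sanity check, not OMV-specific: the shell expands /dev/sd[acghedf] alphabetically, so the letter order inside the brackets doesn't matter, and sdb is correctly left out. Before forcing an assembly, one hedged way to confirm which disks actually carry the array's superblock is something like:)

    Code
    # List every device mdadm finds a RAID superblock on, with the array UUID
    mdadm --examine --scan
    # Or inspect a single suspected member in detail
    mdadm --examine /dev/sda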
  • OK... after running that, it shows online, but it's not mounted or something.



  • eubyfied: The RAID label "md0" is somewhat ubiquitous among mdadm users, but it looks like your previously existing RAID was labeled "openmediavault" (per your first post). One of the following two may apply:


    1) The newly mounted RAID array "md0" could be added to OMV by returning to "Storage" => "Software RAID" and mounting it as a new container

    or

    2) Unmount the RAID and repeat your previous steps, substituting "md0" with "openmediavault" (a rough sketch of what that could look like is below)


    Hope this helps!
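
    A rough command-line sketch of option 2, assuming the array is currently assembled as /dev/md0, the original name was openmediavault:0, and the member letters match the blkid output earlier in the thread (a sketch, not a guaranteed recipe):

    Code
    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md/openmediavault /dev/sd[acdefgh]
    cat /proc/mdstat   # the array should now show as active (possibly degraded)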

  • after running that it shows online

    It should display as clean/degraded in raid management

    but not mounted or something

    When an mdadm array becomes inactive and has to be reassembled, it doesn't always mount automatically. Either reboot, and OMV will pick the array up from fstab, or go to Storage -> File Systems, click the 'play' icon -> mount an existing file system; the array should be available in the drop-down. Select it and click Save. (A manual command-line equivalent is sketched below.)
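
    For reference, a manual command-line equivalent (OMV normally handles the mount via fstab and the web UI, so treat this only as a hedged way to verify that the assembled array exposes a filesystem; /mnt/check is an arbitrary example mount point, and the device node may be /dev/md/openmediavault depending on how the array was assembled):

    Code
    blkid /dev/md0                    # should report a TYPE (e.g. ext4) and a filesystem UUID
    mkdir -p /mnt/check               # temporary mount point, example only
    mount -o ro /dev/md0 /mnt/check   # read-only mount for a quick look
    ls /mnt/check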

  • when I run


    Code
    mdadm --assemble --force --verbose /dev/md/openmediavault /dev/sd[acghedf] 

    it says it is clean, degraded in the RAID section.


    In File Systems it says online, but when I select the Play button to mount it, it won't show me a file system.

    The 'Select a file system' drop-down never shows anything.


    do I have to replace the drive that is showing as Bad (sdd) and then tell it to rebuild?
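
    (A hedged way to see exactly which member dropped out and what state the array is in, assuming it was assembled as /dev/md/openmediavault per the command above; smartctl comes from the smartmontools package:)

    Code
    mdadm --detail /dev/md/openmediavault   # shows "State : clean, degraded" and lists each member
    smartctl -a /dev/sdd                    # check the suspect disk's SMART health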

  • do I have to replace the drive that is showing as Bad (sdd) and then tell it to rebuild?

    The array should still be accessible even in a "clean, degraded" state... After mounting, have you created new 'Shared Folders' linking back to the newly re-mounted file system, or verified that existing shared folders still point to the correct one?
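
    If the drive does turn out to need replacing, the generic mdadm procedure looks roughly like the following (not OMV-specific; sdd/sdX are placeholders, and the md device node should be whichever one the array is assembled under, so double-check before pulling anything):

    Code
    mdadm --manage /dev/md0 --fail /dev/sdd     # mark the failing member faulty, if mdadm hasn't already
    mdadm --manage /dev/md0 --remove /dev/sdd   # remove it from the array
    # ...swap the physical disk, then add the replacement...
    mdadm --manage /dev/md0 --add /dev/sdX      # sdX = the new disk
    cat /proc/mdstat                            # watch the rebuild progress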

  • Until I pulled the bad hard drive out of the system, nothing would load. I pulled the drive, rebooted OMV, and it did what it needed to do; everything is back where it is supposed to be.

    Thanks for everyone's help.
