Hi all,
after an OMV upgrade, I rebooted the system and it reported:
Code
mdadm: failed to RUN_ARRAY /dev/md/nas: Input/output error
mdadm: Not enough devices to start the array
Then the system boots in emergency mode.
So I logged in as root and ran the commands you suggest for support requests.
These are the outputs:
Code
blkid
/dev/sdb: UUID="5f6571a4-a263-eedd-dff5-095ff168f1c8" UUID_SUB="112d432f-407a-49ec-540c-df29b8afd5c8" LABEL="aki-nas:nas" TYPE="linux_raid_member"
/dev/sde1: UUID="02693183-18e7-4c28-84c0-c440b98709aa" TYPE="ext4" PARTUUID="0001b6da-01"
/dev/sde5: UUID="bde7095f-febb-4139-97eb-333e985d7948" TYPE="swap" PARTUUID="0001b6da-05"
/dev/sdc: UUID="5f6571a4-a263-eedd-dff5-095ff168f1c8" UUID_SUB="338836c6-6ea6-2cf2-f34e-9ce0dcb1cc59" LABEL="aki-nas:nas" TYPE="linux_raid_member"
/dev/sda: UUID="5f6571a4-a263-eedd-dff5-095ff168f1c8" UUID_SUB="0c44c88e-21f8-b5ac-790f-d86bbe70431d" LABEL="aki-nas:nas" TYPE="linux_raid_member"
/dev/sdd: UUID="5f6571a4-a263-eedd-dff5-095ff168f1c8" UUID_SUB="3cf495b7-e89c-9341-68ef-e3979f166652" LABEL="aki-nas:nas" TYPE="linux_raid_member"
Code
fdisk -l | grep "Disk "
The primary GPT table is corrupt, but the backup appears OK, so that will be used
Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sde: 298,1 GiB, 320072933376 bytes, 625142448 sectors
Disk identifier: 0x0001b6da
Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sda: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk identifier: F6F9D2E7-6753-495C-81EC-9268ED48774D
Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Code
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md/nas metadata=1.2 name=aki-nas:nas UUID=5f6571a4:a263eedd:dff5095f:f168f1c8
MAILADDR root
The NAS has four 2 TB Seagate drives, and the RAID 5 array stopped working after the system reboot that followed the OMV upgrade.
I absolutely don't know what to do. Please help me; all my photos and data are on that NAS.
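If it helps, here is a small check I could also run to confirm the kernel still sees all four member disks before attempting anything else (just a sketch; the device names are my assumption based on the blkid output above):

```shell
#!/bin/sh
# Sketch: report whether each expected RAID member device node exists.
# Device names /dev/sd[abcd] are assumed from the blkid output above.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  if [ -b "$d" ]; then
    echo "$d: present"
  else
    echo "$d: missing"
  fi
done
```

I can post its output too if that is useful.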
Thank you