I can see the array now.
OK, but I would run some of the commands we have used to confirm your data is visible:
cat /proc/mdstat
cat /etc/mdadm/mdadm.conf
mdadm --detail /dev/md0
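(A quick gloss on what each of those shows; these are standard mdadm status checks, nothing NAS-specific:)
cat /proc/mdstat             # the kernel's live view of the array and its member drives
cat /etc/mdadm/mdadm.conf    # what the system expects the array to look like at boot
mdadm --detail /dev/md0      # full metadata for the assembled array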
It doesn't look like it. I can't access the NAS from my windows or Mac machines. Also checking under the file systems tab the filesystem that was associated to the array just says n/a and missing now.
root@NAS:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
14650675200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
root@NAS:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 spares=1 name=NAS:NASvol1 UUID=9a74f8dd:30a95450:999e44c3:e36af552
# instruct the monitoring daemon where to send mail alerts
MAILADDR zachlow77@gmail.com
MAILFROM root
root@NAS:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Mar 20 15:21:36 2020
Raid Level : raid5
Array Size : 14650675200 (13971.97 GiB 15002.29 GB)
Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri Mar 20 15:21:36 2020
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : NAS:0 (local to host NAS)
UUID : 7f0bd921:87b6ef31:3ee55716:a236e2b4
Events : 0
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
4 8 80 4 active sync /dev/sdf
5 8 96 5 active sync /dev/sdg
Is it rebuilding in RAID Management? The array is active (auto-read-only).
Doesn't look like it. RAID Management shows the state as clean.
That's a start. Try:
mdadm --readwrite /dev/md0
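(If that takes, the auto-read-only flag should drop straight away; quick check:)
cat /proc/mdstat    # should now show "active raid5" without "(auto-read-only)"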
Did that... no change.
When you say no change, does mdstat still show active (auto-read-only)?
My apologies. I should've clarified that I meant I still can't see the files.
root@NAS:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
14650675200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
So it shows as clean in RAID Management, but under File Systems it's n/a?
Exactly.
If you select the array in File Systems, can you mount it from the menu?
Nope. Also, the array isn't listed under the Device column in the File Systems tab.
After all this, it could be toast. Run these two; we need to see a file system in at least one of them:
blkid
fdisk -l | grep "Disk "
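(What we're hoping for, assuming the data file system was ext4 — an assumption, but it's typical for this kind of NAS — is a blkid line for the array itself, something like this purely hypothetical example:)
# /dev/md0: UUID="..." TYPE="ext4"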
root@NAS:~# blkid
/dev/sda1: UUID="f7394bdd-cf50-47f0-99c1-58780a3d5c86" TYPE="ext4"
/dev/sda5: UUID="82b05653-a32e-44e3-a814-917fbcae3d43" TYPE="swap"
/dev/sdc: UUID="7f0bd921-87b6-ef31-3ee5-5716a236e2b4" UUID_SUB="cd45af24-7420-7d67-f3cb-bdec3a2844e8" LABEL="NAS:0" TYPE="linux_raid_member"
/dev/sdd: UUID="7f0bd921-87b6-ef31-3ee5-5716a236e2b4" UUID_SUB="064a19ad-5e0b-cf6b-6643-a01f2da7cf41" LABEL="NAS:0" TYPE="linux_raid_member"
/dev/sdf: UUID="7f0bd921-87b6-ef31-3ee5-5716a236e2b4" UUID_SUB="feeeeaf2-0fb2-6550-af21-d01d9dea16bc" LABEL="NAS:0" TYPE="linux_raid_member"
/dev/sdb: UUID="7f0bd921-87b6-ef31-3ee5-5716a236e2b4" UUID_SUB="96f05638-df70-dbc9-68f1-0575bae0adb9" LABEL="NAS:0" TYPE="linux_raid_member"
/dev/sde: UUID="7f0bd921-87b6-ef31-3ee5-5716a236e2b4" UUID_SUB="283b08f3-52c2-b1ed-a1e4-782e6f66c6ec" LABEL="NAS:0" TYPE="linux_raid_member"
/dev/sdg: UUID="7f0bd921-87b6-ef31-3ee5-5716a236e2b4" UUID_SUB="48f2fef4-687d-d6d4-4e58-de48c5f85843" LABEL="NAS:0" TYPE="linux_raid_member"
root@NAS:~# fdisk -l | grep "Disk "
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/sda: 500.1 GB, 500107862016 bytes
Disk identifier: 0x0007c7b6
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 3000.6 GB, 3000592982016 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes
Disk identifier: 0x00000000
Disk /dev/md0: 15002.3 GB, 15002291404800 bytes
Disk identifier: 0x00000000
That output confirms your post 5: it can't find any partition/file system information. (The "doesn't contain a valid partition table" lines for sdb through sdg are normal here, since the array members are whole disks; the real worry is that blkid shows no file system TYPE for /dev/md0.)
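(As an extra sanity check, you could also look at the raw start of the array; a hedged sketch, assuming hexdump is installed. An intact ext4 superblock starts 1 KiB in, with the magic bytes 53 ef at offset 0x038 of this dump:)
dd if=/dev/md0 bs=1024 skip=1 count=1 2>/dev/null | hexdump -C | head -n 4    # read-only look at where the ext4 superblock would sit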
I'm going to have to sign off, it's getting late here, but what's the output of these two?
wipefs -n /dev/md0
wipefs -n /dev/sdb
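(For anyone following along: -n is wipefs's no-act mode, so these only list whatever signatures are on the device and write nothing. Had the file system survived, md0 would have shown something; assuming it was ext4 — again an assumption — the table would have looked roughly like this hypothetical:)
# offset               type
# 0x438                ext4   [filesystem]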
root@NAS:~# wipefs -n /dev/md0
root@NAS:~# wipefs -n /dev/sdb
offset type
----------------------------------------------------------------
0x1000 linux_raid_member [raid]
LABEL: NAS:0
UUID: 7f0bd921-87b6-ef31-3ee5-5716a236e2b4
No file system!! WTF. If that's the case, something fried this array before we started. (Looking back at the mdadm --detail output: Events is 0, the Update Time equals the Creation Time, and the array UUID doesn't match the one in mdadm.conf — it looks like the array was re-created at some point, which would have wiped the file system.)
Try fsck /dev/md0. If you get any requests to fix, answer y; if you get nothing, then there's nothing left to do.
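(A minimal recovery sketch, assuming the file system was ext4 with standard mkfs parameters; both are assumptions, so treat the superblock location below as a guess:)
fsck.ext4 -n /dev/md0          # read-only first pass; changes nothing
mke2fs -n /dev/md0             # dry run: prints where backup superblocks would sit, valid only if it matches the original mkfs parameters
fsck.ext4 -b 32768 /dev/md0    # then try a repair from the usual first backup superblock (4 KiB-block file systems)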
Tried it, got nothing. Looks like it's toast.
Yep, I'm sorry, I should have realised that on the first page. This will mean a complete rebuild, either with 4 or 5. Check the drives before you start reusing any of them.
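(A quick way to check each drive before reuse; smartctl is in the smartmontools package, and the same commands apply to sdb through sdg:)
smartctl -t long /dev/sdb    # start an extended self-test (several hours on a 3 TB drive)
smartctl -H -a /dev/sdb      # once it finishes: overall health verdict plus the full SMART attributes
badblocks -sv /dev/sdb       # optional read-only surface scan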