Cannot mount RAID after reinstall
-
-
That is what I was looking for, there had to be an error somewhere!!
I am about to sign off; try running fsck /dev/md127. If it finds any errors it will ask if you want to fix them, let's hope that sorts it out.
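(Editorial sketch, not part of the original exchange: before letting fsck write anything, a read-only pass with -n, which answers "no" to every repair prompt, shows the damage without touching the disk.)
Code
# read-only check first: -n answers "no" to every repair prompt
fsck -n /dev/md127
# only if that looks sane, run it interactively and let it fix things
fsck /dev/md127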
-
Code
fsck from util-linux 2.33.1
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md127

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>
Here is the output
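(Editorial sketch: the backup-superblock locations that this error message alludes to can be listed without touching the array, because -n makes mke2fs only simulate. This assumes the filesystem was created with default mkfs options; the listed locations are only valid if the simulated parameters match the original ones.)
Code
# -n is critical: mke2fs only *prints* what it would do, it writes nothing
mke2fs -n /dev/md127
# then try e2fsck against one of the listed backup superblocks, e.g.:
e2fsck -b 32768 /dev/md127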
-
BTW, looking back at your first post, it should not take two days to sync 3x3TB drives, so prepare for the worst!!
-
mdadm --examine /dev/md127
mdadm --examine /dev/sdb
mdadm --examine /dev/sdc
mdadm --examine /dev/sdd
EDIT: you are currently in deep doo-doo. Whilst I commend you for trying to sort this out yourself as per your first post, your actions may result in complete data loss, hence my sig. I can therefore safely assume you do not have a backup; the first failing of home users.
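(Editorial sketch of what to look for in those --examine dumps: all members of one array should agree on Array UUID and Events, and each should report a distinct Device Role.)
Code
# pull the fields that must agree (UUID, Events) and differ (Device Role)
for d in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Array UUID|Events|Device Role|Array State'
done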
-
mdadm --examine /dev/md127
mdadm --examine /dev/sdb
Code
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 924d38d8:fecce75e:654f9379:b1a9e7d1
           Name : Serveur:NASData
  Creation Time : Sun Aug 2 22:41:06 2020
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 5860268976 (2794.39 GiB 3000.46 GB)
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=944 sectors
          State : clean
    Device UUID : 9a2acb5c:117766fc:1d39adef:f118dffa
Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Aug 6 00:12:47 2020
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : de15a34d - correct
         Events : 10329
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 0
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm --examine /dev/sdc
Code
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 924d38d8:fecce75e:654f9379:b1a9e7d1
           Name : Serveur:NASData
  Creation Time : Sun Aug 2 22:41:06 2020
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 5860268976 (2794.39 GiB 3000.46 GB)
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=944 sectors
          State : clean
    Device UUID : 56138d00:7fe4a6c9:9b302768:62295417
Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Aug 6 00:12:47 2020
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : e407028d - correct
         Events : 10329
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 1
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
mdadm --examine /dev/sdd
Code
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 924d38d8:fecce75e:654f9379:b1a9e7d1
           Name : Serveur:NASData
  Creation Time : Sun Aug 2 22:41:06 2020
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 5860268976 (2794.39 GiB 3000.46 GB)
     Array Size : 5860268032 (5588.79 GiB 6000.91 GB)
  Used Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=944 sectors
          State : clean
    Device UUID : bf1f4d56:adc6dd5e:c32a70fa:08966f09
Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Aug 6 00:12:47 2020
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : 53625894 - correct
         Events : 10329
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 2
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
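(Editorial note: all three superblocks above agree, same Array UUID, Events at 10329, Array State AAA, so the non-destructive move would be a plain assemble; a forced --create rewrites superblocks and is the last resort. A hedged sketch:)
Code
# stop the half-assembled device, then reassemble from the existing
# superblocks -- this writes nothing beyond the normal metadata update
mdadm --stop /dev/md127
mdadm --assemble /dev/md127 /dev/sdb /dev/sdc /dev/sdd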
-
EDIT: you are currently in deep doo-doo. Whilst I commend you for trying to sort this out yourself as per your first post, your actions may result in complete data loss, hence my sig. I can therefore safely assume you do not have a backup; the first failing of home users.
Actually, the most I wanted to get back was photos and videos, but most of them are also saved with Google Photos and/or Amazon Photos, so I kind of have a backup, just not for all the rest, which is less critical. It is just sad if I lose some photos that for some reason are not in Google Photos or Amazon.
But if I can recover the RAID and back up everything, then it will be more comfortable for me to know that I got back everything I had; that's why I really want to try everything to make it work.
-
OK, just got back in from shopping; I need refreshment and to confirm an idea, as the output from the three drives appears to be OK.
BTW, if we try this it's referred to as 'sh!t or bust': the drives are fine (or they appear to be), but the raid itself is f*ed.
-
Haha, no worries. As I told you, my main concern is the photos that are on it, but most of them are still on Google/Amazon storage, except for some from when I forgot to put the app on my phone, but well, that's my fault.
Also, Nextcloud did some sync with 1 or 2 computers for some other files, now that I'm thinking about it.
-
OK I had to find a thread I replied to,
mdadm --stop /dev/md127
wait for the confirmation that it has stopped before proceeding, then
mdadm --create --assume-clean --level=5 --raid-devices=3 /dev/md0 /dev/sdb /dev/sdc /dev/sdd
if this works then run
cat /proc/mdstat
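(A hedged expansion of that command, editorial: when re-creating over live data it is safer to pin every layout parameter to the values the old superblocks reported above, metadata 1.2, 512K chunk, left-symmetric, data offset 264192 sectors, because a different mdadm default would shift the data. --data-offset assumes a reasonably recent mdadm.)
Code
mdadm --stop /dev/md127
# pin the geometry to what --examine reported, so the new superblocks
# describe the same data layout as the old ones
mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=3 \
      --metadata=1.2 --chunk=512 --layout=left-symmetric \
      --data-offset=264192s \
      /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat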
-
mdadm --stop /dev/md127
mdadm --create --assume-clean --level=5 --raid-devices=3 /dev/md0 /dev/sdb /dev/sdc /dev/sdd
Code
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Aug 2 22:41:06 2020
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Aug 2 22:41:06 2020
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Aug 2 22:41:06 2020
cat /proc/mdstat
-
let's try this,
mdadm --create --assume-clean --level=5 --raid-devices=3 /dev/md127 /dev/sdb /dev/sdc /dev/sdd
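(Editorial guess at why the first attempt produced no array: mdadm stops at a "Continue creating array?" prompt when the members carry old superblocks, as the next post shows, and either answering y or passing --run is needed for it to proceed. A sketch:)
Code
# --run skips the "Continue creating array?" confirmation
mdadm --create --run --assume-clean --level=5 --raid-devices=3 \
      /dev/md127 /dev/sdb /dev/sdc /dev/sdd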
-
I repeated it and now it did something:
mdadm --create --assume-clean --level=5 --raid-devices=3 /dev/md0 /dev/sdb /dev/sdc /dev/sdd
Code
mdadm: /dev/sdb appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Aug 2 22:41:06 2020
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Aug 2 22:41:06 2020
mdadm: /dev/sdd appears to be part of a raid array:
       level=raid5 devices=3 ctime=Sun Aug 2 22:41:06 2020
Continue creating array? y
Continue creating array? (y/n) y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
cat /proc/mdstat
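(Editorial aside: beyond /proc/mdstat, mdadm --detail confirms the state, size and member roles of the freshly created array.)
Code
# summarise state, size and member roles of the new array
mdadm --detail /dev/md0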
-
That output is what I was expecting; does it appear in Raid Management?
-
Yes, it is in OMV RAID Management. Still nothing in File Systems.
-
let's see if it will mount as before,
mount /dev/md0 /srv
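(A small editorial sketch around that step: if the plain mount fails, forcing the expected type and reading the kernel log usually pins down whether the superblock is the problem.)
Code
# force the expected filesystem type; on failure the kernel log says why
mount -t ext4 /dev/md0 /srv || dmesg | tail -n 20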
-
-
Then there's a file system issue, so,
wipefs -n /dev/md0
wipefs -n /dev/sdb
wipefs -n /dev/sdc
wipefs -n /dev/sdd
with -n (no-act) that will not wipe anything, but it will give some information
-
wipefs -n /dev/md0
Nothing showed up
wipefs -n /dev/sdb
Code
DEVICE OFFSET TYPE              UUID                                 LABEL
sdb    0x1000 linux_raid_member 47a5cb5d-c1aa-d89d-c08c-1e2e811e54c2 Serveur.lan:0
wipefs -n /dev/sdc
Code
DEVICE OFFSET TYPE              UUID                                 LABEL
sdc    0x1000 linux_raid_member 47a5cb5d-c1aa-d89d-c08c-1e2e811e54c2 Serveur.lan:0
wipefs -n /dev/sdd
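(Editorial note on why md0 shows no signature: if the re-created array chose a different Data Offset than the original 264192 sectors, the filesystem no longer lines up with the start of /dev/md0. A hedged check:)
Code
# compare against the 264192-sector Data Offset from the original superblocks;
# a different value here means the filesystem start has shifted
mdadm --examine /dev/sdb /dev/sdc /dev/sdd | grep -E '^/dev/|Data Offset'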
-
This is going down the doo-doo; what's the output of blkid?
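(A sketch of that request, editorial: plain blkid answers from its cache, while -p probes the superblocks directly, which matters here since the signatures have just changed.)
Code
blkid
# -p bypasses the cache and probes the devices' superblocks directly
blkid -p /dev/md0 /dev/sdb /dev/sdc /dev/sdd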