Your array's filesystem is not showing up in blkid. This is why udev is not populating the /dev/disk entries.
This was the part I understood, but I didn't know how to rectify it -> this one's going in my notes
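For context on that first point: udev's persistent-storage rules build the /dev/disk/by-uuid and /dev/disk/by-label symlinks from the ID_FS_* properties that its built-in blkid probe sets, so if blkid finds no filesystem signature on the device, those symlinks are never created. A quick way to inspect what udev recorded (assuming the array device is /dev/md127, as it is later in this thread):
# show the filesystem properties udev recorded for the device;
# if the probe found no filesystem, the ID_FS_* lines will be missing
udevadm info --query=property --name=/dev/md127 | grep ID_FS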
I have a few ideas on how to fix it depending on the output of some commands. Unfortunately, I don't know how to reproduce the situation to try the solutions on my side.
Should you get a fast response to the command?
It took six seconds on my test VM that has 45 drives.
Here is what I would try to fix it:
reboot
# run fsck
fsck.ext4 -f /dev/md127
reboot
That may have been enough to fix it. Is there any output from blkid /dev/md127? If not, move on to the next step:
# change the UUID of the filesystem
tune2fs -U $(uuid) /dev/md127
reboot
That may have been enough to fix it. Is there any output from blkid /dev/md127? If not, I need to think of something else.
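Side note: the $(uuid) substitution assumes the uuid utility is installed; uuidgen or tune2fs -U random would do the same job. As a minimal check after each reboot (a sketch, assuming the array still assembles as /dev/md127):
# does blkid see the filesystem signature again?
blkid /dev/md127
# has udev recreated the by-uuid symlink?
ls -l /dev/disk/by-uuid/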
Hi,
I appreciate your help so very much, thank you!
root@pgnas2:~# fsck.ext4 -f /dev/md127
e2fsck 1.43.4 (31-Jan-2017)
Pass 1: Checking inodes, blocks, and sizes
Inode 150011953 extent tree (at level 1) could be narrower. Fix<y>? yes
Inode 150143135 extent tree (at level 2) could be narrower. Fix<y>? yes
Inode 150340913 extent tree (at level 2) could be narrower. Fix<y>? yes
Inode 150627266 extent tree (at level 2) could be narrower. Fix<y>? yes
Inode 150627273 extent tree (at level 1) could be narrower. Fix<y>? yes
Inode 150627291 extent tree (at level 2) could be narrower. Fix<y>? yes
Inode 173017914 extent tree (at level 1) could be narrower. Fix<y>? yes
Inode 177409638 extent tree (at level 1) could be narrower. Fix<y>? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
pgnas2: ***** FILE SYSTEM WAS MODIFIED *****
pgnas2: 360942/274702336 files (1.2% non-contiguous), 2014241798/2197601280 blocks
Reboot
tune2fs -U $(uuid) /dev/md127
tune2fs 1.43.4 (31-Jan-2017)
Please run e2fsck -f on the filesystem.
So I ran it again:
root@pgnas2:~# fsck.ext4 -f /dev/md127
e2fsck 1.43.4 (31-Jan-2017)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
pgnas2: 360942/274702336 files (1.2% non-contiguous), 2014241798/2197601280 blocks
root@pgnas2:~# tune2fs -U $(uuid) /dev/md127
tune2fs 1.43.4 (31-Jan-2017)
Setting UUID on a checksummed filesystem could take some time.
Proceed anyway (or wait 5 seconds) ? (y,N) <proceeding>
reboot
Well, that's fun. What is the output of: wipefs --no-act /dev/md127 (no, it won't wipe your drive)
Hehe, I am not sure I agree that it's very funny, but it made me laugh anyway.
root@pgnas2:~# wipefs --no-act /dev/md127
offset               type
----------------------------------------------------------------
0x82fcbbbf000        zfs_member   [filesystem]
                     LABEL: zfsvol1
                     UUID:  9642737082427346663

0x438                ext4   [filesystem]
                     LABEL: pgnas2
                     UUID:  4d724f4a-bdf8-11e9-8883-5fb4dd051701
EDIT: Is it not strange that zfs_member is listed here, when we are asking to view /dev/md127?
To give you some background: before the upgrade, I had a three-disk ZFS volume, and two disks died, so I physically removed those three disks from my box (data loss, but hey, RAID is not backup).
Br
/Patric
FINALLY!!!! Yes, it is a huge problem that the zfs signature is on the array or disk. (It is not strange at all that it shows up on /dev/md127: ZFS keeps backup labels at the end of each member device, which is why the leftover signature turns up at that huge offset near the end of the array.) Remember this line in my first post on this thread?
I don't have time to reread all of these posts but we see this a lot when moving to OMV 4.x because there are existing zfs signatures on the mdadm array disks
You are going to have to wipe the zfs signature with wipefs. There will probably be more than one signature. So, you will have to repeat the following steps many times.
wipefs --no-act /dev/md127
# get the offset from the zfs_member signature
wipefs --offset 0x82fcbbbf000 /dev/md127
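If there turn out to be many of them, a rough loop along these lines saves some typing. This is only a sketch: it assumes the two-column wipefs output format shown above, which varies between util-linux versions, so check it against yours before running:
# keep wiping until --no-act reports no more zfs_member signatures
# (assumes the offset is in the first column of the zfs_member line)
while offset=$(wipefs --no-act /dev/md127 | awk '$2 == "zfs_member" {print $1; exit}'); [ -n "$offset" ]; do
    wipefs --offset "$offset" /dev/md127
done
Newer util-linux versions should also be able to do it in one shot with wipefs --all --types zfs_member /dev/md127; check the wipefs man page for your version first.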
Fantastic!
Thank you so much, @ryecoaaron!
After a marathon of repeating those steps:
blkid /dev/md127
/dev/md127: LABEL="pgnas2" UUID="4d724f4a-bdf8-11e9-8883-5fb4dd051701" TYPE="ext4"
and the array is now also listed under File Systems in the GUI.
Is openmediavault open for donations? I would like to contribute to your team!
Best Regards
Patric
Found the donation part on the main website! Contributed!
I still have other issues; I will start a new thread instead.