it started ...
I get an error (I'm at the first one). Should I "ignore" it?
I see the yes/no options.
It's asking if you want to repair the error, which you do, so yes.
The array is now active; either 'mount an existing file system' via Storage -> File Systems, or reboot.
output of cat /proc/mdstat
Can't remember if this works with mdadm running: mdadm --readwrite /dev/md0. If there is an error or some other failed output, run mdadm --stop /dev/md0 then the --readwrite command.
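The sequence above can be sketched as a dry run that only prints the commands in order (remove the echoes to run them for real, as root; /dev/md0 is the array name used in this thread, substitute your own):

```shell
# Dry run: print the recovery commands in order instead of executing them.
# /dev/md0 is the array name from this thread; adjust to your system.
MD=/dev/md0
step1="mdadm --readwrite $MD"   # try this first: takes the array out of read-only
step2="mdadm --stop $MD"        # only needed if step1 errors or fails
step3="mdadm --readwrite $MD"   # then retry the --readwrite command
echo "$step1"
echo "$step2"
echo "$step3"
```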
The array is inactive; run the following:
mdadm --stop /dev/md127
wait to confirm that mdadm has stopped, then,
mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcdef]
The array should rebuild/sync and display under Software Raid; once that completes, the array can be mounted.
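While the rebuild runs, cat /proc/mdstat shows a progress line; here is a minimal sketch of pulling the percentage out of such a line (the sample line below is made up for illustration, not output from this thread):

```shell
# Extract the resync/recovery percentage from an mdstat-style progress line.
# The sample line is illustrative; real lines come from cat /proc/mdstat.
line='[==>..................]  recovery = 12.6% (123456/976773168) finish=85.3min'
pct=$(printf '%s\n' "$line" | sed -n 's/.*= *\([0-9.]*\)%.*/\1/p')
echo "resync at ${pct}%"   # prints: resync at 12.6%
```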
no, it doesn't show up under software raid.
Until the array is displayed under software raid the file system will not be available to mount.
Post the output of cat /proc/mdstat. Please put it in a code box (this symbol </> on the forum bar); it makes it easier to read.
Any ideas?
I ask again: does the array display in Raid Management?
The question is, is the array displayed/shown in Raid Management?
Btw, does OMV always consume the whole drive where it's installed?
Yes. What you did with /dev/sdd was to format it, then you tried to add it to the existing array; with OMV this will not work. You need to add the blank drive to the array, which ensures that the metadata etc. is the same across the array.
What you will need to do: Storage -> Wipe, select /dev/sdd, from the menu select Wipe, then select Secure. When finished ->
Raid Management -> on the menu there is an option to add a drive. You are using V4; I can't remember if the option is Restore or Rebuild (it's certainly not Grow). From the dialog box select the drive and click OK; this will add the drive to the array.
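For reference, the GUI steps above correspond roughly to this CLI sequence, sketched as a dry run that only prints the commands (wipefs and mdadm --add are standard tools; /dev/sdd and /dev/md0 are the example device names from this thread):

```shell
# Dry run: print the CLI equivalent of wiping a disk and adding it to an array.
# /dev/sdd and /dev/md0 are example names from this thread; adjust to yours.
DISK=/dev/sdd
MD=/dev/md0
wipe_cmd="wipefs --all $DISK"   # clear old signatures so the array metadata matches
add_cmd="mdadm --add $MD $DISK" # mdadm writes its metadata and starts the rebuild
echo "$wipe_cmd"
echo "$add_cmd"
```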
or is it rebuilding now
I don't know; the image you are showing is the file system, which is initially irrelevant. Raid Management will show that information, or from the CLI: cat /proc/mdstat
The array is inactive, and it's showing 3 drives available in the array. To get it started run ->
mdadm --stop /dev/md0 wait for the output, then ->
mdadm --assemble --force --verbose /dev/md0 /dev/sd[bce] this should assemble the array; wait for the rebuild/sync to complete
is /dev/sdd1 the new drive that you attempted to add/wanted to add to the array to replace the failed one?
You need to add a DNS entry to your network interface
Network -> Interfaces -> select it and click edit on the menu
The version I'm using is openmediavault_4.1.3
This is EOL; the current version is v7.
After a drive failed on my RAID 5 I'm lost. I replaced the drive with a same-size disk, but the RAID file system says it's missing and the options are greyed out.
This is probably where it went wrong; mdadm (software raid) is not plug and play, it has to be told what to do in a specific order.
SSH into OMV as root, run the following two commands, and post the output in a code box (this symbol </> on the forum menu bar); it makes it easier to read.
cat /proc/mdstat
blkid
Is it possible to grow a RAID 1 array?
No
Convert it to RAID 10?
No, not without backing up your data first
Does this mean I have 2 bad drives?
No; according to that you have one. mdadm knows there should be 7 drives in the array; it also knows that 6 drives are active and 6 drives are working. So your Raid6 should have 7 working drives, not 8.
OMV uses the full block device when creating an array, unless the user uses partitions and creates the array from the cli
Is this because I removed the bad drive?
Yes, probably; a drive has to be failed first, then removed from an array,
e.g. mdadm --fail /dev/md127 /dev/sdX then mdadm --remove /dev/md127 /dev/sdX -> X being the drive reference letter
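Put together, the full replacement sequence looks roughly like this as a dry run (sdX for the failed drive and sdY for its replacement are placeholders; /dev/md127 is the array from this thread):

```shell
# Dry run of the fail -> remove -> add sequence from the post above.
# /dev/md127 is the array; sdX is the failed drive, sdY its replacement (placeholders).
MD=/dev/md127
fail_cmd="mdadm --fail $MD /dev/sdX"     # mark the drive as failed
remove_cmd="mdadm --remove $MD /dev/sdX" # detach it from the array
add_cmd="mdadm --add $MD /dev/sdY"       # add the replacement; a rebuild starts
echo "$fail_cmd"
echo "$remove_cmd"
echo "$add_cmd"
```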
Is it skipping because they are already in md127?
No, it's skipping because something has permanent access to the array.
you could try mdadm --run /dev/md127 but I don't think that will have an effect
mdadm: stopped /dev/md127
Now run the assemble command in #7
I ran mdadm --assemble but get this:
The answer is in the output, it's skipping because mdadm is running
So the array is still inactive; until it's active and in sync it's not going to mount.
mdadm --stop /dev/md127
mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcdefg]
This should assemble and sync the array; then reboot and the array should mount, but it must assemble and sync first.
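Before mounting, you can confirm the state from /proc/mdstat; here is a minimal sketch that classifies an mdstat-style status line (the sample line is made up for illustration, not output from this thread):

```shell
# Classify an mdstat-style status line as active or inactive.
# The sample line is illustrative; real lines come from cat /proc/mdstat.
line='md127 : active raid6 sda[0] sdb[1] sdc[2] sdd[3] sde[4] sdf[5] sdg[6]'
case "$line" in
  *' : active '*)   state=active ;;
  *' : inactive '*) state=inactive ;;
  *)                state=unknown ;;
esac
echo "md127 is $state"
```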