The issue has been fixed in openmediavault 5.5.18, see https://github.com/openmediava…7fa5fc10149fefa686fc9a036.
Thanks votdev. To upgrade, do I just run the command:
And I don't risk losing any of the data on the RAID?
You need to run omv-update or apt-get update; apt-get upgrade. The update does not touch your data, so it is safe. If you're still worried about that, unplug your drives first (with all the problems that come with that).
Thanks votdev. I'm upgraded and could apply the changes. Thanks for your support.
geaves, when I click into my RAID, it still tells me it's degraded though.
Version : 1.2
Creation Time : Wed Jun 5 15:42:12 2019
Raid Level : raid5
Array Size : 11720782848 (11177.81 GiB 12002.08 GB)
Used Dev Size : 5860391424 (5588.90 GiB 6001.04 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Dec 17 19:34:36 2020
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : pool-ipv6-pd-omv:Storage
UUID : 7686503f:2d11c7ae:48f9f41b:39240c72
Events : 684113
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
3 8 48 1 active sync /dev/sdd
- 0 0 2 removed
2 8 32 - faulty /dev/sdc
After applying the last changes, I didn't do a restart...should I? Or do I select Recover under RAID?
I didn't do a restart...should I?
NO!!
This is looking worse than it was before: it's showing sdc as faulty, but it's also showing a device as removed, which I'm going to assume is sdc. Is the output above from mdadm --detail /dev/md127? Can you also post the output of:
cat /proc/mdstat
See...I'm learning...I asked this time before rebooting
root@omv-server:~# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Wed Jun 5 15:42:12 2019
Raid Level : raid5
Array Size : 11720782848 (11177.81 GiB 12002.08 GB)
Used Dev Size : 5860391424 (5588.90 GiB 6001.04 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Dec 17 21:00:23 2020
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : pool-ipv6-pd-omv:Storage
UUID : 7686503f:2d11c7ae:48f9f41b:39240c72
Events : 684121
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
3 8 48 1 active sync /dev/sdd
- 0 0 2 removed
2 8 32 - faulty /dev/sdc
root@omv-server:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active raid5 sdd[3] sdc[2](F) sdb[0]
11720782848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
bitmap: 40/44 pages [160KB], 65536KB chunk
unused devices: <none>
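For anyone skimming along, the degraded state can be read straight out of that /proc/mdstat text. A minimal shell sketch, using the output above as a canned sample (on a live box you would read /proc/mdstat itself):

```shell
#!/bin/sh
# Canned sample copied from the output above; live: mdstat=$(cat /proc/mdstat)
mdstat='md127 : active raid5 sdd[3] sdc[2](F) sdb[0]
      11720782848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]'

# Members tagged (F) are faulty; the underscore in [UU_] is the missing slot.
echo "$mdstat" | grep -o '[a-z]*\[[0-9]*\](F)'       # -> sdc[2](F)
echo "$mdstat" | grep -q '_\]' && echo "array is degraded"
```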
You get a ⭐️
OK, in Raid Management select the raid, then on the menu click Delete. In the dialog, does it show /dev/sdc? (I would like to try and do this from the WebUI, the way it should be done.)
I don't have an option for Delete. It's greyed out.
Damn, this is getting weird. Do you have the ability to back up your data locally? The reason I ask is because this should be doable from the GUI.
If not, we're going to have to try the following:
mdadm --stop /dev/md127
mdadm /dev/md127 --fail /dev/sdc
mdadm /dev/md127 --remove /dev/sdc
The above will hopefully work and show the raid with just the two drives, /dev/sd[bd]; to confirm, run mdadm --detail /dev/md127, but you may get an error on the --fail.
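A side note for later readers: mdadm expects the array device in front of --fail/--remove, and a member can be failed and removed while the array is still running (stopping it first makes /dev/md127 disappear, which is likely the error hinted at above). A hedged, dry-run-guarded sketch of that shape, using this thread's device names:

```shell
#!/bin/sh
# Dry-run sketch (an assumption, not the poster's verified procedure).
# DRY_RUN=1 (default) only prints the commands instead of running them.
ARRAY=/dev/md127
BAD=/dev/sdc
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run mdadm "$ARRAY" --fail "$BAD"     # mark the member faulty (may already be)
run mdadm "$ARRAY" --remove "$BAD"   # detach it from the running array
run mdadm --detail "$ARRAY"          # confirm only /dev/sd[bd] remain
```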
By backing up my data, you mean the information I have on the RAID? Yes, I'm doing that at the moment. I assume once that's backed up we have a lot more options, and we aren't afraid of losing anything.
Yes, I'm doing that at the moment. I assume once that's backed up we have a lot more options, and we aren't afraid of losing anything.
I'm sorry but one gold star is enough
My thinking is to start over, as it should be possible to remove a drive using the GUI, but the choice is yours: initially you could try what I posted above, or you start over. BUT!! there is a set procedure before going down that route.
I'm sorry but one gold star is enough
Thought I was on a roll
Let me get everything backed up, then I'll go through the steps you say. Thanks!
then will go through the steps you say. Thanks
We can continue this tomorrow
once you're backed up
Might be Saturday or Sunday. Backup is via rsync and a USB HDD. It's slow as there are large files, and it's quite out of date. I'll let you know when completed.
Might be Saturday or Sunday
OK, whilst waiting for the paint to dry (rsync to complete) you could consider some options:
Option 1:
#49 This should work, but I'm sceptical due to the fact 'delete' is greyed out and it may require CLI use only
Option 2:
Remove the Raid completely and start the configuration again, but to do that you have to go in reverse: remove SMB shares, remove shared folders, unmount and delete the Raid, wipe the drives, and start again.
Option 3:
Complete reinstall of OMV; whilst this is a PIA, it is usually a last resort.
Option 4:
Should anything go wrong with any of the above and all expletives have been used, proceed to option 5.
Option 5:
Having totally exhausted all means of getting this to work, and you start questioning the meaning of life, expel said hardware from the bedroom window; it won't do the hardware any good, but you might feel better
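For option 2, the "wipe the drives" step usually means clearing the md superblocks once the array is unmounted and stopped. A dry-run sketch under that assumption; the mount point below is a guess (check mount on your own box), and the device names are this thread's:

```shell
#!/bin/sh
# Dry-run teardown sketch for option 2 (assumptions, not a tested recipe).
# DRY_RUN=1 (default) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run umount /srv/dev-disk-by-label-Storage  # assumed mount point
run mdadm --stop /dev/md127                # stop the array
for d in /dev/sdb /dev/sdc /dev/sdd; do
  run mdadm --zero-superblock "$d"         # erase the RAID metadata
  run wipefs -a "$d"                       # erase any remaining signatures
done
```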
An option I have considered many times for other issues. Never this one though...I have steadfastly kept faith in you!
Before deciding on options 1-3, maybe it's worth thinking about the other issue you pointed out back at the start: my OS file system ballooned up to 100%. I think this happened after a power cut.
So any decision here should probably reflect that I need to fix that too.
Thoughts?
So any decision here should probably reflect that I need to fix that too
I'm assuming that's referencing #5 and this -> Thanks, yes, something happened a few weeks ago where I had a power cut and my OS drive filled almost overnight. I was at about 50%, but then it jumped up to 100%.
That would suggest you have a downloader via docker; there's a plugin, sharerootfs, that allows you to create a share on the OS drive.
But as to what caused the problem you may never find out. I use a USB flash drive; docker points to an independent drive on my system, along with any container configs.
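As for hunting down what actually filled the OS drive, one common approach (not from the thread itself) is du restricted to a single filesystem, so the RAID mount and other data drives aren't counted:

```shell
#!/bin/sh
# Per-directory usage on one filesystem only (-x stops at mount points);
# the biggest offenders print last. On a real box TARGET defaults to /.
TARGET=${TARGET:-/}
du -xh --max-depth=1 "$TARGET" 2>/dev/null | sort -h | tail -n 10
```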
Going back to your #48: the Remove option is also greyed out in your image, and I have spotted my own error. Remove will remove a drive from the array even whilst it's mounted; Delete will delete the array, but the raid needs to be unmounted in File Systems.
I've just looked at this in a VM
OK geaves, paint has dried!
Are we starting with option 1 first?
If not, we're going to have to try the following:
mdadm --stop /dev/md127
mdadm /dev/md127 --fail /dev/sdc
mdadm /dev/md127 --remove /dev/sdc
The above will hopefully work and show the raid with just the two drives, /dev/sd[bd]; to confirm, run mdadm --detail /dev/md127, but you may get an error on the --fail.
Are we starting with option 1 first
You could start there first; if all three work, then you'll have to wipe the drive, then add it back to the array.
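Those last two steps ("wipe the drive, then add it back") might look like this on the CLI; a dry-run-guarded sketch, assuming /dev/sdc has already been removed from the array:

```shell
#!/bin/sh
# Dry-run sketch of wiping the ejected drive and re-adding it (assumption:
# /dev/sdc is already out of /dev/md127). DRY_RUN=1 only echoes commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run wipefs -a /dev/sdc               # clear old signatures on the drive
run mdadm /dev/md127 --add /dev/sdc  # re-add; the rebuild starts on its own
run cat /proc/mdstat                 # a 'recovery' progress line shows rebuild
```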