Filesystem Mountpoint Missing

    • Official Post

    Thanks votdev. To upgrade, do I just run the command:


    Code
    omv-release-upgrade

    I won't jeopardise any of the data on the RAID, will I?

    You need to run omv-update or apt-get update; apt-get upgrade. The update does not touch your data, so it is safe. If you're still worried about that, unplug your devices (with all the problems that arise from this).

  • You need to run omv-update or apt-get update; apt-get upgrade. The update does not touch your data, so it is safe. If you're still worried about that, unplug your devices (with all the problems that arise from this).

    Thanks votdev. I've upgraded and was able to apply the changes. Thanks for your support.


    geaves, when I click into my Raid, it still tells me it's degraded though.


    After applying the last changes, I didn't do a restart...should I? Or do I select Recover under RAID?

    • Official Post

    I didn't do a restart...should I

    NO!!


    This is looking worse than it was before :( it's showing sdc as faulty, but it's also showing a device as removed, which I'm going to assume is sdc. The output above is from mdadm --detail /dev/md127


    cat /proc/mdstat

  • NO!!


    This is looking worse than it was before :( it's showing sdc as faulty, but it's also showing a device as removed, which I'm going to assume is sdc. The output above is from mdadm --detail /dev/md127


    cat /proc/mdstat

    See...I'm learning...I asked this time before rebooting :)


    Code
    root@omv-server:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
    md127 : active raid5 sdd[3] sdc[2](F) sdb[0]
          11720782848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
          bitmap: 40/44 pages [160KB], 65536KB chunk
    
    unused devices: <none>
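Reading that output: sdc[2](F) means the kernel has flagged sdc as (F)ailed, and [3/2] [UU_] means the array wants 3 devices but only 2 are up (the trailing _ is the missing slot). If it helps, the health fields can be pulled out with grep; shown here against a saved copy of the output above, but on a live box you'd read /proc/mdstat directly:

```shell
# Saved copy of the two mdstat lines from the post above.
mdstat='md127 : active raid5 sdd[3] sdc[2](F) sdb[0]
      11720782848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]'

# Devices flagged (F)ailed:
echo "$mdstat" | grep -o '[a-z]*\[[0-9]*\](F)'          # -> sdc[2](F)

# Expected/active device counts and per-slot status:
echo "$mdstat" | grep -o '\[[0-9]*/[0-9]*\] \[[U_]*\]'  # -> [3/2] [UU_]
```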
    • Official Post

    See...I'm learning...I asked this time before rebooting

    You get a ⭐️


    OK, in Raid Management select the raid, then on the menu click Delete. In the dialog, does it show /dev/sdc? (I would like to try to do this from the WebUI, the way it should be done.)

    • Official Post

    I don't have an option for Delete. It's greyed out.

    Damn, this is getting weird. Do you have the ability to back up your data locally? The reason I ask is that this should be doable from the GUI.


    If not, we're gonna have to try the following:


    mdadm --stop /dev/md127


    mdadm /dev/md127 --fail /dev/sdc


    mdadm /dev/md127 --remove /dev/sdc


    The above will hopefully work and show the raid with just the two drives /dev/sd[bd]; to confirm, run mdadm --detail /dev/md127, but you may get an error on the --fail.
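For what it's worth, a sketch of that sequence as I'd type it (mdadm's manage mode wants the array named before the member device, and the --stop is only needed if the plan is to delete the array afterwards, since a stopped array won't accept --fail/--remove):

```shell
# Mark sdc failed (it already shows (F), so this may just error out
# harmlessly), then pull it from the array:
mdadm /dev/md127 --fail /dev/sdc
mdadm /dev/md127 --remove /dev/sdc

# Confirm the array is now just /dev/sdb and /dev/sdd:
mdadm --detail /dev/md127

# Only if the plan is to delete the array entirely afterwards:
mdadm --stop /dev/md127
```

These need root and a live array, so run them on the box itself, not copy-pasted blindly.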

    • Official Post

    Yes, I'm doing that at the moment. I assume once that's backed up we have a lot more options, and we aren't afraid of losing anything.

    I'm sorry but one gold star is enough :)


    My thinking is to start over, as it should be possible to remove a drive using the GUI, but the choice is yours: you could try what I posted above first, or you start over. BUT!! There is a set procedure before going down that route.

    • Official Post

    Might be Saturday or Sunday

    OK, whilst waiting for the paint to dry (the rsync to complete) you could consider some options:


    Option 1:


    #49 This should work, but I'm sceptical due to the fact Delete is greyed out; it may require CLI use only.


    Option 2:


    Remove the Raid completely and start the configuration again. To do that you have to go in reverse: remove SMB shares, remove shared folders, unmount and delete the Raid, wipe the drives and start again.
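In case it saves some typing later, the CLI side of option 2 usually boils down to something like this once the shares and shared folders are gone in the WebUI (device names as in this thread; double-check them with lsblk before wiping anything):

```shell
umount /dev/md127                    # or unmount it in File Systems first
mdadm --stop /dev/md127              # stop the array
mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd   # erase raid membership
wipefs -a /dev/sdb /dev/sdc /dev/sdd                 # clear remaining signatures
```

After that the drives show up as blank and can be re-used to build a new array.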


    Option 3:


    Complete reinstall of OMV; whilst this is a PIA, it is usually a last resort.


    Option 4:


    Should anything go wrong with any of the above, and all expletives have been used, proceed to option 5.


    Option 5:


    Having totally exhausted all means of getting this to work, and having started questioning the meaning of life, expel said hardware from the bedroom window. It won't do the hardware any good, but you might feel better :D

  • Option 5:


    Having totally exhausted all means of getting this to work, and having started questioning the meaning of life, expel said hardware from the bedroom window. It won't do the hardware any good, but you might feel better :D

    An option I have considered many times for other issues. Never this one though...I have steadfastly kept faith in you!


    Before deciding on options 1-3, maybe it's worth thinking about the other issue you pointed out back at the start: my OS filesystem ballooned up to 100%. I think this happened after a power cut.


    So any decision here should probably reflect that I need to fix that too.
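While deciding, du can usually point at what ate the OS drive; a generic sketch (the -x keeps it on the root filesystem, so it won't wander into the RAID mount):

```shell
# Ten largest top-level directories on the root filesystem only:
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 10

# Usual suspects after a power cut: runaway logs and docker layers.
du -sh /var/log /var/lib/docker 2>/dev/null
```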


    Thoughts?

    • Official Post

    So any decision here should probably reflect that I need to fix that too

    I'm assuming that's referencing #5 and this -> Thanks, yes something happened a few weeks ago where I had a power cut and my OS drive filled almost overnight. I was at about 50%, but then it jumped up to 100%.


    That would suggest you have a downloader running via docker; there's a plugin, sharerootfs, that allows you to create a share on the OS drive.


    But as to what caused the problem, you may never find out. I use a USB flash drive; docker points to an independent drive on my system, along with any container configs.
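If it helps anyone later: pointing docker at a separate drive is the data-root setting in /etc/docker/daemon.json. The mount path below is only an example -- substitute whatever your data disk is actually mounted as:

```shell
systemctl stop docker
mkdir -p /srv/dev-disk-by-label-data/docker          # example path only
rsync -a /var/lib/docker/ /srv/dev-disk-by-label-data/docker/
cat > /etc/docker/daemon.json <<'EOF'
{ "data-root": "/srv/dev-disk-by-label-data/docker" }
EOF
systemctl start docker
```

Once docker is confirmed working from the new location, the old /var/lib/docker can be removed to reclaim the space.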

    • Official Post

    Going back to your #48, the Remove option is also greyed out in your image, and I have spotted my own error: Remove will remove a drive from the array even whilst it's mounted; Delete will delete the array, but the raid needs to be unmounted in File Systems first.


    I've just looked at this in a VM

  • Ok geaves, the paint has dried!


    Are we starting with option 1 first?

