Disappointing First Tests

  • I have spent some hours over the last months giving OMV 4 a try, and because of issues I then also tried OMV 5.


    I never managed to boot into OMV 4 because the system couldn't find the boot point. I searched and found the first-aid tutorial, but nothing helped. I didn't want to give up so easily and wanted to give it another chance.
    So I moved to version 5 (I thought something had to be better and improved).


    Truth is, everything was easy and fine with 5.0.5: installing, booting, accessing the web interface, creating a RAID1 (mirror) and enabling SMB / a shared folder. ALL GOOD! I was very happy getting these results, with stability and speed!!!
    My second test, since my intention is to use the server professionally, was to break the RAID (remove 1 hard drive out of 2). I shut down the system normally and just removed the SATA cable.


    The system couldn't even boot: ALERT! /dev/sdc1 does not exist (dropping to a shell). To my mind this is a big risk of losing data in the case of a single drive failure, and it caused boot issues for the system.
    Of course I expected this not to happen, because that is the whole philosophy of RAID: still having your DATA when a hard disk fails. Instead of that, I couldn't even boot. Not a good experience.


    Is there anything else I can do or try?



    System: E5400 dual core / 4 GB DDR2 RAM / 120 GB SSD for the system

    • Official post

    Of course I expected this not to happen, because that is the whole philosophy of RAID: still having your DATA when a hard disk fails

    Why? The perception that software RAID is the same as hardware RAID is nothing new on this forum. You cannot simply 'pull a drive' from a software RAID and expect it to come back up; as far as the software is concerned the drive has disappeared, and in fact the RAID becomes inactive, so there is no loss of data.
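    As a rough illustration of what that looks like from the CLI (assuming the array is /dev/md0 — on your system the name may differ, e.g. /dev/md127):

        # show all md arrays and their state; a pulled drive typically leaves the array "inactive"
        cat /proc/mdstat

        # detailed view of one array, including which member devices are missing or faulty
        mdadm --detail /dev/md0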


    The system couldn't even boot: ALERT! /dev/sdc1 does not exist (dropping to a shell).

    This is a grub problem. My guess is you installed from a USB flash drive, so grub has to be updated; this has nothing to do with a drive being pulled from the array. Your problem with 4 is probably the same issue: grub needs to be updated with the mount point.
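    I can't say exactly what is wrong on your box, but as a rough sketch of what to check when the initramfs complains about a missing device (run as root; nothing here is specific to your setup):

        # list the UUIDs of all partitions and compare them with what /etc/fstab references;
        # entries that use plain device names like /dev/sdc1 break when drive letters shift
        blkid
        cat /etc/fstab

        # regenerate the initramfs so it matches the current disks
        update-initramfs -u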


    I'll tag @macom, he's dealt with the boot issues more times than I have. Once that's resolved I can help with the RAID, but why RAID when you can use one drive for data and then rsync to a second?
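    By "rsync to a second" I just mean something like the following, run on a schedule (the paths are only examples, not anything on your system):

        # mirror the data drive onto the second drive, deleting files that no longer exist on the source
        rsync -a --delete /srv/data/ /srv/backup/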

  • Hello @geaves, and thanks for your help.


    As soon as I put the SATA drive back in, the boot from the USB flash drive was OK and the RAID is also OK.


    My intention is to test RAID 1/5/6, since in theory these are for redundancy. Of course one drive plus rsync to a second could be a solution for home use, but I'm looking for something like RAID that is able to repair itself while online (by replacing the failed drive), because the OMV NAS will be part of a system working 24/7 and will be accessed by another 4-5 systems.


    Any suggestion is very much appreciated and I'm willing to test it.


    What is the official repair procedure, then, if a drive fails?

  • There is no automatic repair procedure; at most you can set up a hot spare drive to sync automatically. Everything else you need to do manually (or write some kung-fu script that recognizes the disk and adds it to the RAID).
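    For example, a hot spare can be added like this (device names are only placeholders, assuming the array is /dev/md0 and the spare disk is /dev/sdd):

        # add a disk to a healthy array; since the array already has its full set of members,
        # mdadm keeps it as a hot spare and starts a rebuild automatically if a member fails
        mdadm --manage /dev/md0 --add /dev/sdd

        # the spare shows up marked with (S) in the device list
        cat /proc/mdstat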

    • Official post

    As soon as I put the SATA drive back in, the boot from the USB flash drive was OK and the RAID is also OK.

    The last part of that sentence makes sense: the RAID now has the two drives back and can see them, so it comes back clean. The boot from USB is confusing, both in your first post and in the above; to me something is not right, which is why I tagged @macom, who may be able to shed some light on this for you.


    Any suggestion is very much appreciated and I'm willing to test it.


    What is the official repair procedure, then, if a drive fails?

    Let's deal with the second part. If you have set things up correctly you should have received an email informing you of a degraded array. Log in and go to Raid Management: you should see the RAID as clean/degraded. On the menu select Delete; a dialog appears, select the failed drive and click OK, and the drive is removed from the array. You can then remove the drive from the system and add a new drive. Under Disks, select the new drive and wipe it, even a brand-new one!! Then format it with the same file system as the array. Once that is complete, go to Raid Management -> Recover, select the new drive in the dialog and click OK, and the RAID will sync to the new drive. Once the sync is complete the RAID will come back clean (a command-line sketch of the same steps is below).
    Another option is to have a spare disk already in the system, wiped and formatted; then it's just a case of following the above for removal and recovering with the spare drive.
    The one thing to set up is email alerts for SMART errors and software RAID errors, so you can act as soon as possible.
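    For reference, the web interface steps above correspond roughly to the following mdadm commands (purely a sketch — /dev/md0, /dev/sdX and /dev/sdY are placeholders for your array, the failed disk and the new disk):

        # mark the failed disk as faulty and remove it from the array
        mdadm --manage /dev/md0 --fail /dev/sdX --remove /dev/sdX

        # after installing the new disk, wipe any old signatures from it
        wipefs --all /dev/sdY

        # add the new disk; the array resyncs onto it and returns to clean when finished
        mdadm --manage /dev/md0 --add /dev/sdY

        # watch the resync progress
        cat /proc/mdstat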


    All you have to remember is that software RAID has to be 'told' what to do, otherwise you get what you experienced in your first post. If you run into trouble, that is the first port of call; the information from those commands will assist in diagnosing the problem (a sketch of the typical commands is below).
    In your first post, if you had run or could have run cat /proc/mdstat, the RAID would have shown as inactive, but with further information the RAID could have been brought back up (without loss of data) to solve the problem.
    If I went back to RAID today I would go with RAID 6 rather than RAID 5; 6 allows for 2 drive failures whereas 5 allows only 1. I lost a RAID 5 in a school, having ordered a replacement drive for one that had failed: during the sync process another drive failed.
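    To make the diagnostics mentioned above concrete, these are the sorts of commands usually asked for when an array goes missing or inactive, plus an example of bringing an inactive mirror back up degraded (device names are placeholders, and the right commands depend on what the diagnostics actually show):

        # diagnostics: array state, partitions/UUIDs and mdadm's view of things
        cat /proc/mdstat
        blkid
        fdisk -l | grep "Disk "
        cat /etc/mdadm/mdadm.conf
        mdadm --detail --scan --verbose

        # example only: stop an inactive array and force-assemble it from the remaining member,
        # which brings a RAID1 back up in a degraded but usable state
        mdadm --stop /dev/md0
        mdadm --assemble --force /dev/md0 /dev/sdb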
    _____________________________________________________________________________________


    I have stopped using a RAID option; I now use MergerFS + SnapRAID. SnapRAID is best used on systems that are not constantly accessed and that hold large files, so for home use it's ideal, as most users store movies, TV shows, photos, backups, etc., so this option may not be of use to you.
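    In OMV this is normally set up through the mergerfs and snapraid plugins rather than by hand, but purely to illustrate what sits underneath, a minimal sketch (all paths and disk names are made-up examples):

        # /etc/snapraid.conf — one parity disk plus content files and the data disks
        parity /srv/parity1/snapraid.parity
        content /var/snapraid.content
        content /srv/data1/snapraid.content
        data d1 /srv/data1/
        data d2 /srv/data2/

        # /etc/fstab — pool the data disks into one mount point with mergerfs
        /srv/data1:/srv/data2  /srv/pool  fuse.mergerfs  defaults,allow_other  0  0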

    • Official post

    As soon as I put the SATA drive back in, the boot from the USB flash drive was OK and the RAID is also OK.

    Try to boot with the RAID drives connected and then execute update-grub from the CLI. After that, run your test again. Hopefully it will boot then.
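    In other words, once the system is booted with all drives attached, run as root:

        # regenerate /boot/grub/grub.cfg so it matches the current disks
        update-grub

        # if grub itself ended up on the wrong disk, it can also be reinstalled to the system SSD
        # (replace /dev/sdX with the SSD's device; this is only an example)
        grub-install /dev/sdX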
