Help! Me too... Raid5 disappeared after Update to OMV 3.0.85 and reboot

  • Hi everyone,


    I've got a big problem and I'm stuck finding a solution...


    Last week I installed OMV and created a RAID 5 consisting of five 3 TB WD Red drives through the WebGUI (/dev/sda - sde). After initialization of the array I created a filesystem (EXT4), started setting up my shares etc. and in the meantime copied over 6 TB of data onto the array (which is very important to me...). Today I saw some updates in the WebGUI, incl. OMV 3.0.85. I installed them and thought a reboot wouldn't be a bad idea. Indeed, it was a very bad idea, because after the reboot my RAID 5 is gone...


    I've already read several threads here and in general Linux forums dealing with this kind of problem, but none of the solutions worked for me. So I decided to ask for assistance here.


    After reading several threads I tried:



    Any help would be highly appreciated.


    Thanks in advance & kind regards


    Chris (aka. CryptoWorX)

    OMV 4.1.8.2-1 (Arrakis) - Kernel Linux 4.16.0-0.bpo.2-amd64 | PlugIns: OMV-Extras, Shell-in-a-Box, Plex Media Server, Openmediavault-Diskstats
    Mainboard: MSI C236M Workstation | CPU: Intel Pentium G4500 | RAM: 2 x 4GB Kingston ECC | Systemdrive: 1 x Samsung EVO 860 | Datadrives: 4 x IronWolf ST6000VN0033 6 TB (Raid5) | NIC: Intel I350T2 PCIe x4

  • Did you already reboot between the RAID initialization and this last reboot?

    I had to look through the logs - No...

    As there are obviously no partitions listed/created on the array's members - could it have something to do with "DEVICE partitions" in the mdadm.conf?
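
    For reference, this is roughly what the relevant part of /etc/mdadm/mdadm.conf looks like on a Debian/OMV install (a generic sketch, not a verbatim copy from my system; as I understand it, "DEVICE partitions" makes mdadm scan every block device listed in /proc/partitions, which includes whole disks such as /dev/sda, so it should not by itself hide an array built on unpartitioned members - the ARRAY line below is only illustrative):

        # scan everything the kernel lists in /proc/partitions (whole disks included)
        DEVICE partitions
        # array definitions are normally appended from the output of: mdadm --detail --scan
        ARRAY /dev/md0 metadata=1.2 UUID=<uuid-of-the-array> name=<hostname>:0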


  • Another thought...


    I checked the disks with Testdisk and found recoverable partitions:


    Maybe the partitions were not written to the disks? But how could the array have been accessible before then?


    I'm confused...
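
    In case it helps, this is the set of commands I understand is normally used to check whether the mdadm superblocks sit on the whole disks rather than on partitions (device names as in my first post; just a sketch, I haven't posted the output here):

        # look for md superblocks on the raw member disks (OMV builds its arrays on whole disks)
        mdadm --examine /dev/sd[abcde]
        # the kernel's current view of md devices
        cat /proc/mdstat
        # signatures blkid still sees on each member
        blkid /dev/sd[abcde]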


  • As there are obviously no partitions listed/created on the array's members - could it have something to do with "DEVICE partitions" in the mdadm.conf?

    No idea. I've been dealing with (failed) RAIDs for two decades now, and that's the reason I avoid RAID wherever possible. As you might have noticed already, RAID is only about availability and provides zero data protection (and if you trust it without extensive testing, you're always lost). I hope you have a backup of your data?

  • I hope you have a backup of your data?

    Not of everything - the OMV NAS is mainly used as a backup device for my Win2012R2 server. Some shares are only synchronized twice a week, so I might lose some data... Nothing essential, but annoying...


    What do you think of the Testdisk results? Worth a try?


  • What do you think of the Testdisk results? Worth a try?

    No idea. The last time I dealt with mdraid and RAID5 was maybe 18 years ago, and I gave up entirely on the idea (especially on data recovery, since prevention + backup is the simpler approach). But since I've also read a few threads here in the last few weeks about RAIDs vanishing, I would believe it's related to the config not being written to disk (so everything is gone after the first reboot -- better test a reboot next time before filling the array with data).


    I hope you succeed in reconstructing the array and then take the time to come up with a better concept (RAID without any backup is nothing I would even think about).

  • I would believe it's related to the config not being written to disk

    But if you think it's not a user error but a problem with OMV or the underlying Linux, shouldn't it be explored further?
    Should @CryptoWorX file a bug report?

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • I would believe it's related to the config not being written to disk (so everything is gone after the first reboot -- better test a reboot next time before filling the array with data)

    Yes, that's my opinion too... But why? I created the array and clicked "Save changes". After initializing the array, I created the filesystem, clicked "Save changes", and so on. (How the config is normally made persistent is sketched at the end of this post.)


    I'll try and see what happens when I recover the partitions with Testdisk. It can't get much worse at the moment... ;(
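
    For what it's worth, my understanding is that on a plain Debian system an array definition is made persistent roughly like this (a generic mdadm sketch, assuming the array is assembled as /dev/md0; whether the OMV WebGUI runs exactly these steps is an assumption on my part):

        # append the running array's definition to mdadm.conf
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        # rebuild the initramfs so the array is also known at early boot
        update-initramfs -u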


    • Official post

    Should @CryptoWorX file a bug report?

    No. I've tried for a long time to figure out why people lose their arrays on reboot. I never found a consistent reason (other than consumer hardware, drives that weren't wiped well, USB drives, or running on an RPi). I have used and still use mdadm raid and have never had a problem. I don't think a bug report would help.
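
    For reference, clearing leftover signatures before a disk goes into a new array is usually done with something like the following (destructive, so only on a disk whose contents you no longer need; /dev/sdX is a placeholder):

        # list any old filesystem/RAID signatures still present on the disk
        wipefs /dev/sdX
        # remove all of them (destroys access to any data on that disk!)
        wipefs -a /dev/sdX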


    What do you think of the Testdisk results? Worth a try?

    OMV doesn't use partitions for mdadm raid. If you restored partitions that were left over from a previous use of the drives, it would probably ruin the array. I would boot systemrescuecd on the box to get a newer set of mdadm tools and try to force assemble the array again.
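
    A rough sketch of what such a forced assembly usually looks like (member names taken from the first post; run it from the live system and only after mdadm --examine shows that all five disks still carry matching superblocks):

        # make sure nothing half-assembled is holding the members
        mdadm --stop /dev/md0
        # assemble from the on-disk superblocks, forcing members that look slightly out of date
        mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcde]
        # if that works, check the filesystem read-only before mounting anything
        fsck.ext4 -n /dev/md0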

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.6 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    It can't get much worse at the moment.

    Yes it can....


  • There is a decent man page for mdadm. It has lots of options and some may be of interest - assemble, for example, but there are others. Some googling on specific options might help further.


    https://linux.die.net/man/8/mdadm


    3.4 seems to be the latest stable release.


    I've been using mdadm for mirrored RAID on my PC for some time now - four and a bit years - but only recently updated. Curiously, when I installed the full system update it ignored the fact that I had md0 mounted at /home. I wondered if this was a compatibility problem, but it turned out that one of the disks had been marked as failed. I suspect that was down to a single power failure. I don't check the array that often, so I can't be 100% sure. (A sketch of how I'd check and re-add the member is at the end of this post.)


    :D It seems to me that many Linux filesystems are great at spotting problems, and the better they are at that, the worse they are at avoiding them. I'm inclined to say all of them, actually. Maybe ext2 just gets writes done as quickly as possible.


    Edit:
    Looking at the help built into mdadm, there are lots of options - far more than there used to be. Subtopics are obtained by following an option with --help. It's pretty complicated.
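
    Since I mentioned the failed disk above, here is a rough sketch of how I'd check the mirror and re-add a dropped member (device names are placeholders from my own setup, not this thread's array):

        # overall state of all md devices
        cat /proc/mdstat
        # detailed state of the mirror, including which member is marked failed or removed
        mdadm --detail /dev/md0
        # after ruling out a genuinely bad disk, try to re-add the dropped member
        mdadm /dev/md0 --re-add /dev/sdb1
        # per-mode help, e.g. for assembly options
        mdadm --assemble --help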


    John
    -
