RAID does not stay!

  • No matter what I do, the RAID never sticks. I create a RAID 5 using 3 brand-new drives (I have 9 of these new Seagate RED 2TB drives) and I have tried all of them, on the latest OMV build.
    Here is some of what is going on (see the command sketch after the list):


    1. Create a RAID 5 with the 3 new drives > add shared folders > remove shared folders > restart > RAID MISSING (I never touch the file system)
    2. Create a RAID 5 with different new drives > add a share > restart > RAID STILL VIABLE > shut down > start up > RAID MISSING
    3. Create a RAID 5 with yet again unopened drives > add shares > set up CIFS > add shares in CIFS > restart > RAID VIABLE > remove shares in CIFS > RAID MISSING
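
    Each time the array goes missing, a few CLI commands would show whether the kernel actually reassembled it. This is only a minimal sketch; /dev/md0 and the member names sda/sdb/sdc are assumptions, and the array may come back under a different name such as /dev/md127:

        # Show which md arrays the kernel currently has assembled and their sync state
        cat /proc/mdstat

        # Show the arrays mdadm can see, with their UUIDs
        mdadm --detail --scan

        # Check whether the member disks still carry RAID superblocks (device names are placeholders)
        mdadm --examine /dev/sda /dev/sdb /dev/sdc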


    I am a Linux newbie, but out of frustration I have to say that OMV has major issues with RAID. I can't trust it enough to put anything on the array without fearing I'll lose it the minute I do something, good or bad.
    I have a brand-new ASRock server Mini-ITX board. Any help is appreciated. Thanks.

    • Official post

    You don't mention creating a filesystem on the array and mounting it. What version of OMV are you running?
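
    In OMV this is normally done through the web GUI (Storage > File Systems), but for reference, doing it from the CLI would look roughly like the sketch below. The device /dev/md0 and the mount point /srv/raid are assumptions, not something from this thread:

        # Create an ext4 filesystem on the assembled array
        mkfs.ext4 /dev/md0

        # Mount it on a placeholder mount point
        mkdir -p /srv/raid
        mount /dev/md0 /srv/raid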

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    Did you mount it?


    • Official post

    What type of media is OMV installed on?


  • I had a similar thing happen to me - a newly created RAID 1 device did not survive a reboot.


    I built a new OMV system (1.0.20 - x64) with 2 x 3TB disks in RAID 1 just now.
    I created the RAID 1 array, which started OK and began the sync process, all as it should.


    After a reboot during the sync process, the RAID device /dev/md0 turned into /dev/md127, so the previously created array could not start up and continue to sync.
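
    For anyone hitting the same thing, the renamed array can be inspected roughly like this (a sketch only; md127 is whatever name the kernel happened to assign):

        # Confirm the array exists under the new name and note its UUID and members
        mdadm --detail /dev/md127
        cat /proc/mdstat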


    I knew that this can happen if the initramfs is not updated after the initial RAID build.
    I deleted the md127 device and re-created the md0 device. I manually triggered the initramfs update when I noticed an error message pointing to a file in /usr/share/doc/ called mdadm/README.upgrading-2.5.3.gz. The file did not exist in the OMV file system, so after a bit of searching on the net I found it. It describes that the issue is related to this file: /var/lib/mdadm/CONF-UNCHECKED. I located that file in the OMV file system and renamed it.
    Once this file was renamed, the initramfs update worked without complaint and used the /etc/mdadm/mdadm.conf file as it should.
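
    Roughly, the workaround described above as shell commands. A sketch of my steps, not an official procedure; the backup filename is arbitrary, and the mdadm.conf append is only needed if the array is not already listed there:

        # Move the flag file aside (as described in the mdadm README) so the initramfs update uses /etc/mdadm/mdadm.conf
        mv /var/lib/mdadm/CONF-UNCHECKED /var/lib/mdadm/CONF-UNCHECKED.bak

        # If the array is not already listed in /etc/mdadm/mdadm.conf, append its definition
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf

        # Rebuild the initramfs so the array definition is picked up at boot
        update-initramfs -u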
    After a reboot, the /dev/md0 device was now retained, as it should be.
    However, the re-syncing still did not work; the array was sitting there with sync status "Pending".
    A 'mdadm --readwrite /dev/md0' fixed that and started the re-sync.
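
    For completeness, restarting and watching the resync looked roughly like this (same /dev/md0 assumption as above):

        # Switch the array out of read-only/read-auto so the resync can run
        mdadm --readwrite /dev/md0

        # Watch the rebuild progress
        watch cat /proc/mdstat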
    All appears ok for now.


    This appears to be a 'feature' of Debian. The above is a workaround but not really a solution. Perhaps I missed a step when creating the initial RAID 1 via the web GUI?
