Can't create RAID / Filesystem - Disks are visible in OMV

  • Hi all,


    I just switched from FreeNAS to OMV and until now everything went very smoothly.
    I now wanted to create a RAID 5 array with my 3 x 4 TB WD Reds, which I used before in the FreeNAS installation.


    The disks are visible in OMV -> Storage -> Disks, but I cannot choose them in the create RAID dialogue in order
    to create the array. My first thought was that OMV is somehow struggling with the ZFS filesystem on the disks, so
    I deleted all partitions with parted, but I am still not able to create the array.


    lsblk shows the devices and I can also find them under /dev/.


    Any suggestions?


    Regards
    Dennis
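
    A note on why parted alone may not have helped: ZFS writes four label copies per device, two at the start and two at the end, and deleting the partition table leaves them in place, so blkid (and therefore OMV) can still see a zfs_member signature. Below is a minimal sketch of clearing both ends of a disk; the device name /dev/sdb is an assumption, so double-check with lsblk before running anything like this.

```shell
#!/bin/sh
# Zero the first and last 4 MiB of a target, which removes leftover
# signatures such as the ZFS labels (ZFS keeps two label copies at the
# start of the device and two at the end, so clearing only the front
# is not enough). This destroys data on the target -- the device name
# below is an assumption, verify with lsblk first!
wipe_ends() {
    target=$1
    # block devices report their size via blockdev; fall back to stat
    # so the function also works on regular files
    size=$(blockdev --getsize64 "$target" 2>/dev/null || stat -c %s "$target")
    dd if=/dev/zero of="$target" bs=1M count=4 conv=notrunc status=none
    dd if=/dev/zero of="$target" bs=1M count=4 conv=notrunc status=none \
        seek=$(( size / 1048576 - 4 ))
}

# wipe_ends /dev/sdb    # then retry the create-RAID dialogue
```

    On a recent util-linux, `wipefs -a /dev/sdb` should achieve much the same in one step.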

  • Thanks for that.


    I just downgraded the kernel to 4.9.0-5 and I am still not able to see the disks in the create RAID dialogue.
    I'm curious whether it really is an issue with my drives, as I have not yet heard or read about issues with kernel 4.13 and above with the WD40EFRX, only with the 2 and 3 TB versions. The 4 TB version is quite common, so if this were an issue with the disks, I think many more people would be affected.


    I have a maybe stupid question: do I have to mount the devices to be able to create the RAID?
    I am no Linux expert, but it seems to me that the drives are not mounted yet.
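
    On the mounting question: no, disks going into an mdadm array must not be mounted at all. The array is assembled from raw block devices; a filesystem is then created on the resulting md device (e.g. /dev/md0) and only that gets mounted. A small sketch for confirming a disk has no mounted partitions, with the device name as an assumption:

```shell
#!/bin/sh
# Print device names that lsblk reports with no mountpoint.
# Expects input in the form produced by `lsblk -nro NAME,MOUNTPOINT`:
# one line per device, with the mountpoint as a second field if mounted.
unmounted() {
    awk 'NF == 1 { print $1 }'
}

# Typical use (sdb is an assumed device name):
#   lsblk -nro NAME,MOUNTPOINT /dev/sdb | unmounted
```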

  • As I had some other WebUI issues with 4.x, I have freshly installed 3.0.94. Now everything works fine and the disks are shown as candidates.
    I think I will wait until 4.x is stable.

  • If the devices are not listed in the RAID creation dialog, then they are not clean and OMV does not show them as candidates for a RAID. Use the 'Wipe' command under 'Storage | Disks'.

    I had the same problem; the solution was to wipe the disks.


    thx

  • On my newly set up RAID 5 with 3 x 1 TB I had the same issue.


    Only searching the net and the forum revealed why I couldn't re-create the RAID 5 from the 3 disks.
    I was testing what happens if one disk fails (I disconnected its power):
    - I got notified, but the notification in OMV should be much more drastic and clear:
    Why does OMV first tell me that it created an automatically generated message? Nobody is interested in that; it should directly tell me that there is an issue with one of the disks.


    Potentially with some more formatting to highlight this more clearly...
    As it is, the message just gets lost among all the other status messages from OMV!


    - The disk was reconnected with no data on it (the RAID 5 array also held no data and no filesystem yet).
    The array was telling me it is degraded and which disks it contains. Why is it NOT telling me in the WebUI WHICH disk had failed?! If it consisted of, say, sda, sdb and sde, and now only sda and sdb are present, it is clear to me that sde is missing, but I have to work this out myself, as there is no message about which disk was previously in the array.


    This makes re-adding a new disk, or the old one, to the RAID 5 a gambling game:
    You need to go to the disk menu and wipe a disk first (in this case sde) to make it a candidate for the RAID again.
    With multiple disks in the system, not all of them in the RAID 5, if you do NOT know exactly which disk you need to wipe, this is a huge source of data loss and errors.


    It should be possible to indicate more clearly which disk had previously been in the array, if wiping it is the only way to re-add it.


    All this is the case on OMV 4.1.6 Arrakis with Linux kernel 4.9.0-0.bpo.6-amd64.
    Is this different in OMV 3!?
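
    For what it's worth, the missing member can still be identified on the command line even when the WebUI stays silent: `mdadm --detail` on the array prints one row per slot, and a failed or absent member shows up with the state `removed`. A sketch that pulls out those slot numbers; the array name and the sample output used below are illustrative assumptions:

```shell
#!/bin/sh
# Print the RaidDevice slot numbers that `mdadm --detail` reports as
# "removed". The member table starts after the header line ending in
# "RaidDevice State"; in a removed row the 4th field is the slot number.
missing_slots() {
    awk '/RaidDevice State/ { t = 1; next }
         t && /removed/     { print $4 }'
}

# Typical use (array name is an assumption):
#   mdadm --detail /dev/md0 | missing_slots
```

    Comparing the remaining member devices in that same table against lsblk then narrows down which physical disk has to be wiped before re-adding it.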
