Problem with RAID not seen on installation

  • Hi all,


    I am reinstalling OMV (latest version) onto an existing machine that was running an older version of OMV (version 5 or 6, not sure).


    It is an HP Microserver Gen 7 which boots off a 2.5 inch SSD mounted in the ODD bay and has 2 x 2TB SATA hard drives in a RAID 1 in the first two caddy slots.


    The machine was running fine but suffered a failure of the SSD and (obviously) stopped functioning, so I inserted a new SSD and installed OMV 7 to it.


    Initially, I left the RAID drives out for safety until I had the installation complete and could prove the system functionality. Once that was done, I shut the machine down, inserted the RAID drives and rebooted.


    After the system came up, the RAID did not appear in the dashboard. However, using a directly attached keyboard and screen I could see the RAID was there. I could manually mount it to /mnt and the contents were visible, so the RAID was not corrupted.


    I thought it might be because the RAID was not detected at installation time, so I did the installation again, but with the RAID disks installed. During the partitioning selection process it offered a RAID1 array as a possible target, so the detection was successful. However, after the initial boot, the RAID management option was still missing from the dashboard.


    I researched this issue in the forum with no success, so I am embedding some printouts which seemed relevant in other cases.


    BR


    Mick


    Code
    root@odysseus:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb1[1] sdc1[0]
          1953382400 blocks super 1.2 [2/2] [UU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    unused devices: <none>


    Code
    root@odysseus:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/0 level=raid1 num-devices=2 metadata=1.2 name=odysseus.internal.net:0 UUID=270a3820:d7386050:f7eaaaa7:d41c577a
       devices=/dev/sdb1,/dev/sdc1


    The following mdadm.conf is unmodified from the fresh install.
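    If the fresh install's mdadm.conf does not contain an ARRAY line for the existing array, a common fix is to append the definition reported by the scan above and rebuild the initramfs so the array assembles at boot. This is a sketch using the Debian default paths; check the file for duplicate ARRAY lines before appending:

    ```shell
    # Append the detected array definition to mdadm.conf (Debian default path)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

    # Rebuild the initramfs so the array is assembled early at boot
    update-initramfs -u
    ```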


    Edited once, last by mick_d: Array was auto-read-only, but cleared when manually mounted with "mount /dev/md0 /mnt".
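    For reference, an auto-read-only array can also be switched to read-write directly with standard mdadm usage, without mounting it (using the /dev/md0 device from the output above):

    ```shell
    # Clear the auto-read-only state on the assembled array
    mdadm --readwrite /dev/md0

    # Confirm: the "(auto-read-only)" flag should no longer appear
    cat /proc/mdstat
    ```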

  • macom

    Approved the thread.
    • Official Post

    the RAID management option was still missing from the dashboard.

    You need to install the openmediavault-md plugin.


    However, you should be able to mount the filesystem on the raid from the filesystem tab.
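    On OMV 7 the plugin can also be installed from the shell rather than the web UI's Plugins page (this assumes the standard OMV apt repositories are configured, as on a fresh install):

    ```shell
    # Refresh package lists, then install the md/RAID management plugin
    apt-get update
    apt-get install openmediavault-md
    ```

    After installing, refresh the browser (ctrl-shift-R) so the new Storage entry appears.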

  • Hi Macom,


    OK, that plugin has added a "Multiple Devices" option to "Storage" and I was able to add the Filesystem.


    I must admit it was a bit counter-intuitive to have to add the plugin. I don't seem to recall doing that before, but hey-ho, it's working now.


    Thanks for the prompt info.


    BR


    Mick

  • mick_d

    Added the Label resolved
    • Official Post

    I must admit it was a bit counter-intuitive to have to add the plugin. I don't seem to recall doing that before, but hey-ho, it's working now.

    md raid was separated into a plugin for OMV 7.x because most people don't (and shouldn't) use raid.

    omv 7.4.10-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.14 | compose 7.2.14 | k8s 7.3.1-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.9


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official Post

    I thought it was as much to do with Volker's preference for BTRFS raid as anything else

    Maybe but I have been pushing to have md, ftp, and tftp moved to plugins for a very long time. md was just finally done.


    Please share with me the ideal setup for multiple disks, since the md plugin appears to be installed on my fresh 7.4.5-1 (Sandworm) over Linux kernel 6.1.57 and I have no RAID option in the Storage choices.

    CM3588 NAS linux 6.1.57

    aarch64 GNU/Linux on eMMC

    OMV 7.4.5-1

    4 x 4TB TG MP44 PCIe

    • Official Post

    Please share with me the ideal setup for multiple disks, since the md plugin appears to be installed on my fresh 7.4.5-1 (Sandworm) over Linux kernel 6.1.57 and I have no RAID option in the Storage choices.

    That probably means that the hard drives are connected via USB ports.

    Share with us what your needs are to use a Raid and what your hardware is. Without that information it is difficult to give advice.

    Thanks, I understand. To clarify:

    The CM3588 is a NAS compute board with 4 x 4TB onboard flash drives; the OS install was moved from SD card to the internal eMMC storage.

    dm


    • Official Post

    cm3588 nas compute with 4 4Tb onboard flash drives

    OK, in that case the drives should be available to create any file system, including a Raid.

    If you really want a Raid you have several options. You can create a mdadm Raid in the Storage>md tab. You can create a BTRFS Raid directly in the Storage>File systems tab. You can create a ZFS Raid by installing the openmediavault-zfs plugin.

    Why do you think you need a Raid?

  • Chente

    I do not need to have raid at all. The best practice for this datastore will perhaps rely on a recommendation from you.

    Thanks for your help


    ZFS - a little issue with that: /sbin/modprobe zfs returns "Module zfs not found in directory /lib/modules/6.1.57"


    • Official Post

    ZFS - a little issue with that: /sbin/modprobe zfs returns "Module zfs not found in directory /lib/modules/6.1.57"

    If you don't need raid, why are you using zfs?


    You have to build the zfs module. That requires installing kernel headers and then the zfs-dkms package.
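    A sketch of those steps on a Debian-based OMV system. Note the assumption here: the headers package must match the running kernel, and on this board's vendor 6.1.57 kernel a matching `linux-headers-$(uname -r)` package may not exist in the repositories, which is worth verifying first:

    ```shell
    # Install headers for the running kernel, then build the ZFS module via DKMS
    apt-get install linux-headers-$(uname -r) zfs-dkms

    # Load the freshly built module
    modprobe zfs
    ```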


    • Official Post

    I do not need to have raid at all. The best practice for this datastore will perhaps rely on a recommendation from you.

    If you do not have any backup, I think the most prudent thing would be to make a pool with two 4TB drives and another pool with the other two 4TB drives. This way you can configure an rsync task that regularly runs a backup between the two pools.

    If you have a backup somewhere else you already have the most important thing, so the simplest thing would be to make a pool with the four 4TB hard drives.
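    As a sketch of the rsync backup between the two pools (the /srv paths here are hypothetical placeholders for wherever OMV mounts the pools; in the web UI this would be configured as a scheduled job under Services > Rsync):

    ```shell
    # Mirror the first pool onto the second.
    # -a preserves permissions, times and symlinks; --delete removes files
    # from the destination that no longer exist in the source, keeping an
    # exact copy. The trailing slashes copy the *contents* of pool1.
    rsync -a --delete /srv/pool1/ /srv/pool2/
    ```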

    The openmediavault-mergerfs plugin will provide you with what you need to make a pool. https://wiki.omv-extras.org/do…mv6:omv6_plugins:mergerfs Note: Although this document was written for OMV6 it is still valid for OMV7.

    Since I see that you have no reason to configure a Raid, I wouldn't consider other things.
