After re-creating md0, mounting is not possible

  • Hey there.

    I have been using OMV for many years now and never had any problems. Linux is a closed book to me, so I need some help.

    My OMV contains three software RAIDs: md0, md1, md2. A few days ago I deleted md0 and created a new RAID, which was given the name md0 again. After the RAID was created successfully I wanted to mount the file system (ext4), but I always get a 500 - Internal Server Error (see below). Can somebody help, please?! :saint:


  • Bigwilma

    Added the Label OMV 6.x
  • Did you create a filesystem on the new RAID? It can't mount if there is no filesystem.
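
    A quick way to check from the shell, assuming the new array really is /dev/md0:

        blkid /dev/md0      # prints the filesystem type and UUID, or nothing if the array is empty
        lsblk -f /dev/md0   # same information in table form

    If nothing shows up, create a filesystem first (in the web UI, or manually with something like mkfs.ext4 /dev/md0) and then mount it.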

    Asrock B450M, AMD 5600G, 64GB RAM, 6 x 4TB RAID 5 array, 2 x 10TB RAID 1 array, 100GB SSD for OS, 1TB SSD for docker and VMs, 1TB external SSD for fsarchiver OS and docker data daily backups

  • I'm having a similar problem, but my error is: 500 - Internal Server Error: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color fstab 2>&1' with exit code '100': ERROR: The state 'fstab' does not exist
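
    In case it helps: on a stock OMV 6 install the deploy states should live under /srv/salt/omv/deploy/ (path assumed here), so a quick check and a common fix would be:

        ls /srv/salt/omv/deploy/ | grep -i fstab       # the fstab state should show up here
        apt-get install --reinstall openmediavault     # reinstalling usually restores missing salt states
        omv-salt deploy run fstab                      # then retry the deployment that failed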

  • I would recommend checking the SMART status of the drives. If they check out OK, do a secure wipe on them and then recreate the RAID and filesystem.
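
    Assuming smartmontools is installed and the drives are /dev/sda, /dev/sdb, ... something like this shows the details:

        smartctl -H /dev/sda    # overall health self-assessment (PASSED/FAILED)
        smartctl -a /dev/sda    # full attribute list; watch reallocated and pending sector counts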


  • SMART status of all drives is 'good' (green).

    Since the creation takes >50 hours, I wanted to avoid this. But I guess I have no other choice.
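
    At least the sync progress can be watched, and the minimum rebuild speed can be raised a bit (the value below is only an example):

        cat /proc/mdstat                            # shows sync progress and estimated finish time per array
        sysctl dev.raid.speed_limit_min             # current minimum sync speed in KB/s
        sysctl -w dev.raid.speed_limit_min=100000   # example: allow at least ~100 MB/s if the disks keep up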


    I know, building an mdadm RAID in anything other than a RAID 1 or 10 is painful. There are faster ones to build, BTRFS and ZFS, but I have never used them in OMV, so I can't tell you the exact process. If I recall correctly though, BTRFS is not reliable for RAID 5 or 6 style setups.
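
    Outside of the OMV GUI, the plain command-line version of a BTRFS mirror would be roughly this (disk names and the mount point are placeholders, and this is RAID 1, not 5/6):

        mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY   # data and metadata mirrored across both disks, no hours-long initial sync
        mount /dev/sdX /srv/mydata                       # mounting either member device brings up the whole btrfs volume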


  • Bigwilma

    Added the Label resolved
  • You might want to read the man page of wipefs.

    It could have saved you the many hours that the full wipe and the array sync will take.

    I used it once to remove some (very) old ZFS signatures from my disks in an mdadm RAID-5 array.
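
    For example (sdX is a placeholder for a member disk; double-check the device name before erasing anything):

        wipefs /dev/sdX      # list the filesystem/RAID signatures found on the disk without touching them
        wipefs -a /dev/sdX   # erase all of those signatures - seconds instead of the hours a full wipe takes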

    • Official Post


    Agree 100%. I almost never do a full wipe. I use wipefs and sometimes will write zeros to the first couple hundred MB of the disk.
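
    The zero-writing part is plain dd, something like this (sdX is again a placeholder):

        dd if=/dev/zero of=/dev/sdX bs=1M count=200 status=progress   # overwrite the first 200 MB, which clears the partition table and most leftover metadata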

    omv 8.1.1-1 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.0.7 | compose 8.1.5 | cterm 8.0 | borgbackup 8.1.7 | cputemp 8.0 | mergerfs 8.0 | scripts 8.0.1 | writecache 8.1.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
