OMV OS Drive Failed, How to recover RAID

  • Hi all, I'm new to this forum.


    I have been running OMV for just over a year. All was great until yesterday, when the OS drive failed.


    After some googling, I disconnected the three RAID 5 drives and did a fresh OMV install onto a new OS disk. I applied the updates and, once I was happy, shut down and reattached the RAID drives.


    When I log in via the GUI I can see the RAID disks, but I can't access them. Please help.
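
    (For reference, a quick way to check from the shell whether the kernel has picked the array back up after reattaching the disks is the sketch below; it assumes the array comes up as /dev/md0, as it does in the output further down.)

    cat /proc/mdstat                # should list md0 with all three members as [UUU]
    mdadm --detail /dev/md0         # shows state, member disks and any resync in progress
    mdadm --assemble --scan         # only needed if the array did not auto-assemble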


  • root@snowdon:/# mdadm --detail /dev/md0
    /dev/md0:
    Version : 1.2
    Creation Time : Tue Apr 19 10:53:32 2016
    Raid Level : raid5
    Array Size : 3906766848 (3725.78 GiB 4000.53 GB)
    Used Dev Size : 1953383424 (1862.89 GiB 2000.26 GB)
    Raid Devices : 3
    Total Devices : 3
    Persistence : Superblock is persistent


    Intent Bitmap : Internal


    Update Time : Tue Apr 19 12:38:53 2016
    State : clean, resyncing
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Resync Status : 3% complete


    Name : snowdon:Nixdy (local to host snowdon)
    UUID : 5400d348:93002dd3:89c85e22:c933904c
    Events : 364


    Number Major Minor RaidDevice State
    0 8 48 0 active sync /dev/sdd
    1 8 32 1 active sync /dev/sdc
    2 8 16 2 active sync /dev/sdb

  • Hi, when I go into the web UI I can open the RAID settings and see the three RAID disks; however, under File Systems I can only see the OS disk.


    Also, when I go into Shares to create a new share, the volume is not displayed.
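
    (Side note: a filesystem only shows up under File Systems if the array device actually carries a filesystem signature. A quick check from the shell, assuming the array is /dev/md0, is the following sketch.)

    blkid /dev/md0       # prints TYPE="ext4" (or similar) if a filesystem signature is present
    file -s /dev/md0     # second opinion on what the start of the device contains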

  • Hi all, the resync has been running all day, but I still can't see the RAID in Shares as a volume. Do I need the resync to finish first? I have uploaded some screenshots of the issue; hopefully these will help.


    root@snowdon:/# sudo cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdd[0] sdb[2] sdc[1]
    3906766848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    [=======>.............] resync = 35.3% (690200064/1953383424) finish=1239.5min speed=16984K/sec
    bitmap: 10/15 pages [40KB], 65536KB chunk


    unused devices: <none>
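
    (The resync can be followed from the shell, and the md speed limits can be raised if it is crawling; a sketch using the stock kernel sysctls:)

    watch -n 60 cat /proc/mdstat                # refresh the progress line every minute
    sysctl dev.raid.speed_limit_min             # current lower bound in KB/s per device
    sysctl -w dev.raid.speed_limit_min=50000    # optionally raise it to speed the resync up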

  • Hi all, the resync has finally finished. I can see the RAID disks, but I can't access them under File Systems or Shared Folders. Please advise. I have added some screenshots. I need to get the data from these disks, as it's all work-related material that I need to access.


    Please can someone let me know what I'm doing wrong? Since the initial disk failure I have replaced the disk, done a fresh OMV install, updated accordingly, then reattached the RAID disks, and this is as far as I can get.



    root@snowdon:/# sudo cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdd[0] sdb[2] sdc[1]
    3906766848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/15 pages [0KB], 65536KB chunk
    unused devices: <none>


    Please note that when I go into Shared Folders it doesn't display the RAID volume.

  • Bumping this thread; if anyone can give any advice here it would be really appreciated.


    I had a hard drive failure on my system. The current setup is three 2 TB Barracuda Green disks in a RAID 5 array, with the OS on a separate disk.


    I installed a new hard disk for the OS and did a fresh install; however, I can't access the RAID volume. I can see the RAID in the GUI but can't access it. Any help here would be really appreciated, as I need the data on these disks.


    root@snowdon:/# sudo cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdd[0] sdb[2] sdc[1]
    3906766848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/15 pages [0KB], 65536KB chunk

  • Thanks Subzero79


    root@snowdon:/# blkid
    /dev/sdb: UUID="5400d348-9300-2dd3-89c8-5e22c933904c" UUID_SUB="3ba9adbf-131c-907a-c765-ced561cac6b3" LABEL="snowdon:Nixdy" TYPE="linux_raid_member"
    /dev/sdd: UUID="5400d348-9300-2dd3-89c8-5e22c933904c" UUID_SUB="a082aebb-868f-a1fb-34a2-4ad00d3433fa" LABEL="snowdon:Nixdy" TYPE="linux_raid_member"
    /dev/sda1: UUID="38f45487-8693-4b71-ad79-eb3d2797380b" TYPE="ext4" PARTUUID="00096f5f-01"
    /dev/sda5: UUID="e853670d-1950-4a3a-bff8-79afe94905b2" TYPE="swap" PARTUUID="00096f5f-05"
    /dev/sdc: UUID="5400d348-9300-2dd3-89c8-5e22c933904c" UUID_SUB="4814b415-3105-bae1-fe1b-6e83251226e6" LABEL="snowdon:Nixdy" TYPE="linux_raid_member"
    /dev/md0: UUID="9261963a-f8e3-8fdc-c858-ef5936b06251" UUID_SUB="90d7d666-3d12-4884-cc83-6e0a86f82099" LABEL="snowdon:Nixdy" TYPE="linux_raid_member"
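
    (Worth noting: blkid reports /dev/md0 itself as TYPE="linux_raid_member" rather than as a filesystem such as ext4, which would normally mean the assembled array starts with another md superblock instead of a filesystem signature. A sketch for looking at what is actually on md0:)

    mdadm --examine /dev/md0    # dumps any md superblock found on the assembled array
    file -s /dev/md0            # shows what the first bytes of the device look like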

  • Afraid not; we just moved house, so the latest backup I have is four months old. The only thing I have done is replace the failed OS disk and then reinstall OMV, so I'm not sure how this could have happened.


    For info:


    root@snowdon:/# mdadm --readwrite /dev/md0
    mdadm: failed to set writable for /dev/md0: Device or resource busy
    root@snowdon:/# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdd[0] sdb[2] sdc[1]
    3906766848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    bitmap: 0/15 pages [0KB], 65536KB chunk


    unused devices: <none>
    root@snowdon:/#
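
    (The state that --readwrite is complaining about can also be read straight from sysfs; a sketch, assuming the array is md0:)

    cat /sys/block/md0/md/array_state    # e.g. clean, active, readonly or read-auto
    cat /sys/block/md0/md/sync_action    # idle means no resync or check is running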

    • Official post

    That output says the array is running fine; it is probably just not mounted. Try mount -a
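
    (As a sketch of that step: mount -a mounts whatever is listed in /etc/fstab and not yet mounted, and if no fstab entry exists at all, a temporary mount point can be used to get at the data, assuming md0 actually carries a filesystem. The /mnt/raid path below is just an arbitrary example, not an OMV default.)

    mount -a                    # mount everything listed in /etc/fstab
    mount | grep md0            # check whether the array's filesystem is now mounted
    mkdir -p /mnt/raid          # temporary mount point (example path only)
    mount /dev/md0 /mnt/raid    # manual mount; fails if md0 carries no filesystem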


    • Official post

    You just want mount -a. What is the output of: mount


  • Output from mount


    root@snowdon:/# mount
    sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=461692,mode=755)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
    tmpfs on /run type tmpfs (rw,nosuid,relatime,size=756560k,mode=755)
    /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
    tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
    rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw,relatime)

    • Official post

    What is the output of: cat /etc/fstab | grep md
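
    (For comparison, a data-filesystem entry in /etc/fstab is normally mounted by UUID and would look roughly like the line below; the UUID and mount point here are placeholders, not values from this system. An empty result from the grep would mean there is no mount entry for the array at all, which would match the filesystem not appearing in the web interface.)

    # illustrative only -- UUID and mount point are placeholders
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/raiddata  ext4  defaults,nofail  0  2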

