Having trouble reconstructing RAID 5 array after reinstall of OMV

  • Hello all,


I replaced my OMV mobo because one of its SATA ports failed. After installing the new mobo, I installed OMV 2.1, shut down, hooked up the drives, and rebooted; the system is running fine. I can 'see' my RAID 5 array in the web administrator, but I'm unsure how to reassemble/reconstruct it.


    The Raid Management Detail shows:
    Version : 1.2
    Creation Time : Fri Dec 11 14:38:57 2015
    Raid Level : raid5
    Array Size : 15627548672 (14903.59 GiB 16002.61 GB)
    Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
    Raid Devices : 5
    Total Devices : 5
    Persistence : Superblock is persistent


    Update Time : Sat Dec 31 02:03:00 2016
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Name : NAS:Raid (local to host NAS)
    UUID : b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
    Events : 7510


    Number Major Minor RaidDevice State
    0 8 48 0 active sync /dev/sdd
    1 8 32 1 active sync /dev/sdc
    2 8 80 2 active sync /dev/sdf
    3 8 64 3 active sync /dev/sde
    4 8 16 4 active sync /dev/sdb


    blkid:


    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sda1: UUID="2765d62b-01e5-48bc-935c-eecb2352dd56" TYPE="ext4"
    /dev/sda5: UUID="006da519-dbf9-41e2-8774-6192670c8b9f" TYPE="swap"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="b068620f-28aa-8ff4-8312-d3f7a79921db" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/md127: LABEL="share" UUID="a0a9808b-f7e5-48fe-9d41-c8c0ff053887" TYPE="ext4"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"


    mdstat:


    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdd[0] sdb[4] sde[3] sdf[2] sdc[1]
    15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]


    It appears that the array is fine. However, since I am a Linux novice, I'm unsure how to rebuild this array from the CLI. I would appreciate any help from the community so I can avoid making any serious mistakes. Thanks in advance for your help.

  • @1dx


    I'm not clear on what exactly your problem is.
    The mdadm output for your RAID says: State : clean.


    Raid is clean and can be mounted.


    That means you only have to add the shares:
    Filesystems -> Filesystem -> ADD.
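
    The same information can be read off the blkid line quoted above: the array device already carries an ext4 filesystem, so nothing needs rebuilding. A minimal sketch of deriving a mount command from that line (the mount point /srv/share is a hypothetical example; OMV chooses its own mount point when you add the filesystem through the web UI):

```shell
# blkid line from this thread: the md device already has an ext4 filesystem on it.
blkid_line='/dev/md127: LABEL="share" UUID="a0a9808b-f7e5-48fe-9d41-c8c0ff053887" TYPE="ext4"'

# Pull out the device node and filesystem label.
dev=${blkid_line%%:*}
label=$(printf '%s\n' "$blkid_line" | sed 's/.*LABEL="\([^"]*\)".*/\1/')

# /srv/$label is only an illustrative mount point; on OMV, prefer adding the
# filesystem via the web UI so it lands in the configuration database.
echo "mount $dev /srv/$label"
```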


    BR Robert

    OMV 5.x always up to date.
    Modded dell t20 into 19" rack case with Pearl LCD Display (Status Display!)

    xeon e3-1225v3 / 32GB RAM / 1x500GB WD Blue SSD (OS) / 1x250 SSD (not used) / 1x1 TB Toshiba HHD (MultiDisk) / 4x 4TB WD40EFRX (Raid5)

    • Official post

    md127 : active (auto-read-only) raid5 sdd[0] sdb[4] sde[3] sdf[2] sdc[1]

    The array is in read-only mode for some reason. Run: mdadm --readwrite /dev/md127
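
    auto-read-only is a normal state for a freshly assembled array that has not been written to since boot; the first write, or the command above, flips it back to read-write. A small sketch of checking for the flag before acting, using the mdstat line quoted in this thread (on the live system, read /proc/mdstat directly):

```shell
# mdstat line as reported above; on a live system use: cat /proc/mdstat
line='md127 : active (auto-read-only) raid5 sdd[0] sdb[4] sde[3] sdf[2] sdc[1]'

# Only intervene if the array is actually stuck in auto-read-only mode.
if printf '%s\n' "$line" | grep -q 'auto-read-only'; then
    echo 'md127 is auto-read-only'
    # On the real box, as root:
    #   mdadm --readwrite /dev/md127
fi
```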

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
