Posts by 1dx

    Swapped out the faulty drive and md127 just finished recovering. I then mounted the drive under File Systems (it wasn't mounted before) and ran omv-mkconf for fstab and mdadm. How do these look?
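
    For anyone following along, this is roughly what I ran (on OMV 4.x; the exact config-generation commands may differ on other versions):

    Code
    # regenerate /etc/fstab and /etc/mdadm/mdadm.conf from the OMV database
    omv-mkconf fstab
    omv-mkconf mdadm

    # then check the results
    cat /etc/fstab
    cat /etc/mdadm/mdadm.conf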


    Now it looks like I will have to set up file systems.

    Quote

    There's no reference to the raid in either, so run omv-mkconf fstab, then the same again but for mdadm

    Okay, I ran omv-mkconf fstab, and below are the resulting fstab and mdadm.conf:

    Check your fstab and mdadm entries


    Update: The RAID is showing up in OMV RAID Management, showing clean with all 5 drives listed.


    SMART shows that sdb is red due to the Reallocated Sector Count. I'll have to replace that drive today. I wonder if this is why it's saying "not enough to start the array"?
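
    In case it's useful to anyone, this is how I'm checking the Reallocated Sector Count before swapping the drive (smartctl from smartmontools, attribute 5):

    Code
    # full SMART attribute table for the suspect drive
    smartctl -A /dev/sdb

    # or just the reallocated sector count (attribute 5)
    smartctl -A /dev/sdb | grep -i reallocated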

    Stopped the array:


    mdadm --stop /dev/md127
    mdadm: stopped /dev/md127

    Then tried to assemble:

    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 4.
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sde is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/sdf is identified as a member of /dev/md127, slot 3.
    mdadm: forcing event count in /dev/sdf(3) from 14035 upto 14059
    mdadm: forcing event count in /dev/sdc(2) from 13901 upto 14059
    mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdc
    mdadm: clearing FAULTY flag for device 4 in /dev/md127 for /dev/sdf
    mdadm: Marking array /dev/md127 as 'clean'
    mdadm: added /dev/sdd to /dev/md127 as 1
    mdadm: added /dev/sdc to /dev/md127 as 2
    mdadm: added /dev/sdf to /dev/md127 as 3
    mdadm: added /dev/sdb to /dev/md127 as 4
    mdadm: added /dev/sde to /dev/md127 as 0
    mdadm: /dev/md127 assembled from 5 drives - not enough to start the array.


    The array has 5 drives, which it should, but it's saying that's not enough to start.
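
    For reference, the per-drive event counts and states can be compared with mdadm --examine before forcing an assemble; something along these lines (output omitted here):

    Code
    # inspect the md superblock on each member disk
    mdadm --examine /dev/sd[bcdef]

    # just the state and event counter lines
    mdadm --examine /dev/sd[bcdef] | egrep 'Events|State|/dev/sd'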

    You'll need to assemble it: mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]


    Thank you geaves for the reply.


    Just tried to assemble it:
    Code
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping
    mdadm: /dev/sde is busy - skipping
    mdadm: /dev/sdf is busy - skipping


    Not sure what that means.
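
    From what I can tell, "busy - skipping" just means the disks are still claimed by the half-assembled (inactive) md127, so my understanding is the array has to be stopped before assembling again, roughly:

    Code
    # release the member disks from the inactive array first
    mdadm --stop /dev/md127

    # then retry the assemble
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]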

    Recently did a clean install of OMV 4.1.22 (from 3.0.99). Used a new system SSD and disconnected my RAID 5 array. Shut down after the initial install and reconnected the RAID. Rebooted, checked the OMV web interface, and the RAID was not listed. All the disks were listed under Disks, but nothing appeared in RAID Management.


    I am looking for help getting the RAID back up. Information below:


    cat /proc/mdstat

    Code
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
    19534437560 blocks super 1.2
    unused devices: <none>

    blkid

    Code
    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sda1: UUID="8c781b3b-33bd-42bb-ba23-0cdc03a68fdc" TYPE="ext4" PARTUUID="d73019f2-01"
    /dev/sda5: UUID="98699062-8fa7-403b-b0f0-6a4226840926" TYPE="swap" PARTUUID="d73019f2-05"


    fdisk -l | grep "Disk "


    Code
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
    Disk identifier: 0xd73019f2


    cat /etc/mdadm/mdadm.conf





    mdadm --detail --scan --verbose




    Code
    INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf


    cat /etc/fstab





    Drive type: all 5 drives are 4 TB SATA disks (4 HGST, 1 WD Red)



    Stopped working after clean install of OMV 4.1.22

    Hello,


    With the help of KMO201, I was able to get closer to restoring my RAID array (see the info in the post directly above). However, being a noob, I'm unsure of the steps needed to restore my RAID. I would very much appreciate some help in determining what to do next.


    Thanks.

    Installed OMV 4.1.22, then shut down, connected the RAID drives, and rebooted. Looks promising, see below.
    Note that mdadm --detail --scan now shows INACTIVE-ARRAY /dev/md127. However, the array does not show up in the RAID Management GUI.
    Hopefully all it needs is a mount or something? If so, how do I go about it?
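
    My rough plan, unless someone advises otherwise, is to stop the inactive array, force-assemble it from the five members, and then regenerate the OMV mdadm config so the GUI picks it up; something like:

    Code
    # stop the inactive md127 so the member disks are released
    mdadm --stop /dev/md127

    # force-assemble from the five member disks
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcdef]

    # check the result
    cat /proc/mdstat

    # regenerate the OMV mdadm config so the array is known to the GUI
    omv-mkconf mdadm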


    Thanks!


    Disk DATA:


    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdb[4](S) sdf[3](S) sdc[2](S) sde[0](S) sdd[5](S)
    19534437560 blocks super 1.2
    unused devices: <none>
    _________________
    blkid
    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sda1: UUID="8c781b3b-33bd-42bb-ba23-0cdc03a68fdc" TYPE="ext4" PARTUUID="d73019f2-01"
    /dev/sda5: UUID="98699062-8fa7-403b-b0f0-6a4226840926" TYPE="swap" PARTUUID="d73019f2-05"
    _________________
    root@NAS:~# fdisk -l | grep "Disk "
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
    Disk identifier: 0xd73019f2
    _________________
    cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    # definitions of existing MD arrays
    _________________
    mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
    _________________

    Hello,


    About two months ago, one of the five 4 TB drives in my RAID array failed. I replaced the drive, the array rebuilt, and everything worked fine. A week or so later, I started getting intermittent 'Spares missing' errors. This was odd, but the errors went away before I could figure out what was causing them. A few weeks later, I started having trouble with the OMV web GUI. Every time I tried to change any parameter, I would get the message "The configuration has been changed. You must apply the changes in order for them to take effect", hit the Apply button, and get an error:


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; omv-mkconf postfix 2>&1' with exit code '127': /usr/share/openmediavault/mkconf/postfix: 57: /etc/default/openmediavault: 4c8bc97f-1a07-4e3b-ba6b-7f3aea225fd1: not found


    I also lost the ability to do updates from the GUI and the ability to SSH in. The RAID was still working, so I backed up the data and the system, disconnected the RAID disks, and did a reinstall of OMV. After the reinstall, the RAID array didn't show up in the GUI. All the disks are there, but there is no option to mount. I'm at a loss as to what to do next. I'd really like to get the RAID mounted rather than restore the data (which takes a long time). Hope someone has an idea for restoring the RAID.
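
    My current (novice) plan, unless someone advises otherwise, is to first confirm that the superblocks are still on the disks and then let mdadm try to assemble from them:

    Code
    # check that all five disks still carry a RAID superblock
    mdadm --examine /dev/sd[bcdef]

    # let mdadm try to assemble any arrays it finds
    mdadm --assemble --scan --verbose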


    Disk DATA:
    _______________________
    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>
    _______________________
    blkid
    /dev/sda1: UUID="df5470bd-d363-48c2-89a4-77b5f082db99" TYPE="ext4" PARTUUID="d2f4adb0-01"
    /dev/sda5: UUID="e7683669-89eb-4952-89bb-2a7e76aa7e63" TYPE="swap" PARTUUID="d2f4adb0-05"
    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    _______________________
    fdisk -l | grep "Disk "
    Disk /dev/sda: 59.6 GiB, 64023257088 bytes, 125045424 sectors
    Disk identifier: 0xd2f4adb0
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    _______________________
    cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    # definitions of existing MD arrays
    _______________________
    mdadm --detail --scan --verbose
    (no output for above)
    _______________________


    * All disks are 4 TB SATA: 4 HGST and 1 WD Red
    * RAID stopped working after the reinstall of OMV


    Thanks!



    Recently started to get errors when trying to make changes in the GUI. When I click Apply, I get the following error message:


    Code
    ERROR:
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; omv-mkconf halt 2>&1' with exit code '127': /usr/share/openmediavault/mkconf/halt: 57: /etc/default/openmediavault: 4c8bc97f-1a07-4e3b-ba6b-7f3aea225fd1: not found


    I should also mention that at the same time I'm not able to update from Update Management in the GUI. Example below:


    I would greatly appreciate it if anyone could point me in the right direction to find a solution. I would like to upgrade to OMV 4, but I'm thinking I should resolve this problem first or risk losing data on the RAID array.
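
    From the error text, it looks to me as if /etc/default/openmediavault (which the mkconf scripts source as a shell file) has picked up a stray line containing that UUID, and the shell is trying to run it as a command. I plan to look at the file before changing anything:

    Code
    # the mkconf scripts source this file, so a stray line in it breaks them
    cat /etc/default/openmediavault

    # show any line containing the UUID from the error message
    grep -n 4c8bc97f /etc/default/openmediavault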

    Running OMV 3.0.99 (Erasmus). I had a disk fail in my RAID 5 array and got this email message:


    This message was generated by the smartd daemon running on:


    host name:nas


    DNS domain: OMV


    The following warning/error was logged by the smartd daemon:


    Device: /dev/disk/by-id/ata-HGST_HDN726040ALE614_K7H5PPDL [SAT], ATA error count increased from 0 to 8


    Device info:


    HGST HDN726040ALE614, S/N:K7H5PPDL, WWN:5-000cca-269d0aec8, FW:APGNW7JH, 4.00 TB


    For details see host's SYSLOG.


    You can also use the smartctl utility for further investigation.
    Another message will be sent in 24 hours if the problem persists


    I checked it out and /dev/sdc had been automatically removed due to SMART read errors, and the array was degraded. I purchased a new drive, replaced /dev/sdc, and rebuilt the array. Everything is working fine with no data loss.
    However, I am now getting emails about a SparesMissing event:




    This is an automatically generated mail message from mdadm running on nas


    A SparesMissing event had been detected on md device /dev/md127.


    Faithfully yours, etc.


    P.S. The /proc/mdstat file currently contains the following:


    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdd[0] sdb[4] sde[3] sdf[2] sdc[5]
    15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]


    unused devices: <none>


    The rebuilt array shows clean and is no longer degraded, but I still get the SparesMissing messages. Any ideas?
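
    From what I've read, the SparesMissing mail usually comes from the ARRAY line in /etc/mdadm/mdadm.conf still listing spares=1 from before the rebuild, while the array itself no longer has a spare. Assuming that's the cause here, I plan to check and regenerate the config like this:

    Code
    # look for a leftover spares= entry in the mdadm config
    grep ARRAY /etc/mdadm/mdadm.conf

    # compare with what the running array actually reports
    mdadm --detail /dev/md127

    # if the config is stale, regenerate it the OMV way
    omv-mkconf mdadm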



    Raid information:


    Code
    root@nas:~# blkid
    /dev/sda1: UUID="194f80e2-eb10-4eff-a8b3-3bf634107a24" TYPE="ext4" PARTUUID="4266dfff-01"
    /dev/sda5: UUID="fb11c777-c87a-4d07-a440-9277d1f08864" TYPE="swap" PARTUUID="4266dfff-05"
    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="814b0718-a3a2-6fb0-cc2b-8bf41cafd897" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/md127: LABEL="share" UUID="a0a9808b-f7e5-48fe-9d41-c8c0ff053887" TYPE="ext4"





    Code
    root@nas:~# fdisk -l | grep "Disk "
    Disk /dev/sda: 59.6 GiB, 64023257088 bytes, 125045424 sectors
    Disk identifier: 0x4266dfff
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/md127: 14.6 TiB, 16002609840128 bytes, 31255097344 sectors


    Code
    root@nas:~# mdadm --detail --scan --verbose
    ARRAY /dev/md127 level=raid5 num-devices=5 metadata=1.2 name=NAS:Raid UUID=b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf



    Code
    Disks:
    /dev/sdb HGST HDN724040AL ATA 4TB
    /dev/sdc WDC WD40EFRX-68N ATA 4TB (New)
    /dev/sdd HGST HDN724040AL ATA 4TB
    /dev/sde HGST HDN724040AL ATA 4TB
    /dev/sdf HGST HDN724040AL ATA 4TB


    Hello all,


    I replaced my OMV mobo because one of its SATA ports failed. Replaced the mobo, installed OMV 2.1, shut down and hooked up the drives, then rebooted, and it is running fine. I can 'see' my RAID 5 array in the web administrator, but I'm unsure how to reassemble/reconstruct it.


    The Raid Management Detail shows:
    Version : 1.2
    Creation Time : Fri Dec 11 14:38:57 2015
    Raid Level : raid5
    Array Size : 15627548672 (14903.59 GiB 16002.61 GB)
    Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
    Raid Devices : 5
    Total Devices : 5
    Persistence : Superblock is persistent


    Update Time : Sat Dec 31 02:03:00 2016
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Name : NAS:Raid (local to host NAS)
    UUID : b7aa5a79:f83a5d47:c0d8cffb:ee2411bf
    Events : 7510


    Number Major Minor RaidDevice State
    0 8 48 0 active sync /dev/sdd
    1 8 32 1 active sync /dev/sdc
    2 8 80 2 active sync /dev/sdf
    3 8 64 3 active sync /dev/sde
    4 8 16 4 active sync /dev/sdb


    blkid:


    /dev/sdb: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="1a5448ba-49ad-a4c8-36e2-d331c9fa7f63" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sda1: UUID="2765d62b-01e5-48bc-935c-eecb2352dd56" TYPE="ext4"
    /dev/sda5: UUID="006da519-dbf9-41e2-8774-6192670c8b9f" TYPE="swap"
    /dev/sdc: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="b068620f-28aa-8ff4-8312-d3f7a79921db" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sdd: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="ddeb999e-4fa6-8484-7036-afb8c538ef20" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/sde: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="3d0dcbbf-b778-1498-6cdd-93e235f2ce6f" LABEL="NAS:Raid" TYPE="linux_raid_member"
    /dev/md127: LABEL="share" UUID="a0a9808b-f7e5-48fe-9d41-c8c0ff053887" TYPE="ext4"
    /dev/sdf: UUID="b7aa5a79-f83a-5d47-c0d8-cffbee2411bf" UUID_SUB="6441fac5-9e4d-7208-9085-539c804df216" LABEL="NAS:Raid" TYPE="linux_raid_member"


    mdstat:


    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdd[0] sdb[4] sde[3] sdf[2] sdc[1]
    15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]


    It appears that the array is fine. However, since I am a Linux novice, I'm unsure how to rebuild this array from the CLI. I would appreciate any help from the community so I can avoid making any serious mistakes. Thanks in advance for your help.
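
    Based on the mdstat output above, my (possibly wrong) understanding is that the array is already assembled and just sitting in auto-read-only mode until the first write, so nothing actually needs rebuilding. If so, something like this, plus mounting the ext4 filesystem from the File Systems page, should be all that's left:

    Code
    # clear the auto-read-only flag (it would also clear on the first write)
    mdadm --readwrite /dev/md127

    # confirm the array state
    cat /proc/mdstat
    mdadm --detail /dev/md127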