RAID Problem after fresh install of OMV3

    • OMV 3.x


    • RAID Problem after fresh install of OMV3

      I have an Odroid XU4 NAS with 3 HDs. Two of them were in a RAID1 used as a backup for my pictures, and the third was mounted for BitTorrent and Plex. I forgot to mention that all the HDs are formatted EXT4.

      After the installation of the new OMV3, the RAID had to be rebuilt. The web GUI detected the two hard disks belonging to the RAID and started the rebuild procedure. It took all night, and today the RAID shows OK, but I can't see the RAID to write data to it.

      Here are the details of the RAID:

      cat /proc/mdstat


      Source Code

      root@odroid-jessie:~# cat /proc/mdstat
      Personalities : [raid1]
      md127 : active raid1 sdd[3] sdc[2](F) sda[0]
            976631360 blocks super 1.2 [2/2] [UU]
      unused devices: <none>



      blkid


      Source Code

      root@odroid-jessie:/# blkid
      /dev/mmcblk0p1: SEC_TYPE="msdos" LABEL="boot" UUID="96C3-9298" TYPE="vfat" PARTUUID="000f1766-01"
      /dev/mmcblk0p2: LABEL="GameStationTurbo" UUID="e139ce78-9841-40fe-8823-96a304a09859" TYPE="ext4" PARTUUID="000f1766-02"
      /dev/sda: UUID="0301e8e5-1ba3-7a22-e101-1fdf80d54c17" UUID_SUB="cfdc0805-38eb-2d0f-a785-4417563503d8" LABEL="odroid:BACKUP" TYPE="linux_raid_member"
      /dev/md127: UUID="ff389cf2-0cf3-47db-bdd0-d0000a9a6494" TYPE="ext4"
      /dev/sdb1: LABEL="HD3" UUID="db75e0b1-b05a-4038-acf5-8b570602b0c1" TYPE="ext4" PARTUUID="ff63fa79-7d41-4d12-b0cf-528aa71876ef"
      /dev/sdd: UUID="0301e8e5-1ba3-7a22-e101-1fdf80d54c17" UUID_SUB="1eaa485c-7a57-7258-bb51-111e74124dab" LABEL="odroid:BACKUP" TYPE="linux_raid_member"
      /dev/mmcblk0: PTUUID="000f1766" PTTYPE="dos"


      fdisk -l



      Source Code

      root@odroid-jessie:/# fdisk -l
      Disk /dev/mmcblk0: 7.3 GiB, 7818182656 bytes, 15269888 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x000f1766

      Device         Boot  Start      End  Sectors  Size Id Type
      /dev/mmcblk0p1        2048   147455   145408   71M  c W95 FAT32 (LBA)
      /dev/mmcblk0p2      147456 15269887 15122432  7.2G 83 Linux

      Disk /dev/mmcblk0boot1: 4 MiB, 4194304 bytes, 8192 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/mmcblk0boot0: 4 MiB, 4194304 bytes, 8192 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/sda: 931.5 GiB, 1000204877824 bytes, 1953525152 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/md127: 931.4 GiB, 1000070512640 bytes, 1953262720 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: gpt
      Disk identifier: 6EB81A86-ABF1-4506-ACEC-5A95D4334431

      Device     Start       End   Sectors   Size Type
      /dev/sdb1   2048 976773134 976771087 465.8G Linux filesystem

      Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes

      cat /etc/mdadm/mdadm.conf

      Source Code

      root@odroid-jessie:/# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/odroid:BACKUP metadata=1.2 name=odroid:BACKUP UUID=0301e8e5:1ba37a22:e1011fdf:80d54c17

      mdadm --detail --scan --verbose



      Source Code

      root@odroid-jessie:/# mdadm --detail --scan --verbose
      ARRAY /dev/md/odroid:BACKUP level=raid1 num-devices=2 metadata=1.2 name=odroid:BACKUP UUID=0301e8e5:1ba37a22:e1011fdf:80d54c17
         devices=/dev/sda,/dev/sdd

      Any suggestions on how to fully recover the RAID?


    • It probably isn't mounted. Try: mount -a
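
      If mount -a doesn't pick it up (for example because there is no fstab entry yet), the array can also be mounted by hand. A minimal sketch, assuming /dev/md127 carries the ext4 filesystem shown in your blkid output and /srv/backup is just an example mount point:


      Source Code

      # confirm the kernel has assembled the array
      cat /proc/mdstat

      # create a mount point and mount the filesystem that lives on the array
      mkdir -p /srv/backup
      mount /dev/md127 /srv/backup
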

      Just to note... I recommend against using RAID on USB hard drives, especially on ARM devices.
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Well, something went wrong with the RAID.

      I guess I just mounted one of the discs of the RAID and not the RAID itself.

      So this is the state of the RAID now:


      Source Code

      Version : 1.2
      Creation Time : Sat Mar 5 00:40:36 2016
      Raid Level : raid1
      Array Size : 976631360 (931.39 GiB 1000.07 GB)
      Used Dev Size : 976631360 (931.39 GiB 1000.07 GB)
      Raid Devices : 2
      Total Devices : 3
      Persistence : Superblock is persistent

      Update Time : Thu Oct 20 20:00:21 2016
      State : clean, degraded
      Active Devices : 1
      Working Devices : 1
      Failed Devices : 2
      Spare Devices : 0

      Name : odroid:BACKUP
      UUID : 0301e8e5:1ba37a22:e1011fdf:80d54c17
      Events : 8871

      Number   Major   Minor   RaidDevice   State
      0        8       0       0            active sync   /dev/sda
      2        0       0       2            removed
      2        8       32      -            faulty
      3        8       48      -            faulty
      I can mount one of the HDs, which is actually what caused the "degraded" status...
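
      For reference, a quick way to check whether the md device itself or just a bare member disk is mounted (a sketch; the device names are the ones from the output above):


      Source Code

      # show the block device tree: md127 should hang off both member disks
      lsblk

      # list mounted ext4 filesystems: /dev/md127 should appear here, not /dev/sda or /dev/sdd
      findmnt -t ext4
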

      Please help!!!
    • Well, after unmounting one of the HDs of the RAID, I ran blkid and saw that it was now showing up as /dev/sde.

      So I used mdadm /dev/md127 --add /dev/sde

      And now the RAID is resyncing itself again, something that will take around 10 hours.

      Here is the output, but I guess that at the end I will still not have my RAID mounted.

      So I need some help.


      Source Code

      blkid
      /dev/mmcblk0p1: SEC_TYPE="msdos" LABEL="boot" UUID="96C3-9298" TYPE="vfat" PARTUUID="000f1766-01"
      /dev/mmcblk0p2: LABEL="GameStationTurbo" UUID="e139ce78-9841-40fe-8823-96a304a09859" TYPE="ext4" PARTUUID="000f1766-02"
      /dev/sda: UUID="0301e8e5-1ba3-7a22-e101-1fdf80d54c17" UUID_SUB="cfdc0805-38eb-2d0f-a785-4417563503d8" LABEL="odroid:BACKUP" TYPE="linux_raid_member"
      /dev/md127: UUID="ff389cf2-0cf3-47db-bdd0-d0000a9a6494" TYPE="ext4"
      /dev/sde: UUID="0301e8e5-1ba3-7a22-e101-1fdf80d54c17" UUID_SUB="1eaa485c-7a57-7258-bb51-111e74124dab" LABEL="odroid:BACKUP" TYPE="linux_raid_member"
      /dev/sdf1: LABEL="HD3" UUID="db75e0b1-b05a-4038-acf5-8b570602b0c1" TYPE="ext4" PARTUUID="ff63fa79-7d41-4d12-b0cf-528aa71876ef"
      /dev/mmcblk0: PTUUID="000f1766" PTTYPE="dos"
      root@odroid-jessie:~# mdadm /dev/md127 --add /dev/sde
      mdadm: added /dev/sde
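
      While the resync runs, the progress and estimated finish time can be followed from the shell; a small sketch (the 60-second interval is just an example):


      Source Code

      # show resync progress and the estimated time to completion
      cat /proc/mdstat

      # or refresh the view every 60 seconds
      watch -n 60 cat /proc/mdstat

      # more detail, including the state of each member
      mdadm --detail /dev/md127
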
      Regards.
    • I have a question or two :-).

      Is there any data on the raided drives at the moment?

      Is there any data on the third drive?

      Are the drives the same size? (Obviously the two that were raided are, but is the third the same size or bigger?)

      Now,
      if there is no data on the drives, or you have a full backup:
      just do a full partition table dump and recreate it on both drives,
      and create a new RAID from scratch.
      OR
      if the data will fit on the third drive, move it there and do the whole RAID from scratch again.

      Now, if there is data on the drives and you have no spare:
      as we know, both drives are mirrors of each other.

      Mount one of them and make sure you can see and read the data OK.

      Unmount the second.
      Run fdisk on it: new partition table, new partition, etc.

      Create a new single-disk degraded RAID array.
      If I remember correctly: mdadm --create /dev/mdX -l raid1 -f -n 1 /dev/sda1 !! Change the mdX to whatever device you want it to be; this will give us a single-device RAID-1 array.

      Grab the UUID with "mdadm --detail /dev/md0 | grep UUID"
      and add it to "/etc/mdadm/mdadm.conf",
      or see if OMV finds it and shows it as a RAID device.

      If all is OK, expand the array to 2 devices: "mdadm --grow /dev/md0 -n 2".
      This will turn it into a degraded 2-device RAID.

      Mount it and copy the data onto the new RAID.

      When all the data is moved to the new RAID, unmount the old disk and do the whole fdisk bit on it,

      then add it to the new degraded array: "mdadm --manage /dev/mdX --add /dev/sdb1".

      Let it resilver and presto, you have a nice RAID 1 to work with.
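
      Put together, the steps above might look roughly like the sketch below. The device names (/dev/sdX1 for the freshly partitioned disk, /dev/sdY1 for the old data disk), the mount points and the mkfs step are assumptions; check everything against blkid before running anything:


      Source Code

      # 1. create a new single-disk RAID1 on the freshly partitioned disk
      mdadm --create /dev/md0 --level=raid1 --force --raid-devices=1 /dev/sdX1

      # 2. record the array so it assembles on boot
      mdadm --detail /dev/md0 | grep UUID
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf

      # 3. grow it to a degraded two-disk mirror
      mdadm --grow /dev/md0 --raid-devices=2

      # 4. put a filesystem on the new array and copy the data over from the old disk
      mkfs.ext4 /dev/md0
      mkdir -p /mnt/old /mnt/newraid
      mount /dev/sdY1 /mnt/old
      mount /dev/md0 /mnt/newraid
      rsync -a /mnt/old/ /mnt/newraid/

      # 5. once the copy is verified, repartition the old disk (fdisk) and add it
      umount /mnt/old
      mdadm --manage /dev/md0 --add /dev/sdY1

      # 6. watch it resilver
      cat /proc/mdstat
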

      here is the link to a how-to I found
      omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
      SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
      PSU: Silencer 760 Watt ATX Power Supply
      IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realteck + 1 Intel Pro Dual port PCI-e card
      OS on 2×120 SSD in RAID-1 |
      DATA: 3x3T| 4x2T | 2x1T
    • In the end I figured out what caused the mess, and guess who...

      ...it was me who almost destroyed the RAID.

      And the RAID was full of data (yes, I know, I am stupid).

      The problem started because my RAID has two 1 TB HDs, and I unplugged one just to be sure that if something went wrong while installing OMV3 I would still have the other disk with the data stored.

      When OMV3 came back to life I plugged in the second and the third HDs. I used mdadm --add to include the second one in the RAID, because the Manage RAID tab didn't allow me to do it, and I made the mistake of including /dev/sdd and /dev/sde, which didn't exist, so the RAID showed that there were 4 HDs when there are only two.

      I sorted it out with:

      mdadm -r failed
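
      (In full form that would presumably be something like the sketch below; /dev/md127 is the array name from the earlier output.)


      Source Code

      # drop every member currently marked as failed from the array
      mdadm /dev/md127 --remove failed

      # confirm that only the two real disks remain
      mdadm --detail /dev/md127
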

      And now I have the RAID resyncing, and I see the RAID ready to be mounted, which I will do tomorrow when the RAID is fully restored.

      Thank you very much for your help.