New 3.x build, old 2.x RAID1 - how

    • OMV 3.x
    • Resolved
    • New 3.x build, old 2.x RAID1 - how

      Had a 2.x build with a single 3TB disk plus a 2+2TB RAID 1 mirror for storage, plus an OS disk.
      Tried the CLI upgrade (no plugins installed); it still went badly and threw up a lot of errors, with the end result that there was no GUI and the CLI would not let me in.
      So I took the advice and went with a clean install.
      Pulled SATA cables from all 3 storage disks (marked them first)
      Reinstalled as 3.x, same hardware..all good.
      Added the single 3TB disk, created a filesystem (ext4), added a Shared Folder, an SMB Share and a User - all fine.
      Problem is with RAID1 disks when I reconnected them.
      I can see old RAID1 disks in Physical Disks page - they are plugged in identically as previous.
      I cannot see any disks (aka 'devices' in the GUI) to select on the RAID Management page when creating an array.
      I can see and select either of the two old RAID1 disks on the File Systems page - but of course creating a filesystem, say ext4, would wipe my data, according to the 'do you really want to format...' warning popup. Nope! Selecting any other filesystem in the drop-down gives the same warning.

      Is there a way to reinstate the array in the newly built 3.x environment please?
      Any assistance would be appreciated..
    • Thank you for the excellent pointer - it's been a while since I was on the forum and I was not specific enough in my search to find that.

      The resolution - based on the post linked above...

      At this point neither disk is selectable if I create a new array in Raid Management, as described above.

      root@fatnas:~# blkid <-- ran this command and I can see my two RAID1 mirror disks and the raid disk md127

      /dev/sda: UUID="3ef9e0b5-ebc7-53ff-783d-8f322d87e167" UUID_SUB="aab86480-2968-8f70-e0d6-70809717da46" LABEL="FatNAS:RAIDA" TYPE="linux_raid_member"
      /dev/sdb1: LABEL="3TB" UUID="05dd31f9-321b-4ebf-b088-9928defdedbb" TYPE="ext4" PARTUUID="51459904-895e-4977-bc40-ec923f4e43ed"
      /dev/sdd: UUID="3ef9e0b5-ebc7-53ff-783d-8f322d87e167" UUID_SUB="1c12974b-5ea8-42ab-0138-528ee61aa98a" LABEL="FatNAS:RAIDA" TYPE="linux_raid_member"
      /dev/sdc1: UUID="dc2abf96-23f1-4740-9b01-e41424802764" TYPE="ext4" PARTUUID="dd1ec230-01"
      /dev/sdc5: UUID="fb046a99-5036-4766-8d80-58e043a9a3c0" TYPE="swap" PARTUUID="dd1ec230-05"
      /dev/md127: LABEL="raida" UUID="ffd58de6-df75-46e4-9d88-5474a2c37494" TYPE="ext4"
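      As a side note, the RAID members can be picked out of blkid output by their TYPE. A minimal sketch below, using sample lines based on the output above (UUIDs trimmed for brevity); on a live system you could simply run `blkid -t TYPE=linux_raid_member` instead.

```shell
# Sample blkid output (trimmed copies of the lines above, not live data).
blkid_output='/dev/sda: UUID="3ef9e0b5" LABEL="FatNAS:RAIDA" TYPE="linux_raid_member"
/dev/sdb1: LABEL="3TB" UUID="05dd31f9" TYPE="ext4"
/dev/sdd: UUID="1c12974b" LABEL="FatNAS:RAIDA" TYPE="linux_raid_member"'

# Keep only the RAID member lines and print just the device names.
printf '%s\n' "$blkid_output" | grep 'linux_raid_member' | cut -d: -f1
# → /dev/sda
# → /dev/sdd
```

      Both members carry the same array UUID and LABEL ("FatNAS:RAIDA"), which is how mdadm knows they belong together.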

      root@fatnas:~# cat /proc/mdstat <-- and confirmed the array status
      Personalities : [raid1]
      md127 : active (auto-read-only) raid1 sda[0] sdd[1]
      1953383360 blocks super 1.2 [2/2] [UU]

      unused devices: <none>
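      The key thing in that mdstat output is the `[UU]` marker: both halves of the mirror are up. A degraded mirror would show `[U_]` or `[_U]` instead. A quick check could be sketched like this (sample text copied from the output above; on a live system read /proc/mdstat directly, or use `mdadm --detail /dev/md127` for the full picture):

```shell
# Sample /proc/mdstat status lines (copied from the post, not live data).
mdstat='md127 : active (auto-read-only) raid1 sda[0] sdd[1]
      1953383360 blocks super 1.2 [2/2] [UU]'

# [UU] means both RAID1 members are present and in sync.
if printf '%s\n' "$mdstat" | grep -q '\[UU\]'; then
    echo "md127: both mirror members are up"
else
    echo "md127: degraded - investigate before writing data"
fi
# → md127: both mirror members are up
```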

      root@fatnas:~# mdadm --assemble /dev/md127 /dev/sdd --verbose --force
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdd is busy - skipping <-- can't do this because the disk is already part of md127

      root@fatnas:~# mdadm --assemble /dev/md127 /dev/sda --verbose --force
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is busy - skipping <-- ditto for the other disk

      root@fatnas:~# mdadm --stop /dev/md127 <-- so stop it
      mdadm: stopped /dev/md127

      Now let's put Humpty back together again, including both disks in the command... :)

      root@fatnas:~# mdadm --assemble /dev/md127 /dev/sda /dev/sdd --verbose --force
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 1.
      mdadm: added /dev/sdd to /dev/md127 as 1
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 2 drives.
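      One optional follow-up, as a sketch (untested here, and note OMV may manage mdadm.conf itself, so check whether the array already appears there first): on plain Debian the usual way to make a freshly assembled array persist under the same name across reboots is to record it in mdadm.conf and rebuild the initramfs.

```shell
# Append the current array definition to mdadm's config
# (path is the Debian/OMV default; back it up first).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled early at boot.
update-initramfs -u
```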

      At this stage I navigated to RAID Management in the GUI and the array is there! Woohoo - clean, with both disks showing.
      Went to File Systems; /dev/md127 is now showing but unmounted, so I mounted it.
      Added a shared folder with the desired name.
      Added the SMB/CIFS share, etc.
      All good and accessible with read/write access, etc.

    • draggaj wrote:

      thank you for the excellent pointer
      I'm pleased to have been able to help you :)
      And thanks also for your detailed description of how you solved the issue - it could be valuable for others with the same problem in the future.
      Unfortunately, in other threads I often read only short answers like "solved", or get no response at all.
      OMV 3.0.78 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304