Raid 5 Multiple Partitions?

    • I am trying to set up OMV 2.2.4 on my HP N40L MicroServer - I have decided to migrate from Windows Home Server 2011 :)

      All seems to have gone OK as regards the initial install. I have a separate Samsung SSD for the system drive and 4 x 2TB Samsung drives for the data.
      I am trying to create a RAID 5 with these 4 drives. The disks have been previously wiped.

      However, I am confused as to what is actually happening: when I create the RAID, the process starts and says it will finish in around 400 minutes.
      The display shows the following:

      Source Code

      Name     Device      State   Level   Capacity    Devices
      NAS03:0  /dev/md0p3  false   Raid 5  250.87 GiB  /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd
      NAS03:0  /dev/md0p2  false   Raid 5  790.74 GiB  /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd
      NAS03:0  /dev/md0p1  false   Raid 5  1.84 TiB    /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd
      NAS03:0  /dev/md0    *****   Raid 5  5.46 TiB    /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd

      ***** = active, resyncing (5.9% (118208684/1953382912) finish=409.7min speed=67102K/sec)

      Having reviewed the documentation, I am confused as to why it appears to be setting up 4 devices (or partitions?) - I was just expecting one device. ?(

      Is what I am seeing normal and to be expected?

      Thanks for any help/suggestions :)
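
      (For reference, one way to see what those md0p1/p2/p3 entries actually are is to look at the kernel's own view from a shell. This is only a quick inspection sketch, assuming the device names shown above; the OMV web UI is reading the same information.)

      Source Code

      # Overall array status and resync progress
      cat /proc/mdstat

      # Block devices, sizes and any partitions detected inside /dev/md0
      lsblk -o NAME,SIZE,TYPE,FSTYPE

      # Detailed view of the array itself
      mdadm --detail /dev/md0

      # RAID metadata that mdadm sees on one of the member disks
      mdadm --examine /dev/sda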
    • That is very odd. OMV doesn't create partitions when creating RAID arrays. Did you have a partitioned RAID array on these disks previously?
    • ryecoaaron wrote:

      That is very odd. OMV doesn't create partitions when creating RAID arrays. Did you have a partitioned RAID array on these disks previously?

      No, I didn't. The disks were used in a Windows Home Server 2011 system.
      I used Diskpart on Windows to wipe them with the Clean command, so I have no idea why it appears to be creating these partitions.

      Maybe I need to clean the disks back to their initial state using Linux commands - not sure what they would be though?
      I have previously used the Wipe command from within OMV.
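
      (For reference, the usual Linux commands for clearing old partition tables and RAID metadata are wipefs and mdadm. The sketch below assumes the four data disks are /dev/sda-/dev/sdd as shown earlier and that no array is currently using them - these commands are destructive, so double-check the device names first.)

      Source Code

      # Stop the array first if one is still assembled (array name taken from the posts above)
      mdadm --stop /dev/md0

      # Remove filesystem, partition-table and RAID signatures from each disk
      wipefs --all /dev/sda /dev/sdb /dev/sdc /dev/sdd

      # Clear any leftover mdadm superblocks
      mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd

      # Optionally zero the start of each disk as well
      dd if=/dev/zero of=/dev/sda bs=1M count=100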
    • OK - sorry for the delay - I "lost" my root password (so couldn't log in for a command line), had to do a re-install, and then got hit by the dirtymodules.json bug (all sorted now)!

      So the output is:

      Source Code

      cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sdd[3] sdc[2] sdb[1] sda[0]
            5860148736 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
            [>....................]  resync =  1.6% (31824576/1953382912) finish=392.3min speed=81616K/sec

      unused devices: <none>


      The RAID resync seems to have restarted of its own accord.
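
      (The long finish= estimate is just the initial resync, which is normal after creating a RAID 5 array on disks this size. A small sketch of how to keep an eye on it from the shell, nothing OMV-specific:)

      Source Code

      # Refresh the resync progress every five seconds
      watch -n 5 cat /proc/mdstat

      # The kernel throttles resync speed between these two limits (in KiB/s)
      cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max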
    • I wasn't entirely happy having this strange situation and so I changed tack and cancelled the RAID create.
      I re-wiped the disks, but instead of just doing a quick wipe I did a secure wipe of about 30 GB on each disk before cancelling the wipe (I hoped that wiping the first 30 GB would be enough to get rid of any strange data on the disks).
      I then restarted from square one and re-installed OMV.
      This time, when I came to create the RAID, it looked correct, i.e. no additional partitions listed.
      I left it to run overnight and the RAID was complete by this morning showing a state of Clean. ^^

      So I think for my next build (on an HP N54L with 4 x 3TB drives) I will do a complete secure wipe of the disks :)

      Thanks again for the help and suggestions :thumbup:
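
      (If the plan is a full secure wipe before the next build, one option from a Linux shell is shred - shown here only as a sketch, assuming the disk to wipe is /dev/sda. A full pass over a 3 TB drive takes many hours; if the goal is just to clear old metadata, a signature wipe as in the earlier sketch is much quicker.)

      Source Code

      # One random overwrite pass followed by a final pass of zeros, with progress output
      shred -v -n 1 -z /dev/sda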