RAID 5 Multiple Partitions?

  • I am trying to set up OMV 2.2.4 on my HP N40L MicroServer - I have decided to migrate from Windows Home Server 2011 :)


    The initial install seems to have gone OK. I have a separate Samsung SSD for the system drive and 4 x 2TB Samsung drives for the data.
    I am trying to create a RAID 5 with these 4 drives; the disks have been previously wiped.


    However, I am confused about what is actually happening: when I create the RAID, the process starts and says it will finish in around 400 minutes.
    The display shows the following:

    Code
    Name     Device       State    Level     Capacity      Devices
    NAS03:0  /dev/md0p3   false    Raid 5    250.87 GiB    /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd
    NAS03:0  /dev/md0p2   false    Raid 5    790.74 GiB    /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd
    NAS03:0  /dev/md0p1   false    Raid 5    1.84 TiB      /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd
    NAS03:0  /dev/md0     *****    Raid 5    5.46 TiB      /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd

    ***** = 'active, resyncing (5.9% (118208684/1953382912) finish=409.7min speed=67102K/sec)'


    Having reviewed the documentation, I am confused as to why it appears to be setting up four devices (or partitions?) - I was expecting just one device. ?(


    Is what I am seeing normal and to be expected?


    Thanks for any help/suggestions :)

    • Official Post

    That is very odd. OMV doesn't create partitions when creating raid arrays. Did you have a partitioned raid array on it previously?
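    To check what is actually on the disks, something like the following should reveal any leftover signatures (/dev/sda is just an example - substitute your own devices and repeat for each data disk):

    Code
    # Show the disk and any partitions the kernel currently sees
    lsblk /dev/sda
    # Dry run: list filesystem/RAID signatures without erasing anything
    wipefs -n /dev/sda
    # Look for an old mdadm superblock from a previous array
    mdadm --examine /dev/sda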

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • That is very odd. OMV doesn't create partitions when creating raid arrays. Did you have a partitioned raid array on it previously?


    No I didn't. The disks were used in a Windows Home Server 2011 system.
    I used Diskpart on Windows to wipe them with the Clean command, so I have no idea why it appears to be creating these partitions.


    Maybe I need to clean the disks back to their initial state using Linux commands - I'm not sure what those would be, though?
    I have previously used the Wipe command from within OMV.
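
    A minimal sketch of that kind of cleanup, assuming /dev/sda is one of the data disks (repeat for each of sdb, sdc and sdd; both commands are destructive):

    Code
    # DESTRUCTIVE: erase all known filesystem, RAID and partition-table signatures
    wipefs -a /dev/sda
    # DESTRUCTIVE: also zero the first 16 MiB, where partition tables and md superblocks live
    dd if=/dev/zero of=/dev/sda bs=1M count=16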

    • Official Post

    Post the output of: cat /proc/mdstat


  • OK - sorry for the delay - I "lost" my root password (so couldn't log in for a command line), had to do a re-install, and then got hit by the dirtymodules.json bug (all sorted now)!


    So the output is:


    Code
    cat /proc/mdstat

    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdd[3] sdc[2] sdb[1] sda[0]
          5860148736 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          [>....................] resync = 1.6% (31824576/1953382912) finish=392.3min speed=81616K/sec

    unused devices: <none>

    The RAID resync seems to have restarted of its own accord.
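
    To keep an eye on it from the shell, these standard commands show progress without touching the array (md0 as in the output above):

    Code
    # Refresh the resync progress every five seconds
    watch -n 5 cat /proc/mdstat
    # Show full array details, including the resync status
    mdadm --detail /dev/md0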

    • Official Post

    Everything looks fine there. I would let it finish and you should be OK.


  • I wasn't entirely happy having this strange situation, so I changed tack and cancelled the RAID creation.
    I re-wiped the disks, but instead of just doing a quick wipe, I did a secure wipe of about 30GB on each disk before cancelling the wipe (I hoped that wiping 30GB would be enough to get rid of any strange data on the disks).
    I then restarted from square one and re-installed OMV.
    This time when I came to create the RAID, it seemed to be correct, i.e. no additional partitions listed.
    I left it to run overnight and the RAID was complete by this morning, showing a state of Clean. ^^


    So I think for my next build (on an HP N54L with 4 x 3TB drives) I will do a complete secure wipe of the disks :)
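
    If a full secure wipe of 3TB drives takes too long, zeroing just the start and end of each disk should be enough to clear old metadata, since partition tables and md superblocks live in those regions. A sketch, again with /dev/sda as an example:

    Code
    # DESTRUCTIVE: zero the first 16 MiB (partition table, md 1.2 superblock)
    dd if=/dev/zero of=/dev/sda bs=1M count=16
    # DESTRUCTIVE: zero the last 16 MiB (backup GPT, older md superblock formats);
    # blockdev --getsz reports the size in 512-byte sectors, /2048 converts to MiB
    dd if=/dev/zero of=/dev/sda bs=1M count=16 seek=$(( $(blockdev --getsz /dev/sda) / 2048 - 16 ))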


    Thanks again for the help and suggestions :thumbup:

  • Thanks for the info in this thread. I had the same issue with two disks: one of the disks was brand new, the other was from an ESXi installation.


    It created 5 partitions when I was creating a RAID 1 setup on two disks.


    So I did a secure wipe of 40GB on both disks, and then a quick wipe on both.


    Now when I create a new RAID, it only creates one LUN without partitions.


    Br
    Patric

    • Official Post

    Now when I create a new RAID, it only creates one LUN without partitions.

    That's correct: the RAID function in OMV uses the 'whole' disk, not partitions. It is possible to build an array from partitions, but you would have to use GParted to create the partitions and then use the CLI to create the array(s) from them. I know this works because I have just set up a test OMV5 on a laptop, booting from a USB flash drive and using the internal drive to test a RAID 5, as sketched below.
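
    For anyone who wants to try that route, a rough sketch of the CLI side, assuming the partitions (here sda1-sda3; the names are examples only) were already created with GParted:

    Code
    # Build a RAID 5 array from partitions instead of whole disks
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sda2 /dev/sda3
    # Record the array so it is assembled automatically at boot (Debian/OMV path)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u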

  • Yes, that's how I would expect it to work.


    I have not seen any RAID hardware or software set up partitions before (especially 5 of them with random sizes).


