Creating RAID with partitions, not whole disks

  • Hello,


    I was wondering if it's possible to create a RAID from the GUI with partitions instead of whole disks (devices)? I have partitioned the disk manually with fdisk, but when trying to create a RAID via the web GUI, all I can select are the entire disks, and partitions do not show up. Is this possible to do?



    Best regards,
    Denis

    • Official Post

    OK so it seems like I have to do it manually then :)

    Well, on the other side of the coin, it might not be possible to do it manually either. (I'm fairly sure it's not possible.) The general purposes of RAID are/were: 1) disk pooling, and 2) increasing availability. It does the first well enough and the latter passably, depending on the use case. (I'm not a fan of RAID. I prefer separate, fully independent backup instead.)


    Striping "partitions", a sub-section of a drive, with sub-sections of other drives, doesn't accomplish any clear purpose. It might also create file system access conflicts.


    On the command line, I've worked with mdadm before (which establishes software RAID arrays). mdadm works with "block devices", i.e., entire disks. mdadm might work with pseudo devices, but that's for test purposes, not data storage.


    Not being sure what your intent is, maybe you should take a look at SnapRAID.

    • Official Post

    You have to manually create the array and the filesystem. Then you can mount the filesystem in the OMV web interface. Why would you want to use partitions though?
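    As a sketch of that manual route, the commands might look like the following. Device names (/dev/sdb, /dev/sdc) and the ext4 choice are placeholder assumptions; check yours with lsblk first.

    ```shell
    # Create a RAID 1 array from two whole disks (placeholder device names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Put a filesystem on the new array
    mkfs.ext4 /dev/md0

    # Record the array so it assembles on boot (Debian/OMV layout assumed)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    ```

    After this, the filesystem on /dev/md0 should show up in the OMV web interface for mounting.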

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official Post

    You have to manually create the array and the filesystem.

    The implied usage would be: 1 partition on a disk is part of an array, while the remainder (either unused or a 2nd file system partition) is not part of the array. Manually created or not, I'm amazed that this is possible.

    • Official Post

    The implied usage would be: 1 partition on a disk is part of an array, while the remainder (either unused or a 2nd file system partition) is not part of the array. Manually created or not, I'm amazed that this is possible.

    Yep. The fact that OMV can use this setup is handy when taking disks from a commercial nas and mounting them (temporarily :) ) in an OMV box.


    • Official Post

    Yep. The fact that OMV can use this setup is handy when taking disks from a commercial nas and mounting them (temporarily :) ) in an OMV box.

    Not to get off topic:
    With what would be "shared" access to a disk, performance would have to suffer. (But a temp recovery operation is not everyday use.) mdadm must have some sizeable buffers and / or wait states built in, to maintain array sync under those conditions.


    We learn something everyday. :)

  • The implied usage would be: 1 partition on a disk is part of an array, while the remainder (either unused or a 2nd file system partition) is not part of the array. Manually created or not, I'm amazed that this is possible.

    Why are you amazed?
    I have done this setup before, not with OMV, but it works.
    mdadm can use partitions as devices.
    You cannot do this with hardware RAID, but with software RAID it is possible.
    I used this before to have a RAID-1 bootable OS drive setup.


    You would have a layout like:


    Drive 1 -> /dev/sda
    /dev/sda1 -- /boot -- no RAID
    /dev/sda2 -- main partition for OS -- md0
    /dev/sda3 -- swap -- md1



    The second drive is laid out the same.
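    A sketch of creating those arrays from partitions (partition numbers are illustrative, assuming /boot sits on the first partition of each disk):

    ```shell
    # RAID 1 for the OS, built from a partition on each disk
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # RAID 1 for swap, likewise from one partition per disk
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mkswap /dev/md1
    ```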


    In fact, the only problem I had was making sure to install GRUB on both drives, as the normal Debian installer (and most other distros') only puts GRUB on one disk; you have to manually force it onto the second drive after install.
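    Forcing the bootloader onto the second drive is just a couple of commands on Debian (device names assumed):

    ```shell
    # Install GRUB on BOTH drives so the machine still boots if either fails
    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub
    ```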


    Also, it is not usually recommended to put swap on an mdadm RAID, but I have found that if you do not, and the disk holding the swap partition fails, you are toast.

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realteck + 1 Intel Pro Dual port PCI-e card
    OS on 2×120 SSD in RAID-1 |
    DATA: 3x3T| 4x2T | 2x1T

    • Official Post

    Why are you amazed? I have done this setup before, not with OMV, but it works.

    Well, when I was working with RAID back in the day, it was with hardware controllers. We had a discrete boot / OS disk (no RAID) and a RAID array strictly for data. With the controller models we were using, mixing RAID with file systems, using partitions, wasn't possible. It was a whole disk or not at all. Hence my assumption was that software RAID would closely follow the hardware versions that I worked with.
    ((Back in the day, using a controller was desired because disk handling and parity calculation was off loaded from a decidedly slower CPU. That's hardly a consideration anymore with CPU power continuing to climb. In that respect, a software approach to RAID in modern times makes sense.))


    Why amazed? RAID implementations (other than 1) perform parallel reads and writes across a few to several disks. Depending on the hardware, disks, etc., this can produce a sizable I/O increase.
    For a RAID setup to work alongside a file system partition on the same disk, concurrent reads/writes to the file system partition and to the RAID partition would reduce performance. A performance hit of some degree, on that particular disk, would be a given. Since RAID must synchronize parallel reads and writes between disks, the slowest disk sets the performance level for the entire array. This issue alone, the performance hit, would tend to negate one of the more positive reasons for doing RAID: faster disk I/O. In this respect, it seems, mdadm must be able to manage large disparities.


    I have to say that, seeing RAID mixed with file system partitions on the same disk, I have a lot more respect for mdadm's capabilities. As noted before, it must have the ability to insert wait states, to keep I/O streams in sync, and control of dynamically sized buffers. That's not a small accomplishment.

  • @flmaxey - while you are correct that this setup might decrease performance, I'm building a home NAS solution with disks I have lying around. A small performance hit will not bother me much. I'm not building a production RAID system :)


    Hardware RAID must use whole disks for arrays because that's the way it was built. With software RAID we can do all sorts of combinations, and I see that as the real power of soft RAID.


    Let me explain what I'm trying to achieve: I have 1 x 3TB disk, 1 x 500GB, and 1 x 350GB. In my home NAS, I want to have a couple of storage areas: a simple one for media (movies, TV shows, etc.) and a RAID-backed one for more important stuff like documents, pictures, ... To do that with whole disks, I would have to buy more disks and possibly throw some out.


    With the disks I have, I will partition the 3TB disk into 3 partitions: one 350GB, one 500GB, and one with the rest of the space. Using mdadm I will create a RAID 1 array from the 350GB disk and the 350GB partition on the 3TB disk. The same will be done for the 500GB partition/disk. Now I have two RAID 1 arrays that I can use to keep important stuff, and I can use the 2.2TB of space left to store stuff I can afford to lose.
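    Concretely, that plan might look like the following. The device letters (3TB at /dev/sda, 350GB at /dev/sdb, 500GB at /dev/sdc) and partition boundaries are guesses; check with lsblk before running anything like this.

    ```shell
    # Partition the 3TB disk (GPT): 350G + 500G + remainder for media
    parted -s /dev/sda mklabel gpt \
        mkpart raid350 1MiB 350GiB \
        mkpart raid500 350GiB 850GiB \
        mkpart media 850GiB 100%

    # RAID 1: the whole 350GB disk mirrored with the 350GB partition.
    # mdadm sizes the array to the smaller of the two members.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sda1

    # RAID 1: the whole 500GB disk mirrored with the 500GB partition
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sda2
    ```

    /dev/sda3 then stays outside any array for the media files.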


    So back to the topic: I am now sure this cannot be done through the OMV web interface, so I created all the md arrays manually. Once they are created manually, OMV recognises them, and I can then use the md devices to do everything else through the GUI.

    • Official Post

    Let me explain what I'm trying to achieve: I have 1 x 3TB disk, 1 x 500GB, and 1 x 350GB. In my home NAS, I want to have a couple of storage areas: a simple one for media (movies, TV shows, etc.) and a RAID-backed one for more important stuff like documents, pictures, ... To do that with whole disks, I would have to buy more disks and possibly throw some out.
    With the disks I have, I will partition the 3TB disk into 3 partitions: one 350GB, one 500GB, and one with the rest of the space. Using mdadm I will create a RAID 1 array from the 350GB disk and the 350GB partition on the 3TB disk. The same will be done for the 500GB partition/disk. Now I have two RAID 1 arrays that I can use to keep important stuff, and I can use the 2.2TB of space left to store stuff I can afford to lose.

    Don't misunderstand me, rush131. While I couldn't envision it, what you laid out above makes sense. And if you can do it, why not? :thumbup:


    Doing things the traditional way, with a RAID5 pool across whole disks, would mean losing everything above 350GB on each larger drive. With a 3TB drive in the mix, that would be quite a loss. And you're right in that performance is not always of paramount concern on the home front. ((For my own edification, the forum guys set me straight, so I learned something from this exchange. mdadm is much more robust than I imagined.))


    And I have to admit, software RAID seems to have serious advantages over some of the old hardware implementations. Differences in drive sizes, cache sizes, spindle speeds, and other factors were, back in the day, unheard of.


    On the other hand, what you'll be doing is not without real risks. While I'm speculating here, you're creating a RAID array with drives of vastly different sizes, which suggests that these drives are of significantly different ages. In itself, this can be problematic. RAID has a tendency to push the oldest, weakest drive into failure, which can lead to losing the entire array.


    Along other lines:
    While I'm not a RAID expert by any stretch of the imagination, I can say this with authority: RAID is not backup. Let me repeat that, RAID is NOT backup. With that in mind, if I were you, I would expect the array to fail, so I wouldn't put anything on it that I wanted to keep without solid backup. ((And note that RAID arrays can fail without actual hard drive failures. It happens. This is another reason why I avoid using it.))


    In looking at possible alternatives:
    You have enough disks that you could use them in a JBOD arrangement and, with Rsync, synchronize folders between drives, which would give you 2 or even 3 separate copies of important files. The rest, the files that don't matter, could be housed on one of your drives without redundancy. If drive pooling is what you're after, there are other options like Union FS where, if a single drive fails, the other drives in the pool are completely unaffected. Mixing new drives with older drives, in these scenarios, would have much less impact. If data preservation is your top priority, give redundancy some thought.
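    For the Rsync idea, a scheduled job along these lines would keep a second copy of the important folders on another disk (the mount paths are hypothetical; substitute your own shared folder paths):

    ```shell
    # Mirror the important share from one data disk to another.
    # --archive preserves permissions, timestamps, and subdirectories;
    # --delete makes the destination an exact mirror of the source
    # (drop it if you want deleted files to survive on the copy).
    rsync --archive --delete /srv/disk1/documents/ /srv/disk2/backup/documents/
    ```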


    With all that said, I like to experiment too. I dream up scenarios and see if I can make them happen, if it's possible. (That led me to OMV, BTW. I wanted something better than Windows Server, so I started looking, experimenting with distros, and I found it.)
    So, like you, I also experiment on a budget. :D


    In any case, let me welcome you to the forum.
    If you're looking for a great NAS, you're in the right place.
